Prosecution Insights
Last updated: April 19, 2026
Application No. 17/711,569

Neural Network Representation Formats

Final Rejection — §101, §103, §112
Filed: Apr 01, 2022
Examiner: HICKS, AUSTIN JAMES
Art Unit: 2142
Tech Center: 2100 — Computer Architecture & Software
Assignee: Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
OA Round: 4 (Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (above average; 308 granted / 403 resolved; +21.4% vs TC avg)
Interview Lift: +25.1% (resolved cases with interview; strong lift)
Typical Timeline: 3y 4m average prosecution; 54 currently pending
Career History: 457 total applications across all art units

Statute-Specific Performance

§101: 13.9% (-26.1% vs TC avg)
§103: 46.3% (+6.3% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§112: 19.2% (-20.8% vs TC avg)
Deltas shown against Tech Center average estimates • Based on career data from 403 resolved cases

Office Action

§101 §103 §112
Response to Arguments

Applicant's arguments filed 11/7/2025 have been fully considered but they are not persuasive.

Applicant argues the claims “relate to efficient compression of a neural network - a task which is not fit to be practically performed in the human mind.” Remarks 8. Applicant claims “[encoding] a representation of a neural network into a data stream, so that the data stream is structured into one or more individually accessible portions, each portion representing a corresponding neural network layer of the neural network, wherein the apparatus is configured to provide the data stream with, for each of one or more predetermined individually accessible portions, a pointer pointing to a beginning of the respective predetermined individually accessible portion.” Claim 2. The broadest reasonable interpretation includes small networks that are broken up into individually accessible portions. The claimed method isn't necessarily efficient either. Further, even if this claim were amended to claim millions of weights, the abstract idea would still be a mathematical relationship because encoding and quantizing are mathematical operations. Applicant's claims include an abstract idea.

Applicant argues, “More specifically, these weights, biases and further parameter that characterize each connection between two of the potentially very large number of neurons (up to tens of millions) in each layer (up to hundreds) of the NN occupy the major portion of the data associated to a particular NN. [...] When applications involve frequent transmission/updates of the involved NNs, the data rate that may be used becomes a serious bottle neck. … the coding of parameters of NNs, aimed at practical applications such as audio and/or video processing, does not constitute a mental concept capable of being practically implementable in the human mind.” Remarks 8-9. Applicant does not claim millions of neurons and hundreds of layers.
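The examiner's breadth point can be made concrete with a hypothetical sketch (not from the record; all names and values are invented for illustration): under the broadest reasonable interpretation, "encoding a representation of a neural network into a data stream" covers a network small enough that its parameters fit in a comma-separated string.

```python
# Hypothetical illustration only: a "neural network" with a handful of
# parameters, "encoded" into a data stream as a comma-separated string.
# Nothing in the claim language, as the examiner reads it, excludes a
# network this small.
weights = [0.5, -1.25, 2.0, 0.75]                # parameters of a tiny layer
stream = ",".join(str(w) for w in weights)       # "encode" into a data stream
decoded = [float(s) for s in stream.split(",")]  # "decode" it back
assert decoded == weights                        # round-trips losslessly
```

At this scale the encoding is plainly something a person could do with pen and paper, which is the force of the examiner's mental-process characterization.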
Applicant argues, “Minezawa is silent about any structuring of the data stream, let alone structuring it to contain individually accessible portions. Further, Minezawa is silent about the use of a pointer for accessing such individually accessible portions of the data stream. That is, there is no suggestion or hint provided in Minezawa which would compel the person skilled in the art to devise the claimed apparatus with the pointer which establishes a mapping between the neural network layers and the individually accessible portions of the data stream.” Remarks 12. Applicant claims “a representation of a neural network into a data stream, so that the data stream is structured into one or more individually accessible portions, each portion representing a corresponding neural network layer of the neural network, wherein the apparatus is configured to provide the data stream with, for each of one or more predetermined individually accessible portions, a pointer pointing to a beginning of the respective predetermined individually accessible portion.” Claim 2. The broadest reasonable interpretation includes a data stream with at least two separate blocks, with a pointer to each block. Minezawa teaches a data stream with network configuration information and quantization information, both of which are encoded.[1] Minezawa doesn't teach the pointer; however, Niccolo fills that gap.[2]

Applicant argues, “The pointer of Niccolo is a memory-level construct specifically aimed at memory management, while the pointer of the claimed subject-matter is a logical position indicator within a sequential data source, such as the claimed data stream, for navigation within a sequence of bits or bytes. That is, Niccolo's pointers point to a concrete memory address in a RAM of a computing device while the claimed pointer points to a position, such as by way of an offset, in the data stream.” Remarks 13.
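The construction the applicant urges can be sketched hypothetically (this example is not from either party's papers; the layout and payloads are invented): a serialized stream whose header holds one byte offset per layer portion, each "pointing to the beginning" of an individually accessible portion. Such a pointer is a logical position in the stream, not a RAM address.

```python
import struct

# Hypothetical per-layer payloads (e.g., entropy-coded layer parameters).
layers = [b"layer0-params", b"layer1-parameters", b"l2"]

# Header: one 4-byte little-endian offset per layer, each giving the byte
# position where that layer's individually accessible portion begins.
header_size = 4 * len(layers)
offsets, pos = [], header_size
for payload in layers:
    offsets.append(pos)
    pos += len(payload)
stream = b"".join(struct.pack("<I", o) for o in offsets) + b"".join(layers)

# Random access: jump straight to layer 1 without touching layer 0.
start = struct.unpack_from("<I", stream, 4 * 1)[0]
end = struct.unpack_from("<I", stream, 4 * 2)[0]
assert stream[start:end] == b"layer1-parameters"
```

The examiner's rebuttal, in effect, is that the claim text "a pointer pointing to a beginning of the respective ... portion" does not recite this offset-based construction and so reads on pointers generally.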
Applicant claims “a pointer pointing to a beginning of the respective predetermined individually accessible portion.” Claim 2. Nothing about that claim requires pointing to a position, by way of an offset, in a data stream. Niccolo[3] teaches data stream pointers, but so do a lot[4] of[5] other[6] references. This claim element is no different from the prior art. Therefore, Niccolo teaches the claimed element.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 2-5 and 21-26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to the abstract idea of putting data in a data format without significantly more. The claims recite the mental concept and mathematical relationship of putting data on a data stream and taking data off a data stream – this could be as simple as writing the parameters of a neural network into a comma-separated string and giving the string to another person to do the math. This judicial exception is not integrated into a practical application because Applicant has removed any claim elements that improve the functioning of a computer or integrate the abstract idea into a technical field. The new additional element of picture and video processing merely links the abstract idea to picture and video processing. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the computer and computer-readable media are generic computer parts.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 23 and 25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Applicant claims “using arithmetic encoding with newly initializing the arithmetic encoding at the beginning of each of the one or more predetermined individually accessible portions.” Claim 23. It is unclear what “encoding with newly initializing the arithmetic encoding” means. Examiner will interpret the claim to mean encoding each of the one or more portions separately.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-5, 21-26 are rejected under 35 U.S.C. 103 as being unpatentable over US20200184318A1 to Minezawa et al. and https://cplusplus.com/forum/beginner/261042/#:~:text=If%20you%20need%20the%20calling,A%20*%20p%20=%20new%20A(); comment by Niccolo.

Minezawa teaches claim 2. (Currently Amended) An apparatus comprising: a processor; and (Minezawa fig.
3a) a memory coupled to the processor and storing instructions that, when executed by the processor, cause the apparatus to: (Minezawa fig. 3b)

encode quantization indices of neural network parameters, which represent a neural network, into a data stream; (Minezawa abs “encodes network configuration information including parameter data which is quantized…”)

quantize the neural network parameters differently for different portions of the neural network based on a predetermined criterion; and (Minezawa para 44 “quantization information is information that defines quantization steps which are used when the parameter data of the neural network is quantized.” Minezawa para 165 “controlling unit 102 defines layer_quant_step[i-2] as a quantization step which is independent on a per-layer basis, layer_quant_step[i-2] may be defined as a difference value to a quantization step for an immediately previous layer (ith layer).” The criterion is the layer count i.)

generate the data stream including, for each of the neural network portions, a (Minezawa para 91 “quantization information encoded by the encoding unit 103 is outputted to the data processing device 200.” The output is the generated data stream. Minezawa para 94 “the data processing unit 202 calculates edge weight information which is inversely quantized using the quantization information and network configuration information decoded from the compressed data…” The inverse quantization is dequantization. The quantization information is the reconstruction rule; see also para 44 “quantization information is information that defines quantization steps which are used when the parameter data of the neural network is quantized.”)

wherein the apparatus is configured to encode a representation of a neural network into a data stream, so that the data stream is structured into one or more individually accessible portions, each portion representing a corresponding neural network layer of the neural network, wherein the apparatus is configured to provide the data stream with, for each of one or more predetermined individually accessible portions, (Minezawa para 91 “Compressed data of the above-described network configuration information and quantization information encoded by the encoding unit 103 is outputted to the data processing device 200.” The network config information is the portion of the stream that encodes a representation of the NN into a portion of the data stream.)

Minezawa doesn't teach how to use pointers. However, Niccolo teaches a pointer pointing to a beginning of the respective predetermined individually accessible portion. (Niccolo “A * p = new A();”) Minezawa, the claims and Niccolo are all directed to memory management. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to use pointers because sometimes “you're working at a raw level for the sake of extreme performance, and you may create some custom memory management system, or you're processing video data, audio data, image information...the point here is that by the time you're doing that stuff, you know you need pointers and don't need to even ask anymore.” Niccolo.

Minezawa teaches claims 3, 21 and 22. (Currently Amended) An apparatus comprising: a processor; and (Minezawa fig.
3b) receive a data stream containing quantization indices of neural network parameters; (Minezawa para 91 “Compressed data of the above-described network configuration information and quantization information encoded by the encoding unit 103 is outputted to the data processing device 200.” Minezawa para 93 “The decoding unit 201 decodes quantization information and network configuration information from the above-described compressed data encoded by the encoding unit 103 (step ST1 a).”)

decode the neural network parameters from the data stream, wherein the neural network parameters corresponding to different portions of the neural network have been quantized differently based on a predefined quantization scheme; (Minezawa para 93 “The decoding unit 201 decodes quantization information and network configuration information from the above-described compressed data encoded by the encoding unit 103 (step ST1 a).” Minezawa para 165 “controlling unit 102 defines layer_quant_step[i-2] as a quantization step which is independent on a per-layer basis, layer_quant_step[i-2] may be defined as a difference value to a quantization step for an immediately previous layer (ith layer).” The predefined scheme is a per-layer scheme.)

extract, from the data stream, for each of the neural network portions, a corresponding reconstruction rule for dequantizing neural network parameters; and (Minezawa para 93 “The quantization information and the network configuration information are outputted from the decoding unit 201 to the data processing unit 202.” Quantization information is the reconstruction rule. Network information is the network parameters.)

apply the extracted reconstruction rule to reconstruct the neural network parameters for each respective neural network portion; (Minezawa para 94 “the data processing unit 202 calculates edge weight information which is inversely quantized using the quantization information and network configuration information decoded from the compressed data by the decoding unit 201 (step ST2 a).”)

wherein the data stream is structured into one or more individually accessible portions, each portion representing a corresponding neural network layer of the neural network, wherein the apparatus is configured to decode from the data stream, for each of one or more predetermined individually accessible portions, (Minezawa para 91 “Compressed data of the above-described network configuration information and quantization information encoded by the encoding unit 103 is outputted to the data processing device 200.” The network config information is the portion of the stream that encodes a representation of the NN into a portion of the data stream.)

Minezawa doesn't teach how to use pointers. However, Niccolo teaches a pointer pointing to a beginning of the respective predetermined individually accessible portion. (Niccolo “A * p = new A();”) Minezawa, the claims and Niccolo are all directed to memory management. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to use pointers because sometimes “you're working at a raw level for the sake of extreme performance, and you may create some custom memory management system, or you're processing video data, audio data, image information...the point here is that by the time you're doing that stuff, you know you need pointers and don't need to even ask anymore.” Niccolo.

Minezawa teaches claim 4.
Apparatus of claim 3, wherein the neural network portions comprise neural network layers of the neural network and/or layer portions into which a predetermined neural network layer of the neural network is subdivided. (Minezawa fig. 6 below. Minezawa para 165 “controlling unit 102 defines layer_quant_step[i-2] as a quantization step which is independent on a per-layer basis, layer_quant_step[i-2] may be defined as a difference value to a quantization step for an immediately previous layer (ith layer).”)

[Image: Minezawa Fig. 6]

Minezawa teaches claim 5. Apparatus of claim 3, wherein the apparatus is configured to decode, from the data stream, a first reconstruction rule for dequantizing neural network parameters relating to a first neural network portion, in a manner delta-decoded relative to a second reconstruction rule for dequantizing neural network parameters relating to a second neural network portion. (Minezawa para 165 “controlling unit 102 defines layer_quant_step[i-2] as a quantization step which is independent on a per-layer basis, layer_quant_step[i-2] may be defined as a difference value to a quantization step for an immediately previous layer (ith layer).”)

Minezawa teaches claim 23. (New) The apparatus according to claim 2, configured to generate the data stream by encoding the representation of the neural network into the data stream using arithmetic encoding with newly initializing the arithmetic encoding at the beginning of each of the one or more predetermined individually accessible portions.
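The examiner reads claim 23's "newly initializing the arithmetic encoding at the beginning of each ... portion" as encoding each portion separately. A rough hypothetical analogy (using zlib rather than an arithmetic coder, purely for illustration; the payloads are invented) shows why restarting the coder per portion is what makes each portion individually decodable:

```python
import zlib

# Invented per-layer parameter payloads for illustration.
layer_params = [b"weights-of-layer-0" * 4, b"weights-of-layer-1" * 4]

# Coder state is reset ("newly initialized") for each portion, so every
# portion is a self-contained compressed unit...
portions = [zlib.compress(p) for p in layer_params]

# ...and any single portion can be decoded without the others.
assert zlib.decompress(portions[1]) == layer_params[1]

# By contrast, one coder run over the concatenated stream yields a single
# unit whose later content cannot be decoded independently of the earlier.
whole = zlib.compress(b"".join(layer_params))
assert zlib.decompress(whole) == b"".join(layer_params)
```

On this reading, the reinitialization recited in claim 23 is what gives each "individually accessible portion" its independence.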
(Minezawa para 91 “Compressed data of the above-described network configuration information and quantization information encoded by the encoding unit 103 is outputted to the data processing device 200.” Minezawa para 165 “controlling unit 102 defines layer_quant_step[i-2] as a quantization step which is independent on a per-layer basis, layer_quant_step[i-2] may be defined as a difference value to a quantization step for an immediately previous layer (ith layer).”)

Minezawa teaches claim 24. (New) The apparatus according to claim 2, wherein the apparatus is a picture and/or video processing device configured to derive the neural network for picture and/or video processing. (Minezawa para 114 “FIG. 8 is a diagram showing an example of a convolution process for two-dimensional data in the first embodiment, and shows a convolution process for two-dimensional data such as image data.”)

Minezawa teaches claim 25. (New) The apparatus according to claim 3, configured to decode the data stream using arithmetic decoding with newly initializing the arithmetic decoding at the beginning of each of the one or more predetermined individually accessible portions. (Minezawa para 93 “The decoding unit 201 decodes quantization information and network configuration information from the above-described compressed data encoded by the encoding unit 103 (step ST1 a).” Minezawa para 165 “controlling unit 102 defines layer_quant_step[i-2] as a quantization step which is independent on a per-layer basis, layer_quant_step[i-2] may be defined as a difference value to a quantization step for an immediately previous layer (ith layer).”)

Minezawa teaches claim 26. (New) The apparatus according to claim 3, wherein the apparatus is a picture and/or video processing device configured to use the neural network for picture and/or video processing. (Minezawa para 114 “FIG.
8 is a diagram showing an example of a convolution process for two-dimensional data in the first embodiment, and shows a convolution process for two-dimensional data such as image data.”)

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Austin Hicks whose telephone number is (571) 270-3377. The examiner can normally be reached Monday - Thursday 8-4 PST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mariela Reyes, can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AUSTIN HICKS/
Primary Examiner, Art Unit 2124

Footnotes:
[1] Minezawa para 91 “Compressed data of the above-described network configuration information and quantization information encoded by the encoding unit 103 is outputted to the data processing device 200.” Minezawa para 165 “controlling unit 102 defines layer_quant_step[i-2] as a quantization step which is independent on a per-layer basis, layer_quant_step[i-2] may be defined as a difference value to a quantization step for an immediately previous layer (ith layer).”
[2] Niccolo “A * p = new A();”
[3] Niccolo “A * p = new A();”
[4] US 20070294500 A1 Abs “Backfilling of appropriate pointer values into the data stream enables a respective user viewing the data stream to initiate navigation amongst the data stream and potentially view a live feed with little or no delay.”
[5] US 20020102026 A1 Abs “Received are a compressed data set, at least one pointer to a location in the compressed data stream whose decoded output comprises a location on a line of data, and decoding information for each received pointer that enables decoding from a point within the compressed data stream addressed by the pointer.”
[6] US 20190146801 A1 Abs “hash entries each comprising a hash value of an associated subset of following data items of an input data stream and a pointer to a memory location of the associated subset.”

Prosecution Timeline

Apr 01, 2022
Application Filed
Sep 18, 2024
Non-Final Rejection — §101, §103, §112
Mar 17, 2025
Response Filed
Apr 02, 2025
Final Rejection — §101, §103, §112
Jul 07, 2025
Request for Continued Examination
Jul 13, 2025
Response after Non-Final Action
Aug 06, 2025
Non-Final Rejection — §101, §103, §112
Nov 07, 2025
Response Filed
Dec 03, 2025
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591767 — NEURAL NETWORK ACCELERATION CIRCUIT AND METHOD (2y 5m to grant; granted Mar 31, 2026)
Patent 12554795 — REDUCING CLASS IMBALANCE IN MACHINE-LEARNING TRAINING DATASET (2y 5m to grant; granted Feb 17, 2026)
Patent 12530630 — Hierarchical Gradient Averaging For Enforcing Subject Level Privacy (2y 5m to grant; granted Jan 20, 2026)
Patent 12524694 — OPTIMIZING ROUTE MODIFICATION USING QUANTUM GENERATED ROUTE REPOSITORY (2y 5m to grant; granted Jan 13, 2026)
Patent 12524646 — VARIABLE CURVATURE BENDING ARC CONTROL METHOD FOR ROLL BENDING MACHINE (2y 5m to grant; granted Jan 13, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 76%
With Interview: 99% (+25.1%)
Median Time to Grant: 3y 4m
PTA Risk: High
Based on 403 resolved cases by this examiner. Grant probability derived from career allow rate.
