Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because it is directed to data per se, which is not a statutory category.
Claim 22 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim recites the mathematical concept of encoding/decoding data related to a neural network using several different mathematical formulas or relationships. This judicial exception is not integrated into a practical application because the hardware components are generic computer parts. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because there are no additional elements beyond the generic computer parts.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
Claims 2-19 are rejected under 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, because the claims purport to invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, but fail to recite a combination of elements as required by that statutory provision and thus cannot rely on the specification to provide the structure, material, or acts to support the claimed function. As such, the claims recite a function that has no limits and cover every conceivable means for achieving the stated function, while the specification discloses at most only those means known to the inventor. Accordingly, the disclosure is not commensurate with the scope of the claims.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 2-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The preamble is directed to an apparatus, and in some dependent claims the apparatus is “configured to…” perform method steps. This makes it unclear whether applicant is claiming a method or any apparatus that is capable of performing the method. The scope of “any apparatus that is capable of performing the method” is undefined.
The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
Claim 4 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Claim 4 depends on itself. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-4, 7, 9, 11, 13-19 and 22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US20200184318A1 to Minezawa et al.
Minezawa teaches claims 1, 2, 3 and 22. (Original) Apparatus for decoding a representation of a neural network from a data stream, wherein the data stream is structured into one or more individually accessible portions, each portion representing a corresponding neural network layer of the neural network, (Claim 2 claims encoding. Encoding and decoding are taught by Minezawa abs “An encoding unit (103) encodes network configuration information including parameter data…” Minezawa para 15 “using quantization information and network configuration information which are decoded from the compressed data…”) wherein the apparatus is configured to decode from the data stream, for a predetermined neural network layer, a neural network layer type parameter indicating a neural network layer type of the predetermined neural network layer of the neural network. (Minezawa para 40 “The network configuration information is information indicating a configuration of the neural network, and includes, for example, the number of network layers, the number of nodes for each of the layers, edges that link nodes, weight information assigned to each of the edges, activation functions representing outputs from the nodes, and type information for each of the layers (e.g., a convolutional layer, a pooling layer, or a filly-connected [sic] layer).” Encoding/decoding are taught by Minezawa abs and para 15.)
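For illustration only, the claimed arrangement mapped above can be sketched in C++ as follows. All structures and names here are hypothetical and invented for this example; they are not Minezawa's format or Applicant's claimed implementation.

// Hypothetical sketch (all names invented): a data stream structured into
// individually accessible portions, one per neural network layer, each
// carrying a layer type parameter for the corresponding layer.
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

enum class LayerType : uint8_t { FullyConnected, Convolutional, Pooling };

struct LayerPortion {
    LayerType type;              // neural network layer type parameter
    std::vector<float> weights;  // per-layer parameter data
};

struct NetworkStream {
    std::vector<LayerPortion> portions;  // one portion per neural network layer
};

// Decoding the layer type parameter for a predetermined layer:
LayerType decodeLayerType(const NetworkStream& ds, std::size_t layerIndex) {
    return ds.portions.at(layerIndex).type;
}

int main() {
    NetworkStream ds;
    ds.portions.push_back({LayerType::Convolutional, {0.1f, 0.2f}});
    ds.portions.push_back({LayerType::FullyConnected, {0.3f}});
    std::printf("layer 0 is convolutional: %d\n",
                decodeLayerType(ds, 0) == LayerType::Convolutional);
    return 0;
}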
Minezawa teaches claim 4. (Original) Apparatus of claim 4, wherein the neural network layer type parameter discriminates, at least, between a fully-connected and a convolutional layer type. (Minezawa para 40 “The network configuration information is information indicating a configuration of the neural network, and includes, for example, the number of network layers, the number of nodes for each of the layers, edges that link nodes, weight information assigned to each of the edges, activation functions representing outputs from the nodes, and type information for each of the layers (e.g., a convolutional layer, a pooling layer, or a filly-connected [sic] layer).” Encoding/decoding are taught by Minezawa abs and para 15.)
[Image: Minezawa fig. 7]
Minezawa teaches claim 7. (Original) Apparatus of claim 3, wherein the apparatus is configured to decode a representation of a neural network from the data stream, wherein the data stream is structured into one or more individually accessible portions, each individually accessible portion representing a corresponding neural network layer of the neural network, (Minezawa fig. 7 shows that the steps comprise a layer of the neural network, see above.) and wherein the data stream is, within a predetermined portion, further structured into individually accessible sub-portions, each sub-portion representing a corresponding neural network portion of the respective neural network layer of the neural network, (Each node is encoded in Minezawa. The node and the kernel shown in fig. 7 jointly and separately teach Applicant’s claimed sub-portion.) wherein the apparatus is configured to decode from the data stream, for each of one or more predetermined individually accessible sub-portions a start code at which the respective predetermined individually accessible sub-portion begins, (Minezawa para 94 “the data processing unit 202 calculates edge weight information which is inversely quantized using the quantization information and network configuration information decoded from the compressed data by the decoding unit 201 (step ST2 a).”) and/or a pointer pointing to a beginning of the respective predetermined individually accessible sub-portion, and/or a data stream length parameter indicating a data stream length of the respective predetermined individually accessible sub-portion for skipping the respective predetermined individually accessible sub-portion in parsing the data stream. (These preceding claim elements are claimed in the alternative and therefore do not need to be taught by the prior art.)
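As to the data stream length parameter alternative recited above, the following hypothetical C++ sketch shows how a parser could skip a sub-portion using a length prefix. The 4-byte little-endian length format and all names are invented for illustration; they are not taken from Minezawa or from the claims.

// Hypothetical sketch: each sub-portion is prefixed by a 4-byte
// little-endian length so a parser can skip it without decoding it.
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

// Returns the offset of the next sub-portion, skipping the one at 'pos'.
std::size_t skipSubPortion(const std::vector<uint8_t>& stream, std::size_t pos) {
    uint32_t len = 0;
    std::memcpy(&len, stream.data() + pos, sizeof(len));  // little-endian assumed
    return pos + sizeof(len) + len;  // beginning of the next sub-portion
}

int main() {
    // Two sub-portions: 3 payload bytes, then 2 payload bytes.
    std::vector<uint8_t> stream = {3,0,0,0, 0xAA,0xBB,0xCC, 2,0,0,0, 0xDD,0xEE};
    std::printf("second sub-portion starts at byte %zu\n", skipSubPortion(stream, 0));
    return 0;
}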
Minezawa teaches claim 9. (Original) Apparatus of claim 3, wherein the apparatus is configured to decode a representation of a neural network from a data stream, wherein the data stream is structured into individually accessible portions, each portion representing a corresponding neural network portion of the neural network, wherein the apparatus is configured to decode from the data stream, for each of one or more predetermined individually accessible portions, an identification parameter for identifying the respective predetermined individually accessible portion. (Minezawa para 40 “The network configuration information is information indicating a configuration of the neural network, and includes, for example, the number of network layers, the number of nodes for each of the layers, edges that link nodes, weight information assigned to each of the edges, activation functions representing outputs from the nodes, and type information for each of the layers…” The type information for each of the layers must identify the layers somehow; otherwise the type information would be meaningless because it would not indicate which layer is of which type.)
Minezawa teaches claim 11. (Original) Apparatus of claim 9, wherein the apparatus is configured to decode, from the data stream, a higher-level identification parameter for identifying a collection of more than one predetermined individually accessible portion. (Minezawa para 40 “The network configuration information is information indicating a configuration of the neural network, and includes, for example, the number of network layers…” The higher level ID is the configuration of the neural network and/or the number of layers.)
Minezawa teaches claim 13. (Original) Apparatus of claim 3, wherein the apparatus is configured to decode a representation of a neural network from a data stream, wherein the data stream is structured into individually accessible portions, each portion representing a corresponding neural network portion of the neural network, wherein the apparatus is configured to decode from the data stream, for each of one or more predetermined individually accessible portions a supplemental data for supplementing the representation of the neural network. (Minezawa para 40 “The network configuration information is information indicating a configuration of the neural network, and includes, for example, the number of network layers, the number of nodes for each of the layers, edges that link nodes, weight information assigned to each of the edges, activation functions representing outputs from the nodes, and type information for each of the layers…” The number of nodes for each layer is supplemental information.)
Minezawa teaches claim 14. (Original) Apparatus of claim 13, wherein the data stream indicates the supplemental data as being dispensable for inference based on the neural network. (Examiner interprets dispensable as usable for inferences. Minezawa para 40 “The network configuration information is information indicating a configuration of the neural network, and includes, for example, the number of network layers, the number of nodes for each of the layers, edges that link nodes, weight information assigned to each of the edges, activation functions representing outputs from the nodes, and type information for each of the layers…” The number of nodes for each layer is supplemental information that is usable/dispensable for inferences.)
Minezawa teaches claim 15. (Original) Apparatus of claim 13, wherein the apparatus is configured to decode the supplemental data for supplementing the representation of the neural network for the one or more predetermined individually accessible portions from further individually accessible portions, wherein the data stream comprises for each of the one or more predetermined individually accessible portions a corresponding further predetermined individually accessible portion relating to the neural network portion to which the respective predetermined individually accessible portion corresponds. (Minezawa para 40 “The network configuration information is information indicating a configuration of the neural network, and includes, for example, the number of network layers, the number of nodes for each of the layers, edges that link nodes, weight information assigned to each of the edges, activation functions representing outputs from the nodes, and type information for each of the layers…” Minezawa fig. 4 shows the configuration information being decoded in step ST1a.)
Minezawa teaches claim 16. (Original) Apparatus of claim 13, wherein the supplemental data relates to relevance scores of neural network parameters, and/or perturbation robustness of neural network parameters. (Minezawa para 108 “The kernel size K is five, and the kernel is defined by a combination of these weights.” The kernel size relates to perturbation robustness because bigger kernels average out minor perturbations better.)
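For background on the averaging rationale stated above, the following C++ demo (signal and kernel invented for illustration, not taken from Minezawa) shows a length-5 box kernel reducing the spread of a small alternating perturbation.

// Illustrative demo: a larger kernel averages out minor perturbations.
#include <cstddef>
#include <cstdio>
#include <vector>

// Population variance of a sample.
double variance(const std::vector<double>& v) {
    double mean = 0.0, var = 0.0;
    for (double x : v) mean += x;
    mean /= v.size();
    for (double x : v) var += (x - mean) * (x - mean);
    return var / v.size();
}

int main() {
    // A constant signal perturbed by small alternating noise.
    std::vector<double> noisy;
    for (int i = 0; i < 100; ++i) noisy.push_back(1.0 + ((i % 2) ? 0.1 : -0.1));

    // Apply a box kernel of size K = 5 over the valid region.
    const int K = 5;
    std::vector<double> smoothed;
    for (std::size_t i = 0; i + K <= noisy.size(); ++i) {
        double s = 0.0;
        for (int k = 0; k < K; ++k) s += noisy[i + k];
        smoothed.push_back(s / K);
    }
    std::printf("variance before: %.4f, after K=5 kernel: %.4f\n",
                variance(noisy), variance(smoothed));
    return 0;
}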
Minezawa teaches claim 17. (Original) Apparatus of claim 3, for decoding a representation of a neural network from a data stream, wherein the apparatus is configured to decode from the data stream hierarchical control data structured into a sequence of control data portions, wherein the control data portions provide information on the neural network at increasing details along the sequence of control data portions. (Minezawa para 40 “The network configuration information is information indicating a configuration of the neural network, and includes, for example, the number of network layers, the number of nodes for each of the layers, edges that link nodes, weight information assigned to each of the edges, activation functions representing outputs from the nodes, and type information for each of the layers (e.g., a convolutional layer, a pooling layer, or a filly-connected [sic] layer).” The number of layers and the number of nodes per layer provide increasing detail along the sequence.)
Minezawa teaches claim 18. (Original) Apparatus of claim 17, wherein at least some of the control data portions provide information on the neural network which is partially redundant. (Minezawa para 40 “The network configuration information is information indicating a configuration of the neural network, and includes, for example, the number of network layers, the number of nodes for each of the layers, edges that link nodes, weight information assigned to each of the edges, activation functions representing outputs from the nodes, and type information for each of the layers (e.g., a convolutional layer, a pooling layer, or a filly-connected [sic] layer).” Activation functions make the number of nodes partially redundant because each node has only one activation function, so the number of activation functions equals the number of nodes.)
Minezawa teaches claim 19. (Original) Apparatus of claim 17, wherein a first control data portion provides the information on the neural network by way of indicating a default neural network type implying default settings and a second control data portion comprises a parameter to indicate each of the default settings. (Minezawa para 40 “The network configuration information is information indicating a configuration of the neural network, and includes, for example, the number of network layers, the number of nodes for each of the layers, edges that link nodes, weight information assigned to each of the edges, activation functions representing outputs from the nodes, and type information for each of the layers (e.g., a convolutional layer, a pooling layer, or a filly-connected [sic] layer).” The edges that link nodes imply the type of the layers, and the later explicit setting of each layer’s type is the parameter to indicate each of the default settings.)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over US20200184318A1 to Minezawa et al and https://cplusplus.com/forum/beginner/261042/, comment by Niccolo (Niccolo).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over US20200184318A1 to Minezawa et al and Context-Based Adaptive Binary Arithmetic Coding in the H.264/AVC Video Compression Standard by Marpe et al.
Claims 10 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over US20200184318A1 to Minezawa et al and US 20030220996 A1 to Yang.
Minezawa teaches claim 5. (Original) Apparatus of claim 3, wherein the data stream is structured into individually accessible portions, each individually accessible portion representing a corresponding neural network portion of the neural network, and wherein the apparatus is configured to decode, from the data stream, for each of one or more predetermined individually accessible portions, a (Minezawa teaches a step in fig. 7, and when decoding, Minezawa goes through the steps (portions) to decode the quantized/encoded neural networks; see Minezawa para 89 and fig. 5: “the data processing unit 101 quantizes the above-described edge weight information of the neural network using the quantization step in the quantization information (step ST2). The data processing unit 101 generates network configuration information including the quantized edge weight information, and outputs the network configuration information to the encoding unit 103.” Each step has a beginning.)
Minezawa doesn’t teach a pointer.
However, Niccolo teaches a pointer pointing to a beginning of each individually accessible portion. (Niccolo “A * p = new A();”)
Minezawa, the claims and Niccolo are all directed to memory management. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to use pointers because sometimes “you're working at a raw level for the sake of extreme performance, and you may create some custom memory management system, or you're processing video data, audio data, image information...the point here is that by the time you're doing that stuff, you know you need pointers and don't need to even ask anymore.” Niccolo.
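To illustrate Niccolo's pointer usage in the context of the claim, the following is a minimal hypothetical C++ sketch (buffer contents and names invented) of a raw pointer marking the beginning of an individually accessible portion within a contiguous stream buffer.

// Hypothetical sketch: a pointer to the beginning of a portion in a buffer.
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    std::vector<uint8_t> stream = {0x01, 0x02, 0x03, 0x04, 0x05};
    std::size_t portionOffset = 2;                       // where a portion begins
    const uint8_t* p = stream.data() + portionOffset;    // pointer to its beginning
    std::printf("first byte of portion: 0x%02X\n", *p);  // prints 0x03
    return 0;
}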
Minezawa teaches claim 6. (Original) Apparatus of claim 5, wherein each individually accessible portion represents a corresponding neural network layer of the neural network or a neural network portion of a neural network layer of the neural network. (Minezawa fig. 7 shows that the steps comprise a layer of the neural network, see below.)
[Image: Minezawa fig. 7]
Minezawa teaches claim 8. (Original) Apparatus of claim 7, wherein the apparatus is configured to decode, from the data stream, the representation of the neural network using (Minezawa para 94 “the data processing unit 202 calculates edge weight information which is inversely quantized using the quantization information and network configuration information decoded from the compressed data by the decoding unit 201 (step ST2 a).” Each node is encoded in Minezawa. The node and the kernel shown in fig. 7 jointly and separately teach Applicant’s claimed sub-portion.)
Minezawa doesn’t teach context-adaptive arithmetic decoding with context initialization.
However, Marpe teaches context-adaptive arithmetic decoding and context initialization at a start of each individually accessible portion. (Marpe CABAC encoder in fig. 1, see below.)
[Image: Marpe fig. 1, CABAC encoder]
Minezawa, Marpe and the claims are all directed to compressing data. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to use CABAC in Minezawa because Marpe’s CABAC is a “low-complexity method for binary arithmetic coding and probability estimation that is well suited for efficient hardware and software implementations.” Marpe abs.
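For background, the following toy C++ sketch illustrates context initialization at the start of each individually accessible portion. The adaptive probability model is invented for illustration and is a simplified stand-in, not Marpe’s actual H.264/AVC CABAC engine.

// Toy sketch: context models reset at the start of each portion, then
// adapted as bins are decoded (simplified stand-in for CABAC).
#include <cstdint>
#include <cstdio>

struct Context {
    uint8_t p1 = 128;  // probability of a 1-bin, in 1/256ths
    void update(int bin) {
        // Exponential adaptation toward the observed bin value.
        p1 = static_cast<uint8_t>(p1 + ((bin ? 255 : 0) - p1) / 16);
    }
};

// Context initialization: all models reset to a defined initial state.
void initContexts(Context* ctx, int n) {
    for (int i = 0; i < n; ++i) ctx[i] = Context{};
}

int main() {
    Context ctx[4];
    initContexts(ctx, 4);  // start of portion 1
    ctx[0].update(1);      // bins observed while decoding portion 1
    ctx[0].update(1);
    std::printf("adapted p1 = %u\n", ctx[0].p1);
    initContexts(ctx, 4);  // start of portion 2: contexts reinitialized
    std::printf("reset p1 = %u\n", ctx[0].p1);  // back to 128
    return 0;
}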
Minezawa teaches claim 10. (Original) Apparatus of claim 9, wherein the identification parameter is related to the respective predetermined individually accessible portion (Minezawa para 40 “The network configuration information is information indicating a configuration of the neural network, and includes, for example, the number of network layers, the number of nodes for each of the layers, edges that link nodes, weight information assigned to each of the edges, activation functions representing outputs from the nodes, and type information for each of the layers…” The type information for each of the layers must identify the layers somehow; otherwise the type information would be meaningless because it would not indicate which layer is of which type.)
Minezawa doesn’t teach a hash to relate data.
However, Yang teaches that the identification parameter is related to the respective predetermined individually accessible portion via a hash function or error detection code or error correction code. (Yang para 21 “The information out of the third layer and the fragment identification recorded in said step (62) are stored with a hash tree data structure.”)
Yang, the claims and Minezawa are all directed to breaking up data and storing it. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to have a hash to correlate fragments with their IDs “to increase the network security.” Yang para 4.
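To illustrate relating an identification parameter to a portion via a hash function, a minimal C++ sketch follows; FNV-1a is chosen here only as an example hash, and Yang’s hash tree structure is not reproduced.

// Hypothetical sketch: an ID parameter derived from a portion via FNV-1a.
#include <cstdint>
#include <cstdio>
#include <vector>

uint32_t fnv1a(const std::vector<uint8_t>& data) {
    uint32_t h = 2166136261u;          // FNV offset basis
    for (uint8_t b : data) {
        h ^= b;
        h *= 16777619u;                // FNV prime
    }
    return h;
}

int main() {
    std::vector<uint8_t> portion = {0x10, 0x20, 0x30};
    uint32_t id = fnv1a(portion);      // identification parameter via hash
    // A decoder can verify a received portion against its ID parameter:
    std::printf("id = 0x%08X, match = %d\n", id, fnv1a(portion) == id);
    return 0;
}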
Minezawa teaches claim 12. (Original) Apparatus of claim 11, wherein the higher-level identification parameter is related to the identification parameters of the more than one predetermined individually accessible portion (Minezawa para 40 “The network configuration information is information indicating a configuration of the neural network, and includes, for example, the number of network layers…” The higher level ID is the configuration of the neural network and/or the number of layers.)
Minezawa doesn’t teach a hash to relate data.
However, Yang teaches relating the higher-level identification parameter to the identification parameters of the more than one predetermined individually accessible portion via a hash function or error detection code or error correction code. (Yang para 21 “The information out of the third layer and the fragment identification recorded in said step (62) are stored with a hash tree data structure.”)
Yang, the claims and Minezawa are all directed to breaking up data and storing it. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to have a hash to correlate fragments with their IDs “to increase the network security.” Yang para 4.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Austin Hicks whose telephone number is (571)270-3377. The examiner can normally be reached Monday - Thursday 8-4 PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mariela Reyes, can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AUSTIN HICKS/ Primary Examiner, Art Unit 2142