Application/Control Number: 18/327,865

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 6/1/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement. The claims contain subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention.
The claimed subject matter for which the specification is not enabling is "generating a lossless and sparse representation of the group of descriptors". The specification does not adequately explain how this is achieved. Going through the Wands factors set out in MPEP 2164:

(A) The breadth of the claims: The examiner finds the claims to be broadly directed to generating a lossless and sparse representation of a group of descriptors. The claims cover any neural network, any descriptors, any expansion factor (as long as N2 > N1), and any quantization scheme.

(B) The nature of the invention: The invention is directed to a method for passive readout in the field of machine learning algorithms. Specifically, what is claimed is a lossless, sparse encoding of arbitrary neural network descriptors.

(C) The state of the prior art: Prior art in neural networks and sparse coding does not generally achieve lossless binary quantization, especially at high expansion factors. Typical quantization is lossy.

(D) The level of one of ordinary skill: The level of skill in the art is typically high, but even skilled artisans would face difficulty replicating the claimed invention.

(E) The level of predictability in the art: Neural network quantization can be highly unpredictable.

(F) The amount of direction provided by the inventor: While the specification describes the general approach, no specific architectures, parameters, or detailed procedures are provided that would guarantee losslessness.

(G) The existence of working examples: The specification does not provide any working examples or empirical data demonstrating lossless reconstruction.

(H) The quantity of experimentation needed to make or use the invention based on the content of the disclosure: The examiner finds that a high quantity of experimentation, even for a skilled artisan, would be required.
After reviewing all the Wands factors, the examiner finds that the specification lacks enablement for “generating a lossless and sparse representation of the group of descriptors”. As such, claims 1-20 are rejected under 35 U.S.C. 112(a).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 15 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 15 recites “The non-transitory computer readable medium of claim 1”; however, claim 1 is a method claim, thus this term lacks antecedent basis. The examiner believes that claim 15 was intended to reference claim 11, but the reference was mistakenly written as claim 1.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7, 9, 11-17, and 19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

With regard to Claim 1:

Step 2A, Prong 1

This part of the eligibility analysis evaluates whether the claim recites a judicial exception.
As explained in MPEP 2106.04, subsection II, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim. Claim 1 recites:

A method for passive readout, the method comprises: obtaining a group of descriptors that were outputted by of one or more neural network layers; wherein descriptors of the group of descriptors comprise a first number (N1) of descriptor elements; and generating a lossless and sparse representation of the group of descriptors, wherein the generating comprises: applying a dimension expanding convolution operation on the group of descriptors to provide a group of expanded descriptors; wherein expanded descriptors of the group of expanded descriptors comprises a second number (N2) of expanded descriptor elements, wherein N2 exceeds N1; and quantizing the group of expanded descriptors to provide a group of binary descriptors that form a lossless and a sparse representation of the group of descriptors.

The bolded limitations above, under their broadest reasonable interpretation, are directed to mathematical concepts. Generating a representation of descriptors using an operation is a mathematical concept. Quantizing the group of expanded descriptors is also a mathematical concept. Step 2A, Prong 1 (Yes).

Step 2A, Prong 2

This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is “directed to” the judicial exception. This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d). The additional element is the obtaining step. The obtaining step is mere data gathering and is insignificant extra-solution activity.
See MPEP 2106.05(g). Even when viewed in combination, the additional element does not integrate the recited judicial exception into a practical application. Step 2A, Prong 2 (No).

Step 2B

This part of the eligibility analysis evaluates whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05. As discussed above, the obtaining step is mere data gathering and is insignificant extra-solution activity. See MPEP 2106.05(g). These elements amount to receiving or transmitting data over a network and are well-understood, routine, and conventional activity. Step 2B (No). Claim 1 is ineligible. Claim 11 is similar in scope and rejected likewise.

Dependent Claims:

Claims 8, 10, 18, and 20: These claims recite further abstract ideas (outputting, generating, decoding, adjusting) but also recite “repeating, for each training image of multiple training images.” With respect to Step 2A, Prong 2, this limitation meaningfully limits the claim because it applies the training process to specific visual data. Thus, these claims are eligible.

Claims 2-7, 9, 12-17, and 19: Each of these dependent claims merely elaborates on the specific mathematical concepts and does not provide any additional elements. Thus, these claims are ineligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 11, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (US 20210125070 A1) in view of Dumas (Autoencoder Based Image Compression: Can the learning be quantization independent?).

Regarding claim 1, Wang discloses “obtaining a group of descriptors that were outputted by of one or more neural network layers; wherein descriptors of the group of descriptors comprise a first number (N1) of descriptor elements;” (See [0005]; a weight tensor (group of descriptors within a layer) is received from a neural network layer) and “generating a lossless and sparse representation of the group of descriptors, wherein the generating comprises:” (See [0005], [0082]; a sparse representation of the group of descriptors is generated through compression, and a lossless representation is also generated through lossless compression).

Wang fails to explicitly disclose “applying a dimension expanding convolution operation on the group of descriptors to provide a group of expanded descriptors; wherein expanded descriptors of the group of expanded descriptors comprises a second number (N2) of expanded descriptor elements, wherein N2 exceeds N1;” and “quantizing the group of expanded descriptors to provide a group of binary descriptors that form a lossless and a sparse representation of the group of descriptors”.
Dumas teaches “applying a dimension expanding convolution operation on the group of descriptors to provide a group of expanded descriptors; wherein expanded descriptors of the group of expanded descriptors comprises a second number (N2) of expanded descriptor elements, wherein N2 exceeds N1;” (See [Pages 2-3, Section 3, Paragraph 1]; Dumas discloses applying a dimension expanding convolution operation by first building a convolutional autoencoder from a normal composition of convolutional layers and GDNs, and then reversing each component in the autoencoder by replacing each GDN with an inverse GDN and each convolutional layer with a transpose convolutional layer) and “quantizing the group of expanded descriptors to provide a group of binary descriptors that form a lossless and a sparse representation of the group of descriptors.” (See [Page 4, Section 4, Paragraphs 3 and 5]; Dumas discloses applying a quantization step to each feature map (group of expanded descriptors) of the system and also discloses that the quantization step implements a binarizer to provide a group of binary descriptors).

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having Wang and Dumas before them, to modify Wang to apply a dimension expanding operation on the group of descriptors and to quantize the group afterwards. One would be motivated to apply the dimension expanding operation in order to take compressed, low-resolution descriptors and upscale them for better detail, and one would be motivated to quantize the group afterwards to reduce the size of the upscaled group of descriptors for higher performance during runtime.
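For illustration only, a minimal sketch of the combination described in this rejection: a per-descriptor dimension-expanding operation (a 1x1 convolution applied independently to each descriptor reduces to an N2 x N1 linear map) followed by a one-bit quantizer. All names, sizes, and weights below are hypothetical and are not taken from the application, Wang, or Dumas.

```python
import random

# Hypothetical, illustrative parameters (not from the application or the cited art).
random.seed(0)

N1, N2 = 4, 16  # N2 exceeds N1, as the claim requires

def expand(descriptor, weights):
    # Apply the N2 x N1 expansion matrix to a single descriptor
    # (equivalent to a 1x1 convolution applied per descriptor).
    return [sum(w * x for w, x in zip(row, descriptor)) for row in weights]

def binarize(expanded):
    # Quantize each expanded element to a single bit.
    return [1 if v > 0 else 0 for v in expanded]

weights = [[random.gauss(0, 1) for _ in range(N1)] for _ in range(N2)]
descriptor = [random.gauss(0, 1) for _ in range(N1)]

binary = binarize(expand(descriptor, weights))
# binary is an N2-element {0, 1} code; nothing in this construction by
# itself guarantees the losslessness or sparsity recited in the claims.
```

As the enablement discussion above notes, such a construction yields an N2-bit code but does not by itself guarantee that the code is lossless or sparse.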
Regarding claim 2, Wang discloses “The applying… comprises independently applying a… process on each descriptor of the group of descriptors” (See [0058]; Wang discloses applying an operation to each pixel (descriptor) in a feature map (group)). Wang fails to explicitly disclose “…the dimension expanding convolution operation…”. Dumas teaches “the dimension expanding convolution operation” (See [Pages 2-3, Section 3, Paragraph 1]; Dumas discloses applying a dimension expanding convolution operation). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having Wang and Dumas before them, to modify Wang to use a dimension expanding convolution operation. One would be motivated to do so in order to upscale each descriptor so that each descriptor is in higher detail for better analysis.

Regarding claim 11, this claim is similar in scope to claim 1. Regarding claim 12, this claim is similar in scope to claim 2.

Claim Rejections - 35 USC § 103

Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (US 20210125070 A1) in view of Dumas (Autoencoder Based Image Compression: Can the learning be quantization independent?), and further in view of Zhao (US 20220391676 A1).

Regarding claim 3, Wang fails to explicitly disclose “quantizing is a top-K quantization”. Zhao teaches “quantizing is a top-K quantization” (See [0004]; Zhao discloses a neural network that was quantized with top-K quantization). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having Wang and Zhao before them, to modify Wang to use top-K quantization as one of its preferred quantization techniques.
One would be motivated to do so in order to preserve the K most significant descriptors with greater precision while quantizing the remaining descriptors with lower precision, for the sake of optimizing performance. Regarding claim 13, this claim is similar in scope to claim 3.

Claim Rejections - 35 USC § 103

Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (US 20210125070 A1) in view of Dumas (Autoencoder Based Image Compression: Can the learning be quantization independent?), and further in view of Kim (Distance-aware Quantization).

Regarding claim 4, Wang fails to explicitly disclose “quantizing is a argmax quantization”. Kim teaches “quantizing is a argmax quantization” (See [Page 3, Figure 2]; Kim discloses a quantization technique using a differentiable version of argmax). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having Wang and Kim before them, to modify Wang to use argmax quantization as one of its preferred quantization techniques. One would be motivated to do so in order to quantize, with greater precision, the highest-probability descriptors that are important to the model according to the argmax function, while quantizing the remaining, lower-probability descriptors more coarsely, as this helps determine which descriptors should receive more compression. Regarding claim 14, this claim is similar in scope to claim 4.

Claim Rejections - 35 USC § 103

Claims 5, 6, 15, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (US 20210125070 A1) in view of Dumas (Autoencoder Based Image Compression: Can the learning be quantization independent?), and further in view of Weisel (US 20220222317 A1).

Regarding claim 5, Wang fails to explicitly disclose “N2 exceeds N1 by at least a factor of 10”.
Weisel teaches “N2 exceeds N1 by at least a factor of 10” (See [0067]; Weisel discloses that a number of input channels (the second number, N2) exceeds a depth of the input data (the first number, N1) by at least a factor of 10). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having Wang and Weisel before them, to modify Wang to specify that N2 exceeds N1 by at least a factor of 10. One would be motivated to do so in order to upscale N1 by at least a factor of 10 to improve the resolution or quality of the input data.

Regarding claim 6, Wang fails to explicitly disclose “N2 exceeds N1 by at least a factor of 1000”. Weisel teaches “N2 exceeds N1 by at least a factor of 1000” (See [0067]; Weisel discloses that the number of input channels (N2) exceeds the depth of the input data (N1) by a factor of 3 or more, which encompasses a factor of 1000). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having Wang and Weisel before them, to modify Wang to specify that N2 exceeds N1 by at least a factor of 1000. One would be motivated to do so in order to upscale N1 by at least a factor of 1000 to improve the resolution or quality of the input data.

Regarding claim 15, this claim is similar in scope to claim 5. Regarding claim 16, this claim is similar in scope to claim 6.

Claim Rejections - 35 USC § 103

Claims 7 and 17 are rejected under 35 U.S.C.
103 as being unpatentable over Wang (US 20210125070 A1) in view of Dumas (Autoencoder Based Image Compression: Can the learning be quantization independent?), and further in view of Nyamwange (US 20230188542 A1) and Lipasti (US 20160098629 A1).

Regarding claim 7, Wang fails to explicitly disclose “generating is executed by a readout unit that is trained by a training process”. Nyamwange teaches “generating is executed by a readout unit” (See [0005]; a readout unit that generates is disclosed). Nyamwange fails to explicitly disclose “a readout unit that is trained by a training process”. Lipasti teaches “a readout unit that is trained by a training process” (See [0038]; Lipasti discloses a readout unit being trained). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having Wang, Nyamwange, and Lipasti before them, to modify Wang to use a readout unit that performs the generating and to train that readout unit. One would be motivated to do so in order to generate passive readout units for the model and use the provided training to generate according to the desired specifications. Regarding claim 17, this claim is similar in scope to claim 7.

Claim Rejections - 35 USC § 103

Claims 8, 9, 10, 18, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (US 20210125070 A1) in view of Dumas (Autoencoder Based Image Compression: Can the learning be quantization independent?), further in view of Nyamwange (US 20230188542 A1) and Lipasti (US 20160098629 A1), and further in view of Dirac (US 10970629 B1).
Regarding claim 8, Wang discloses “repeating, for each training image of multiple training images” (See [0081]; Wang discloses a training process that repeats for each element (training image) of a set), “outputting, by a neural network, a group of training descriptors related to the training image;” (See [0005]; Wang discloses a neural network outputting a weight tensor (group of training descriptors)), and “generating a passive readout unit output, in response to the group of training descriptors;” (See [0005], [0082]; Wang discloses a sparse and lossless output being generated after the weight tensor (group of training descriptors) was outputted. Passive readout is a lossless output). Wang fails to explicitly disclose “decoding the passive readout unit output by a process that reverses the generating of the passive readout unit output, to provide a decoded output;”. Dumas teaches “decoding the passive readout unit output by a process that reverses the generating of the passive readout unit output, to provide a decoded output;” (See [Pages 2-3, Section 3, Paragraph 1]; Dumas discloses decoding the output with a process that reverses the original composition to provide a decoded output). Dumas fails to explicitly disclose “adjusting the passive readout unit based on a difference between the group of training descriptors and the decoded output”. Dirac teaches “adjusting the passive readout unit based on a difference between the group of training descriptors and the decoded output” (See [Column 16, Lines 38-46] and [Column 3, Lines 18-20]; Dirac discloses adjusting the output based on a difference between encoded training data (group of training descriptors) and encoded reference data output, and Dirac also discloses that the encoded reference data output can be decoded).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having Wang, Dumas, and Dirac before them, to modify Wang to reverse the passive readout generation so as to decode the passive readout, and then to adjust the output of that reversal based on a difference between the group of training descriptors and the output. One would be motivated to do so in order to decode the output so as to read the results of the passive readout generation, and then to make adjustments to the output based on the difference between the group of training descriptors and the output, correcting any missing or flawed parts of the output.

Regarding claim 9, Wang fails to explicitly disclose “training the passive readout circuit by the training circuit”. Lipasti teaches “training the passive readout circuit by the training circuit” (See [0038]; Lipasti discloses training the readout units). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having Wang and Lipasti before them, to modify Wang to train the passive readout circuit using the training circuit. One would be motivated to do so in order to properly train the passive readout according to the intended training parameters.

Regarding claim 10, Wang discloses “repeating, for each training image of multiple training images” (See [0081]; Wang discloses a training process that repeats for each element (training image) of a set), “outputting, by a neural network, a group of training descriptors related to the training image;” (See [0005]; Wang discloses a neural network outputting a weight tensor (group of training descriptors)), and “generating a passive readout unit output, in response to the group of training descriptors;” (See [0005], [0082]; Wang discloses a sparse and lossless output being generated after the weight tensor (group of training descriptors) was outputted.
Passive readout is a lossless output). Wang fails to explicitly disclose “decoding the passive readout unit output by a process that reverses the generating of the passive readout unit output, to provide a decoded output;”. Dumas teaches “decoding the passive readout unit output by a process that reverses the generating of the passive readout unit output, to provide a decoded output;” (See [Pages 2-3, Section 3, Paragraph 1]; Dumas discloses decoding the output with a process that reverses the original composition to provide a decoded output). Dumas fails to explicitly disclose “adjusting the passive readout unit based on a difference between the group of training descriptors and the decoded output”. Dirac teaches “adjusting the passive readout unit based on a difference between the group of training descriptors and the decoded output” (See [Column 16, Lines 38-46] and [Column 3, Lines 18-20]; Dirac discloses adjusting the output based on a difference between encoded training data (group of training descriptors) and encoded reference data output, and Dirac also discloses that the encoded reference data output can be decoded). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having Wang, Dumas, and Dirac before them, to modify Wang to reverse the passive readout generation so as to decode the passive readout, and then to adjust the output of that reversal based on a difference between the group of training descriptors and the output. One would be motivated to do so in order to decode the output so as to read the results of the passive readout generation, and then to make adjustments to the output based on the difference between the group of training descriptors and the output, correcting any missing or flawed parts of the output.

Regarding claim 18, this claim is similar in scope to claim 8. Regarding claim 19, this claim is similar in scope to claim 9.
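For illustration only, a hypothetical sketch of the training loop recited in claims 8 and 10 as mapped above: for each training input, generate a readout output from a group of descriptors, decode it by a process that reverses the generating, and adjust the readout based on the difference between the training descriptors and the decoded output. The single "gain" parameter, the rounding quantizer, and the update rule below are illustrative assumptions, not the applicant's or the cited references' actual method.

```python
import random

random.seed(1)

gain = 1.0  # hypothetical single readout parameter (illustrative only)
lr = 5.0    # hypothetical adjustment rate

def generate(descriptors, g):
    # Readout: scale each descriptor element, then quantize by rounding.
    return [round(g * d) for d in descriptors]

def decode(output, g):
    # Reverse the generating step: undo the scaling.
    return [o / g for o in output]

# "repeating, for each training image of multiple training images"
for _ in range(200):
    descriptors = [random.gauss(0, 1) for _ in range(8)]
    decoded = decode(generate(descriptors, gain), gain)
    # Difference between the group of training descriptors and the decoded output.
    err = sum(abs(d - r) for d, r in zip(descriptors, decoded)) / len(descriptors)
    # Adjust the readout: a larger gain gives finer quantization, so the
    # reconstruction difference shrinks as training proceeds.
    gain += lr * err
```

In this sketch the per-element reconstruction error is bounded by 0.5/gain, so the adjustment step drives the difference toward zero; an exactly lossless readout would make the difference identically zero and the adjustment a no-op.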
Regarding claim 20, this claim is similar in scope to claim 10.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID KIM, whose telephone number is (571) 272-4331. The examiner can normally be reached 7:30 AM - 4:30 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Ell, can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/D.K./
Examiner, Art Unit 2141

/MATTHEW ELL/
Supervisory Patent Examiner, Art Unit 2141