NON-FINAL REJECTION, FIRST DETAILED ACTION
Status of Prosecution
The present application, 17/744,062, was filed on or after March 16, 2013 and is being examined under the first inventor to file provisions of the AIA.
The application was filed in the Office on May 13, 2022 and claims foreign priority to French application FR2105613, filed on May 28, 2021.
Claims 1-20 are pending and all are rejected in this Office action.
Status of Claims
Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 7, 14 and 20 are rejected for indefiniteness under 35 U.S.C. § 112(b).
Claims 1, 6-8, 13-15 and 19-20 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by Baluja et al. ("Baluja"), United States Patent Application Publication 2021/0209475, published on July 8, 2021.
Claims 2-3, 9-10 and 16-17 are rejected under 35 U.S.C. § 103 as being unpatentable over Baluja in view of Choi et al. ("Choi"), United States Patent Application Publication 2018/0107925, published on Apr. 19, 2018.
Claims 4-5, 11-12 and 18 are rejected under 35 U.S.C. § 103 as being unpatentable over Baluja.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 7, 14 and 20 are rejected for indefiniteness under 35 U.S.C. § 112(b). Specifically, the claims recite, "wherein the simplified artificial neural network includes less trained weights than the trained artificial neural network." The use of "less" here is indefinite, as it is unclear whether it refers to the weight values or to the number of unique trained weights. The Specification does not readily appear to resolve this ambiguity.
Claims 14 and 20 are similarly rejected.
For purposes of prosecution, claim construction will proceed under the interpretation that there are fewer (and thus simplified) unique trained weights, which correspondingly means fewer distinct weighted edges in the neural network. Correction and clarification are required.
Claim Rejections – 35 USC § 101, Subject Matter Eligibility
35 U.S.C. § 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding representative claim 1, at step 1, the claim recites a method, a process, which is a statutory category of invention. See MPEP § 2106.03.
At step 2A, prong one, the claim is analyzed to determine whether it recites a judicial exception, such as an abstract idea. See MPEP § 2106.04.
The limitations:
forming clusters of trained weights of the weight applied connection for each input of each layer of the trained artificial neural network;
computing a representative weight for each formed cluster; and
replacing the trained weights of the weight applied connection for each cluster with the representative weight to form the simplified artificial neural network.
are each processes or steps that, under a broadest reasonable interpretation, recite the abstract idea of a mathematical calculation. See MPEP § 2106.04(a)(2)(I)(C).
Therefore, the claim recites at least an abstract idea per this part of the analysis.
At step 2A prong two, the claim language is analyzed to determine whether it recites additional elements that integrate the judicial exception into a practical application. See MPEP § 2106.04(d).
The claim element, "[a] method for generating a simplified artificial neural network from a trained artificial neural network comprising layers of neurons, each layer having at least one input, and each input coupled to at least one neuron of the layer by a weight applied connection," is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use, specifically neural networks. See MPEP §§ 2106.04(d), 2106.05(h).
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is therefore directed to an abstract idea.
Next, at step 2B of the analysis, the claim is analyzed to determine whether it recites additional elements that amount to significantly more than the judicial exception. See MPEP § 2106.05.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to nothing more than linking the use of the judicial exception to a particular technological environment or field of use, neural networks. See MPEP § 2106.05(h).
Therefore, claim 1 is ineligible.
As to dependent claims 2-4, the analysis of the parent claim is incorporated.
In the step 2A, prong one analysis, the additional limitations are, under a broadest reasonable interpretation, also directed to the abstract idea of a mathematical calculation. See MPEP § 2106.04(a)(2)(I)(C).
These claims are therefore also ineligible.
As to dependent claim 5, the analysis of the parent claim is incorporated.
In the step 2A, prong two analysis, the additional limitation of, "wherein the simplified artificial neural network is executed by an embedded system," is, under its broadest reasonable interpretation, an additional element that generally links the use of the judicial exception to a particular technological environment or field of use, neural network environments. See MPEP §§ 2106.04(d), 2106.05(h). Correspondingly, in the step 2B analysis, the limitation amounts to nothing more than linking the use of the judicial exception to a particular technological environment or field of use, specifically neural network environments. See MPEP § 2106.05(h).
The claim is therefore also ineligible.
As to dependent claims 6-7, the analysis of the parent claim is incorporated.
In the step 2A, prong one analysis, the additional limitations of, "wherein the trained weights are determined through a training phase of the trained artificial neural network" and "wherein the simplified artificial neural network includes less trained weights than the trained artificial neural network," merely further limit the abstract idea and, under a broadest reasonable interpretation, remain the abstract idea of a mathematical calculation. See MPEP § 2106.04(a)(2)(I)(C).
These claims are therefore also ineligible.
The analysis of claim 8 is substantively similar to the analysis above. The only difference, the step 1 analysis of the statutory class (an article of manufacture), is statutory, but the claim still fails the remainder of the analysis. This claim is therefore also ineligible.
Claims 9-14 recite limitations substantively similar to those of dependent claims 2-7, and the corresponding analysis is incorporated. These claims are therefore also ineligible.
The analysis of claim 15 is substantively similar to the analysis above. The only difference, the step 1 analysis of the statutory class (a machine), is statutory, but the claim still fails the remainder of the analysis. This claim is therefore also ineligible.
Claims 16-20 recite limitations substantively similar to those of dependent claims 2-7, and the corresponding analysis is incorporated. These claims are therefore also ineligible.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. § 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 6-8, 13-15 and 19-20 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by Baluja et al. ("Baluja"), United States Patent Application Publication 2021/0209475, published on July 8, 2021.
As to Claim 1, Baluja teaches: A method for generating a simplified artificial neural network from a trained artificial neural network comprising layers of neurons, each layer having at least one input, and each input coupled to at least one neuron of the layer by a weight applied connection (Baluja: par. 0003, a neural network can include a group of connected nodes arranged in one or more layers and can be connected with weights associated with each connection (an edge)), the method comprising:
forming clusters of trained weights of the weight applied connection for each input of each layer of the trained artificial neural network (Baluja: Fig. 3, [306] par. 0096, clusters are formed);
computing a representative weight for each formed cluster (Baluja: par. 0096, [308], a representative weight, such as the mean or median of the weights assigned to a cluster may be used as the representative weight); and
replacing the trained weights of the weight applied connection for each cluster with the representative weight to form the simplified artificial neural network (Baluja: par. 0097, at [310] the weights are replaced accordingly).
[media_image1.png: greyscale reproduction of Baluja, Fig. 3]
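For illustration only, the clustering, representative-weight computation, and replacement recited in claim 1 (and described in Baluja at pars. 0096-0097) can be sketched as follows. The function names are hypothetical, and the use of a simple 1-D k-means with evenly spaced initial centroids is an assumption, as neither the claim nor Baluja mandates a particular clustering algorithm:

```python
def kmeans_1d(weights, k, iters=10):
    """Cluster a list of trained weights into k groups (simple 1-D k-means)."""
    lo, hi = min(weights), max(weights)
    # evenly spaced initial centroids across the observed weight range
    centroids = [lo + (hi - lo) * i / max(k - 1, 1) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for w in weights:
            # assign each weight to its nearest centroid
            groups[min(range(k), key=lambda i: abs(w - centroids[i]))].append(w)
        # representative weight of each cluster: the mean of its members
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return centroids

def simplify_input_weights(weights, k):
    """Replace each trained weight with its cluster's representative weight."""
    reps = kmeans_1d(weights, k)
    return [min(reps, key=lambda c: abs(w - c)) for w in weights]
```

For example, `simplify_input_weights([0.1, 0.12, 0.88, 0.9], 2)` replaces the four trained weights with two representative weights (0.11 and 0.89), illustrating a simplified network that carries fewer unique trained weights.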
As to Claim 6, Baluja teaches the elements of claim 1.
Baluja further teaches: wherein the trained weights are determined through a training phase of the trained artificial neural network (Baluja: Fig. 3, [302], perform a number of training iterations on a neural network).
As to Claim 7, Baluja teaches the elements of claim 1.
Baluja further teaches: wherein the simplified artificial neural network includes less trained weights than the trained artificial neural network (Examiner asserts that the compressed network will have fewer unique trained weights, consistent with the claim construction set forth above).
As to Claim 8, it is rejected for similar reasons as claim 1. Baluja further teaches a computer readable medium with instructions (Baluja: par. 0067).
As to Claim 13, it is rejected for similar reasons as claim 6.
As to Claim 14, it is rejected for similar reasons as claim 7.
As to Claim 15, it is rejected for similar reasons as claims 1 and 8. Baluja further teaches a processor (Baluja: par. 0067).
As to Claim 19, it is rejected for similar reasons as claim 6.
As to Claim 20, it is rejected for similar reasons as claim 7.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
A.
Claims 2-3, 9-10 and 16-17 are rejected under 35 U.S.C. § 103 as being unpatentable over Baluja et al. ("Baluja"), United States Patent Application Publication 2021/0209475, published on July 8, 2021, in view of Choi et al. ("Choi"), United States Patent Application Publication 2018/0107925, published on Apr. 19, 2018.
As to Claim 2, Baluja teaches the limitations of claim 1.
Baluja may not explicitly teach: forming sets of clusters of trained weights of the weight applied connection for each input of each layer of the trained artificial neural network;
computing a representative weight for each formed cluster of each set of clusters of trained weights;
selecting a subset of the set of clusters of trained weights such that a minimum cost function is achieved by replacing the trained weights of the weight applied connection by the representative weight; and
replacing the trained weights of the weight applied connection for each cluster of the selected subset with the representative weight to form the simplified artificial neural network.
Baluja does, however, teach that the clustering may be repeated iteratively (Baluja: par. 0098). Choi teaches general concepts related to performing network parameter quantization in deep neural networks (Choi: Abstract). Specifically, Choi teaches that an objective function may be used to minimize the loss when selecting how weights are to be adjusted for simplifying the network (Choi: pars. 0054-56). Equations 5(a) and 5(b) present the resulting k-means clustering minimization of the quantization error (Choi: par. 0059).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to have modified the Baluja disclosures and teachings by performing the selection of the clustered weights with a minimized-cost approach as taught and suggested by Choi. Such a person would have been motivated to do so, with a reasonable expectation of success, to allow for quantization techniques that permit recovery of the original performance (Choi: par. 0043).
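For illustration only, the minimum-cost selection among candidate clusterings recited in claim 2 can be sketched as follows. Both function names are hypothetical, and the particular cost function (quantization error plus a cluster-count penalty `lam`) is an assumption adopted for the sketch, as the claim does not fix a specific cost:

```python
def kmeans_1d(weights, k, iters=10):
    # simple 1-D k-means with evenly spaced initial centroids (assumed)
    lo, hi = min(weights), max(weights)
    centroids = [lo + (hi - lo) * i / max(k - 1, 1) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for w in weights:
            groups[min(range(k), key=lambda i: abs(w - centroids[i]))].append(w)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return centroids

def select_clustering(weights, candidate_ks, lam=0.01):
    # form a set of candidate clusterings and keep the one minimizing
    # cost = quantization error + lam * cluster count (lam is assumed)
    best = None
    for k in candidate_ks:
        reps = kmeans_1d(weights, k)
        err = sum(min((w - c) ** 2 for c in reps) for w in weights)
        cost = err + lam * k
        if best is None or cost < best[0]:
            best = (cost, k, reps)
    return best[1], best[2]
```

Under this assumed cost, `select_clustering([0.1, 0.12, 0.88, 0.9], [1, 2, 3, 4])` selects the two-cluster solution, since adding further clusters reduces the quantization error less than the penalty it incurs.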
As to Claim 3, Baluja and Choi teach the limitations of claim 2.
Choi further teaches: wherein the subset of the clusters of trained weights is selected in accordance with the formula
[media_image2.png: the claimed selection formula]
, wherein Si are the sets of clusters, Si,k are the subsets of the set of clusters, K is the number of subsets Si,k of the sets of clusters Si, wi,j are the trained weights of the layer, the clusters of a set comprising L trained weights in total, w̄i,k is the representative weight for the trained weights of the subset of the set of clusters Si,k, and ∂C/∂wi,j for j ∈ {1, ..., L} are the partial gradients of the cost function with respect to the trained weights wi,j of the layer (Choi: Eq. 8(a), par. 0084, in essence the cost function, utilizing a Hessian-weighted k-means clustering minimization of the quantization loss).
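For illustration only, the gradient-weighted character of this selection criterion can be sketched as follows. This is an assumed analogue of Choi's Hessian-weighted k-means objective, scaling each weight's squared quantization error by its squared partial gradient; it is not a reproduction of the exact claimed formula, and the function name is hypothetical:

```python
def weighted_quant_cost(weights, grads, reps):
    # Each weight's squared error to its nearest representative weight is
    # scaled by the squared partial gradient dC/dw_ij, so that weights to
    # which the cost function is more sensitive are quantized more carefully.
    return sum((g ** 2) * min((w - c) ** 2 for c in reps)
               for w, g in zip(weights, grads))
```

A weight with a larger partial gradient thus contributes more to the cost, steering the selected clustering toward preserving the weights that matter most to the trained network's output.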
As to Claim 9, it is rejected for similar reasons as claim 2.
As to Claim 10, it is rejected for similar reasons as claim 3.
As to Claim 16, it is rejected for similar reasons as claim 2.
As to Claim 17, it is rejected for similar reasons as claim 3.
B.
Claims 4-5, 11-12 and 18 are rejected under 35 U.S.C. § 103 as being unpatentable over Baluja et al. ("Baluja"), United States Patent Application Publication 2021/0209475, published on July 8, 2021.
As to Claim 4, Baluja teaches the elements of claim 1.
Baluja further teaches: computing weighted inputs for each input of the layer (Baluja: Fig. 4, [404], par. 0104, using a table lookup, the result value for the weighted input is determined);
computing an accumulated value corresponding to the sum of the weighted inputs and bias connections of a neuron connected to the input (Baluja: Fig. 4, [408], par. 0105, the sum of the result values identified at [406]); and
computing an output value of the neuron by passing the accumulated value in an activation function of the neuron (Baluja: Fig. 4, [406], par. 0106).
[media_image4.png: greyscale reproduction of Baluja, Fig. 4]
Baluja may not explicitly teach: the computing comprising multiplying the input by representative weights corresponding to connections connected to the input.
Baluja however does disclose that the typical process is to multiply the input by the weight corresponding to the edge (Baluja: par. 0022).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to have modified Baluja by utilizing multiplication as taught by the prior art acknowledged in Baluja. Such a person would have been motivated to do so, with a reasonable expectation of success, as a matter of design choice or where a lookup table is not practicable.
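For illustration only, the weighted-input, accumulation, and activation steps of claim 4, using the claimed multiplication by representative weights, can be sketched as follows. The function name is hypothetical, and the sigmoid activation is an assumption, as the claim does not specify a particular activation function:

```python
import math

def neuron_output(inputs, rep_weights, bias):
    # weighted inputs: each input multiplied by the representative weight
    # on its connection (the claimed multiplication, in place of a lookup)
    weighted = [x * w for x, w in zip(inputs, rep_weights)]
    # accumulated value: sum of the weighted inputs plus the neuron's bias
    acc = sum(weighted) + bias
    # output value: accumulated value passed through the activation
    # function (a sigmoid is assumed here)
    return 1.0 / (1.0 + math.exp(-acc))
```

For example, `neuron_output([1.0, 2.0], [0.5, 0.25], -1.0)` accumulates 0.5 + 0.5 − 1.0 = 0.0 and returns sigmoid(0) = 0.5.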
As to Claim 5, Baluja teaches the elements of claim 4.
Baluja further teaches: wherein the simplified artificial neural network is executed by an embedded system (Baluja: par. 0066, computing device [102] may be an embedded device).
As to Claim 11, it is rejected for similar reasons as claim 4.
As to Claim 12, it is rejected for similar reasons as claim 5.
As to Claim 18, it is rejected for similar reasons as claim 4.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Aravamudan et al., US Patent Application Publication 2018/0082197 (Mar. 22, 2018) (describing knowledge bases identified by generating semantic associations).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES T TSAI whose telephone number is (571)270-3916. The examiner can normally be reached M-F 8-5 Eastern.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo can be reached on 571-270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES T TSAI/ Primary Examiner, Art Unit 2147