Prosecution Insights
Last updated: April 19, 2026
Application No. 18/212,618

Methods And Apparatus For Managing Weight Data Accesses For Neural Network Processors

Non-Final OA: §101, §103, §112
Filed: Jun 21, 2023
Examiner: ZENG, WENWEI
Art Unit: 2146
Tech Center: 2100 — Computer Architecture & Software
Assignee: Expedera Inc.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases; 0 granted / 0 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; resolved cases with interview)
Typical Timeline: 3y 3m avg prosecution
Career History: 10 total applications across all art units; 10 currently pending

Statute-Specific Performance

§101: 33.3% (-6.7% vs TC avg)
§103: 42.4% (+2.4% vs TC avg)
§102: 3.0% (-37.0% vs TC avg)
§112: 18.2% (-21.8% vs TC avg)
Tech Center averages are estimates; based on career data from 0 resolved cases.

Office Action

§101 §103 §112
Detailed Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on June 26, 2024 and on November 18, 2024 were filed and considered by the examiner. The submission is in compliance with the provisions of 37 CFR 1.97.

Claim Objections

Claim 16 is objected to under 37 CFR 1.75 as being a substantial duplicate of claim 14. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

The term "lower" from “is performed with a lower priority” in claim 13 is a relative term which renders the claim indefinite. The term "lower" is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental process) without significantly more.

Claim 1: Regarding claim 1, in step 1 of the 101-analysis set forth in MPEP 2106, the claim recites “A method of processing a multilayer neural network with a neural network processor by tapering in weight matrix data from a memory coupled to said neural network processor, said method comprising the steps of: dividing said multilayer neural network into subsets of neural network layers wherein each subset will be processed as a group; each said subset of neural network layers referred to as a partition; dividing each neural network layer of each said partition into a set of work fragments, each work fragment comprising a subset of computations for said neural network layer; grouping set of said work fragments of each partition into work fragment subsets that can be processed simultaneously; loading into said neural network processor a first work fragment subset for a first partition from said memory; loading in a first subset of weight matrix data from said external memory for said first work fragment subset of said first partition into said neural network processor; commencing processing of said first work fragment subset when said first subset of weight matrix data is available; loading in a second subset of weight matrix data from said external memory, if not already loaded, for a second work fragment subset for said first partition into said neural network processor while processing said first work fragment subset for said first
partition; loading said second work fragment subset for said first partition into said neural network processor from said external memory; and processing said second work fragment subset for said first partition, when said second subset of weight matrix data for said second work fragment subset is available,” and a method is one of the four statutory categories of invention.

In step 2A prong 1 of the 101-analysis set forth in the MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process but for recitation of generic computer components: said method comprising the steps of: dividing said multilayer neural network into subsets of neural network layers wherein each subset will be processed as a group; each said subset of neural network layers referred to as a partition; (This is considered a mental process; a person can mentally evaluate and divide a multilayer neural network into subsets, see MPEP 2106.04(a)(2)(III)), dividing each neural network layer of each said partition into a set of work fragments, each work fragment comprising a subset of computations for said neural network layer; (This is considered a mental process; a person can mentally evaluate and divide each neural network layer into work fragments, see MPEP 2106.04(a)(2)(III)), grouping set of said work fragments of each partition into work fragment subsets that can be processed simultaneously; (This is considered a mental process; a person can mentally evaluate and group work fragments into subsets, see MPEP 2106.04(a)(2)(III)). If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
In step 2A prong 2 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application: A method of processing a multilayer neural network with a neural network processor by tapering in weight matrix data from a memory coupled to said neural network processor, (This is considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)), loading into said neural network processor a first work fragment subset for a first partition from said memory; (This is considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)), loading in a first subset of weight matrix data from said external memory for said first work fragment subset of said first partition into said neural network processor; (In step 2A, prong 2, loading recites mere data inputting and receiving data, which is considered insignificant extra-solution activity – see MPEP 2106.05(g)), commencing processing of said first work fragment subset when said first subset of weight matrix data is available; (This is considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)), loading in a second subset of weight matrix data from said external memory, if not already loaded, for a second work fragment subset for said first partition into said neural network processor while processing said first work fragment subset for said first partition; (In step 2A, prong 2, loading recites mere data inputting and receiving data, which is considered insignificant extra-solution activity – see MPEP 2106.05(g)), loading said second work fragment subset for said first partition into said neural network processor from said external memory; (In step 2A, prong 2, loading recites mere data inputting and receiving data, which is considered insignificant extra-solution activity – see MPEP 2106.05(g)), and processing said
second work fragment subset for said first partition, when said second subset of weight matrix data for said second work fragment subset is available, (This is considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)). Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea.

In step 2B of the 101-analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, additional elements iv, v, vii, and x recite mere instructions to apply the judicial exception using generic computer components, which are not indicative of significantly more. The additional elements vi, viii, and ix recite mere data gathering or outputting, and are considered insignificant extra-solution activities. In step 2B, these insignificant extra-solution activities are well-understood, routine, and conventional activity, which includes receiving or transmitting data over a network, see Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) – see MPEP 2106.05(d)(II)(i); as well as Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering, see MPEP 2106.05(g)(3)). Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
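Editorial note: operationally, the “tapering in” scheme recited in claim 1 is a double-buffering pipeline that overlaps weight loads with compute. A minimal sketch under that reading, with every name a hypothetical stand-in rather than anything from the specification:

```python
from concurrent.futures import ThreadPoolExecutor

def load_weights(subset_id):
    # Hypothetical stand-in for a DMA transfer from external memory.
    return f"weights[{subset_id}]"

def process(fragments, weights):
    # Hypothetical stand-in for running one work-fragment subset on the NPU.
    return f"processed {fragments} with {weights}"

def run_partition(fragment_subsets):
    """Process each work-fragment subset while a background loader fetches
    the next subset's weights, per the overlap recited in claim 1."""
    results = []
    with ThreadPoolExecutor(max_workers=1) as loader:
        pending = loader.submit(load_weights, 0)
        for i, frags in enumerate(fragment_subsets):
            weights = pending.result()           # block until weights available
            if i + 1 < len(fragment_subsets):    # start next load before compute
                pending = loader.submit(load_weights, i + 1)
            results.append(process(frags, weights))
    return results
```

Compute stalls only when a load has not yet completed, which is the property the claim's "commencing processing ... when said first subset of weight matrix data is available" language points at.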
Claim 2: Regarding claim 2, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to claim 1. Further, claim 2 also recites an additional element: The method of processing a multilayer neural network with a neural network processor and managing access to a memory as set forth in claim 1 wherein said work fragment subsets contain work fragments from different neural network layers in said first partition, (In step 2A, prong 2, this is considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)). Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Claim 3: Regarding claim 3, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to claim 1. Further, claim 3 recites the following additional element: The method of processing a multilayer neural network with a neural network processor and managing access to a memory as set forth in claim 1 wherein work fragments may be processed out of order such that a later neural network layer may be processed before an earlier neural network layer, (In step 2A, prong 2, performing out-of-order processing of a multilayer neural network is considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)). Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 4: Regarding claim 4, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to claim 1. Further, claim 4 recites the following additional element: The method of processing a multilayer neural network with a neural network processor and managing access to a memory as set forth in claim 1 further comprising: decompressing said first subset of weight matrix data loaded from said external memory, (In step 2A, prong 2, this is considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)). Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Claim 5: Regarding claim 5, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to claim 1. Further, claim 5 recites the following additional element: The method of processing a multilayer neural network with a neural network processor and managing access to a memory as set forth in claim 1 wherein said first subset of weight matrix data may comprise one of several different data precisions, (In step 2A, prong 2, this is considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)). Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 6: Regarding claim 6, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to claim 1. Further, claim 6 recites the following additional elements: The method of processing a multilayer neural network with a neural network processor and managing access to a memory as set forth in claim 1, (In step 2A, prong 2, this is considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)), further comprising: reloading in said first subset of weight matrix data from said external memory after a context switch of said neural network processor, (In step 2A, prong 2, this recites mere data gathering, which is considered insignificant extra-solution activity – see MPEP 2106.05(g)). In step 2B, this insignificant extra-solution activity is well-understood, routine, and conventional activity, which includes receiving or transmitting data over a network, see Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) – see MPEP 2106.05(d)(II)(i). Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 7: Regarding claim 7, in step 1 of the 101-analysis set forth in MPEP 2106, the claim recites “A method of processing a multilayer neural network with a neural network processor and tapering out weight matrix from said neural network processor, said method comprising the steps of: dividing said multilayer neural network into subsets of neural network layers wherein each subset will be processed as a group; said subsets of neural network layers referred to as partition; dividing each network layer in each partition into a set of work fragments; grouping set of said work fragments of each cut into work fragment subsets that can be processed simultaneously; loading into said neural network processor a first work fragment subset for a first partition from said external memory; loading in a first weight matrix from said external memory for said set of work fragments layers of said first partition; commencing processing of said work fragments for said neural network layers of said first partition; discarding said first weight matrix for a first neural network fragment of said first partition after processing a final work fragment for said first network layer to free memory resources; and loading a second weight matrix for a neural network layer in a subsequent partition into said neural network processor while completing processing of said work fragments of said first partition,” and a method is one of the four statutory categories of invention.
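Editorial note: before the eligibility analysis resumes, it may help to see what claim 7's “tapering out” describes operationally: a layer's weight matrix is discarded as soon as that layer's final work fragment completes, freeing on-chip memory. A hedged Python sketch of that discipline (all structures are illustrative stand-ins, not the claimed apparatus):

```python
def process_partition(layers):
    """Process the layers of one partition, discarding each layer's weight
    matrix after its final work fragment (the 'tapering out' of claim 7).

    layers: list of (layer_name, fragments) pairs; names are hypothetical."""
    resident = set()   # layers whose weights are currently held on-chip
    order = []
    for layer, fragments in layers:
        resident.add(layer)                  # weights loaded for this layer
        for frag in fragments:
            order.append((layer, frag))      # compute one work fragment
        resident.discard(layer)              # final fragment done: free weights
    assert not resident                      # all weight buffers freed
    return order
```

The invariant checked at the end, that no weights remain resident once the partition finishes, is what frees room to load the next partition's weight matrix while this one completes.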
In step 2A prong 1 of the 101-analysis set forth in the MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process but for recitation of generic computer components: said method comprising the steps of: dividing said multilayer neural network into subsets of neural network layers wherein each subset will be processed as a group; said subsets of neural network layers referred to as partition; (This is considered a mental process; a person can mentally evaluate and divide a multilayer neural network into subsets of neural network layers, see MPEP 2106.04(a)(2)(III)), dividing each network layer in each partition into a set of work fragments; (This is considered a mental process; a person can mentally evaluate and divide each network layer into a set of work fragments, see MPEP 2106.04(a)(2)(III)), grouping set of said work fragments of each cut into work fragment subsets that can be processed simultaneously; (This is considered a mental process; a person can mentally evaluate and group set of work fragments into subsets, see MPEP 2106.04(a)(2)(III)). If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
In step 2A prong 2 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application: A method of processing a multilayer neural network with a neural network processor and tapering out weight matrix from said neural network processor, (This is considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)), loading into said neural network processor a first work fragment subset for a first partition from said external memory; loading in a first weight matrix from said external memory for said set of work fragments layers of said first partition; (In step 2A, prong 2, this recites mere data inputting and receiving data, which is considered insignificant extra-solution activity – see MPEP 2106.05(g)), commencing processing of said work fragments for said neural network layers of said first partition; (This is considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)), discarding said first weight matrix for a first neural network fragment of said first partition after processing a final work fragment for said first network layer to free memory resources; (This is considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)), and loading a second weight matrix for a neural network layer in a subsequent partition into said neural network processor while completing processing of said work fragments of said first partition, (In step 2A, prong 2, this recites mere data inputting and receiving data, which is considered insignificant extra-solution activity – see MPEP 2106.05(g)). Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea.
In step 2B of the 101-analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, additional elements iv, vi, and vii recite mere instructions to apply the judicial exception using generic computer components, which are not indicative of significantly more. The additional elements v and viii recite mere data gathering or outputting, and are considered insignificant extra-solution activities. In step 2B, these insignificant extra-solution activities are well-understood, routine, and conventional activities, which include receiving or transmitting data over a network, see Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) – see MPEP 2106.05(d)(II)(i); as well as Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering, see MPEP 2106.05(g)(3)). Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.

Claim 8: Regarding claim 8, it is dependent upon claim 7, and thereby incorporates the limitations of, and corresponding analysis applied to claim 7.
Further, claim 8 recites the following additional element: The method of processing a multilayer neural network with a neural network processor and tapering weight matrix data from said neural network processor as set forth in claim 7, said method further comprising: decompressing said first weight matrix loaded from said external memory, (In step 2A, prong 2, this is considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)). Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Claim 9: Regarding claim 9, it is dependent upon claim 7, and thereby incorporates the limitations of, and corresponding analysis applied to claim 7. Further, claim 9 recites the following additional element: The method of processing a multilayer neural network with a neural network processor and managing access to a memory as set forth in claim 7 wherein said first weight matrix may comprise one of several different data precisions, (In step 2A, prong 2, this is considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)). Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Claim 10: Regarding claim 10, it is dependent upon claim 7, and thereby incorporates the limitations of, and corresponding analysis applied to claim 7.
Further, claim 10 recites the following additional element: The method of processing a multilayer neural network with a neural network processor and managing access to a memory as set forth in claim 7 further comprising: reloading in said first weight matrix from said external memory after a context switch of said neural network processor, (In step 2A, prong 2, this recites mere data gathering, which is considered insignificant extra-solution activity – see MPEP 2106.05(g)). In step 2B, this insignificant extra-solution activity is well-understood, routine, and conventional activity, which includes receiving or transmitting data over a network, see Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) – see MPEP 2106.05(d)(II)(i). Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Claim 11: Regarding claim 11, it is dependent upon claim 7, and thereby incorporates the limitations of, and corresponding analysis applied to claim 7.
Further, claim 11 recites the following additional element: The method of processing a multilayer neural network with a neural network processor and managing access to a memory as set forth in claim 7 wherein said partitions can belong to different neural networks, (In step 2A, prong 2, this is considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)). Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Claim 12: Regarding claim 12, in step 1 of the 101-analysis set forth in MPEP 2106, the claim recites “A method of processing a multilayer neural network with a neural network processor and prefetching weight matrix data from an external memory, said method comprising the steps of: dividing said multilayer neural network into subsets of neural network layers wherein each subset will be processed as a group; said subsets of neural network layers referred to as partition; dividing each network layer in each partition into a set of work fragments; grouping set of said work fragments of each partition into work fragment subsets that can be processed simultaneously; loading into said neural network processor a first work fragment subset for a first partition from said external memory; loading in a first weight matrix from said external memory for said set of work fragments layers of said first partition; commencing processing of said work fragments for said neural network layers of said first partition; prefetching a second weight matrix for a neural network layer in a subsequent partition from said external memory into said neural network processor while processing of said work fragments of said first partition when memory bandwidth is
available to said external memory; and storing said second weight matrix in said neural network processor until said subsequent partition is triggered and said second weight matrix is needed for processing,” and a method is one of the four statutory categories of invention.

In step 2A prong 1 of the 101-analysis set forth in the MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process but for recitation of generic computer components: said method comprising the steps of: dividing said multilayer neural network into subsets of neural network layers wherein each subset will be processed as a group; said subsets of neural network layers referred to as partition; (This is a mental process; a person can mentally evaluate and divide a multilayer neural network into subsets, see MPEP 2106.04(a)(2)(III)), dividing each network layer in each partition into a set of work fragments; (This is a mental process; a person can mentally evaluate and divide each network layer, see MPEP 2106.04(a)(2)(III)), grouping set of said work fragments of each partition into work fragment subsets that can be processed simultaneously; (This is a mental process; a person can mentally evaluate and group work fragments, see MPEP 2106.04(a)(2)(III)). If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
In step 2A prong 2 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application: A method of processing a multilayer neural network with a neural network processor and prefetching weight matrix data from an external memory, (This is considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)), loading into said neural network processor a first work fragment subset for a first partition from said external memory; (In step 2A, prong 2, loading recites mere data gathering, which is considered insignificant extra-solution activity – see MPEP 2106.05(g)), loading in a first weight matrix from said external memory for said set of work fragments layers of said first partition; (In step 2A, prong 2, loading recites mere data gathering, which is considered insignificant extra-solution activity – see MPEP 2106.05(g)), commencing processing of said work fragments for said neural network layers of said first partition; (This is considered mere instructions to apply an exception using generic computer – see MPEP 2106.05(f)), prefetching a second weight matrix for a neural network layer in a subsequent partition from said external memory into said neural network processor while processing of said work fragments of said first partition when memory bandwidth is available to said external memory; (In step 2A, prong 2, prefetching, which is similar to loading, recites mere data gathering, which is considered insignificant extra-solution activity – see MPEP 2106.05(g)), and storing said second weight matrix in said neural network processor until said subsequent partition is triggered and said second weight matrix is needed for processing, (In step 2A, prong 2, this recites mere data gathering, which is considered insignificant extra-solution activity – see MPEP 2106.05(g)). Since the claim as a whole, looking at the
additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea.

In step 2B of the 101-analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, additional elements iv and vii recite mere instructions to apply the judicial exception using generic computer components, which are not indicative of significantly more. The additional elements v, vi, viii, and ix recite mere data gathering, and are considered insignificant extra-solution activities. In step 2B, these insignificant extra-solution activities are well-understood, routine, and conventional activities, which include receiving or transmitting data over a network, see Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) – see MPEP 2106.05(d)(II)(i). Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.

Claim 13: Regarding claim 13, it is dependent upon claim 12, and thereby incorporates the limitations of, and corresponding analysis applied to claim 12.
Further, claim 13 recites the following additional element: The method of processing a multilayer neural network with a neural network processor and prefetching weight matrix data from an external memory as set forth in claim 12 wherein said prefetching is performed with a lower priority than other accesses to said external memory (in Step 2A, Prong Two, this is considered mere instructions to apply an exception using a generic computer; see MPEP 2106.05(f)) (in Step 2B, this is also considered mere instructions to apply an exception using a generic computer; see MPEP 2106.05(f)). Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Claim 14: Regarding claim 14, it is dependent upon claim 12, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 12. Further, claim 14 recites the following additional element: The method of processing a multilayer neural network with a neural network processor and prefetching weight matrix data from an external memory as set forth in claim 12 wherein said partitions can belong to different neural networks (in Step 2A, Prong Two, this is considered mere instructions to apply an exception using a generic computer; see MPEP 2106.05(f)) (in Step 2B, this is also considered mere instructions to apply an exception using a generic computer; see MPEP 2106.05(f)). Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Claim 15: Regarding claim 15, it is dependent upon claim 12, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 12.
Further, claim 15 recites the following additional element: The method of processing a multilayer neural network with a neural network processor and prefetching weight matrix data from an external memory as set forth in claim 12 wherein said work fragments can be executed out of order (in Step 2A, Prong Two, this is considered mere instructions to apply an exception using a generic computer; see MPEP 2106.05(f)) (in Step 2B, this is also considered mere instructions to apply an exception using a generic computer; see MPEP 2106.05(f)). Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Claim 16: Regarding claim 16, it is dependent upon claim 12, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 12. Further, claim 16 recites the following additional element: The method of processing a multilayer neural network with a neural network processor and prefetching weight matrix data from an external memory as set forth in claim 12 wherein said partitions can belong to different neural networks (in Step 2A, Prong Two, this is considered mere instructions to apply an exception using a generic computer; see MPEP 2106.05(f)) (in Step 2B, this is also considered mere instructions to apply an exception using a generic computer; see MPEP 2106.05(f)). Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Claim 17: Regarding claim 17, it is dependent upon claim 12, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 12.
Further, claim 17 recites the following additional element: The method of processing a multilayer neural network with a neural network processor and tapering weight matrix data out from said neural network processor as set forth in claim 12, said method further comprising: decompressing said second weight matrix prefetched from said external memory (in Step 2A, Prong Two, this is considered mere instructions to apply an exception using a generic computer; see MPEP 2106.05(f)) (in Step 2B, this is also considered mere instructions to apply an exception using a generic computer; see MPEP 2106.05(f)). Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
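As technical context for the claimed method both rejections address, the prefetch-overlap flow recited in claim 12 (load the current partition's weights, process its work fragments, and meanwhile prefetch the next partition's weights when external-memory bandwidth is free) can be sketched in a few lines. Everything here, from the function name to the partition ids, is an illustrative assumption, not code from the application or the cited art:

```python
def process_network(partitions, external_memory):
    """Illustrative sketch of a prefetch-overlap pipeline.

    `partitions` is an ordered list of partition ids; `external_memory`
    maps a partition id to its weight matrix. While partition i is
    processing, the weights for partition i+1 are prefetched so the
    next partition never stalls on a demand load."""
    trace = []
    prefetched = {}  # weights held on-processor until their partition triggers
    for i, part in enumerate(partitions):
        # Use the prefetched copy if present; otherwise fall back to a
        # demand load from external memory (the slow path).
        weights = prefetched.pop(part, None)
        if weights is None:
            trace.append(f"demand-load weights for {part}")
            weights = external_memory[part]
        # While this partition's work fragments run, bandwidth is assumed
        # available, so the next partition's weights are prefetched.
        if i + 1 < len(partitions):
            nxt = partitions[i + 1]
            trace.append(f"prefetch weights for {nxt} during {part}")
            prefetched[nxt] = external_memory[nxt]
        trace.append(f"process {part} with weights {weights}")
    return trace
```

In this sketch only the first partition ever pays the demand-load cost; every later partition finds its weights already resident, which is the overlap the claim language describes.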
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over TIMOFEJEVS, A. et al. (Pub. No. WO 2021259482 A1), published on December 30, 2021 (hereafter, TIMOFEJEVS), in view of Le Grand, S. et al. (Pub. No. WO 2017117186 A1), published on July 6, 2017 (hereafter, LEGRAND), and further in view of Fang, W. et al. (Pub. No. CN 112667528 A), published on April 16, 2021 (hereafter, FANG).

Claim 1: Regarding claim 1, TIMOFEJEVS teaches “A method of processing a multilayer neural network with a neural network processor by tapering in weight matrix data from a memory coupled to said neural network processor, said method comprising the steps of: dividing said multilayer neural network into subsets of neural network layers wherein each subset will be processed as a group; each said subset of neural network layers referred to as a partition;” See TIMOFEJEVS, paragraph [0016]: "In some implementations, the neural network topology includes one or more layers of neurons, each layer of neurons computing respective outputs based on a respective mathematical function, and transforming the neural network topology to the equivalent analog network of analog components includes: (i) decomposing a first layer of the neural network topology to a plurality of sub-layers, including decomposing a mathematical function corresponding to the first layer to obtain one or more intermediate mathematical functions.
Each sub-layer implements an intermediate mathematical function; and (ii) for each sub-layer of the first layer of the neural network topology: (a) selecting one or more sub-function blocks, based on a respective intermediate mathematical function, for the respective sub-layer; and (b) generating a respective multilayer analog sub-network of analog neurons based on arranging the one or more sub-function blocks. Each analog neuron implements a respective function of the one or more sub-function blocks, and each analog neuron of a first layer of the multilayer analog sub-network is connected to one or more analog neurons of a second layer of the multilayer analog sub-network."

Note: the examiner construes the word "partition" to mean any subdivision of a neural network model. Here, TIMOFEJEVS describes an implementation that subdivides a neural network first from layers into sub-layers, then from sub-layers into sub-function blocks. TIMOFEJEVS's statement in paragraph [0016] that “each sub-layer implements an intermediate mathematical function” relates to the limitation that each subset will be processed as a group. Regarding the limitation that each subset of neural network layers is referred to as a partition, the instant application's specification states in paragraph [0085] that “each partition of neural network layers may be processed together as a group.”

Further, TIMOFEJEVS in paragraph [0037] mentions: “in some implementations, the neural network topology includes K inputs, a single layer perceptron with L calculation neurons, and a weight matrix V that includes a row of weights for each calculation neuron of the L calculation neurons.” Note: tapering, according to specification paragraphs [0092]-[0093], is interpreted to mean the same as loading or inputting data. Here, in [0037], TIMOFEJEVS mentions that the neural network includes inputs, which shows loading in data that includes weight matrix values. Further, see TIMOFEJEVS in paragraph [0145]: “2. If K>N then: a.
Divide K input neurons into m1 = ⌈K/N⌉ groups such that every group consists of no more than N inputs. b. Construct the first hidden layer LTHi of the T-NN from m1 neurons, each neuron performing an identity activation function. c. Connect input neurons from every group to the corresponding neuron from the next layer.” Here, TIMOFEJEVS describes that a part of the neural network handling data is treated as a group or subdivision (i.e., each said subset of neural network layers referred to as a partition). See TIMOFEJEVS, paragraphs [0142]-[0143] and [00282], for more information.

Further, TIMOFEJEVS teaches “dividing each neural network layer of each said partition into a set of work fragments, each work fragment comprising a subset of computations for said neural network layer;” See TIMOFEJEVS, paragraph [0016]: "In some implementations, the neural network topology includes one or more layers of neurons, each layer of neurons computing respective outputs based on a respective mathematical function, and transforming the neural network topology to the equivalent analog network of analog components includes: (i) decomposing a first layer of the neural network topology to a plurality of sub-layers, including decomposing a mathematical function corresponding to the first layer to obtain one or more intermediate mathematical functions. Each sub-layer implements an intermediate mathematical function; and (ii) for each sub-layer of the first layer of the neural network topology: (a) selecting one or more sub-function blocks, based on a respective intermediate mathematical function, for the respective sub-layer; and (b) generating a respective multilayer analog sub-network of analog neurons based on arranging the one or more sub-function blocks.
Each analog neuron implements a respective function of the one or more sub-function blocks, and each analog neuron of a first layer of the multilayer analog sub-network is connected to one or more analog neurons of a second layer of the multilayer analog sub-network." Here, TIMOFEJEVS shows the sub-function blocks as the work fragments, from “selecting one or more sub-function blocks, based on a respective intermediate mathematical function, for the respective sub-layer” (i.e., dividing each neural network layer of each said partition into a set of work fragments). This shows that each sub-function block has its own mathematical function and calculations associated with its respective task, and further indicates the segmented nature of a neural network.

However, TIMOFEJEVS does not teach “grouping set of said work fragments of each partition into work fragment subsets that can be processed simultaneously;” “loading into said neural network processor a first work fragment subset for a first partition from said memory;” “loading in a first subset of weight matrix data from said external memory for said first work fragment subset of said first partition into said neural network processor;” “commencing processing of said first work fragment subset when said first subset of weight matrix data is available,” “loading in a second subset of weight matrix data from said external memory, if not already loaded, for a second work fragment subset for said first partition into said neural network processor while processing said first work fragment subset for said first partition;” “loading said second work fragment subset for said first partition into said neural network processor from said external memory;” and “processing said second work fragment subset for said first partition, when said second subset of weight matrix data for said second work fragment subset is available.”

In an analogous art, LEGRAND teaches “grouping set of said work fragments of each partition into
work fragment subsets that can be processed simultaneously;” See LEGRAND, paragraph [0020]: “as shown in FIG. 1, however, the weight matrix 120 may be split among a plurality of different computer processors, and the processors may generate different portions of the matrix 130 in parallel. For example, the weight matrix 120 may be striped row-wise (separated into subsets of rows), and each processor may be provided with a different subset of the rows. The input matrix 110 may be striped column-wise (separated into subsets of columns), and each processor may be provided with a different subset of the columns. … In some embodiments, the matrix 130 may be generated by performing a series of "reduction" operations or some equivalent operation in which multiple sets of numbers — the intermediate matrices in this example — are reduced into a single set of numbers — the subset of columns of matrix 130 to be stored on an individual processor. A reduction operation can be performed to aggregate, from the intermediate matrices, each separate subset of columns to be stored on each individual processor. In some cases, the reduction operations may be performed substantially in parallel or otherwise at least partially overlapping in time.”

Here, LEGRAND shows that each processor holds a different portion of the output matrix 130. LEGRAND then describes grouping different portions of the matrix data (relating to partitions), with the reduction operations serving as work fragments that may be performed substantially in parallel or otherwise at least partially overlapping in time (i.e., work fragments of each partition processed simultaneously). Note: the examiner construes "work fragment" to mean any task, job, or part of data processing that is performed on data. Further, see LEGRAND in paragraph [0004] for more details: “FIG.
1 is a diagram of an illustrative artificial neural network with multiple layers, indicating how the layers are to be distributed among multiple computer processors for parallel processing.” LEGRAND further elaborates that the process described in [0020] is part of neural networks with multiple layers, and each layer needs a number of processors to process data in parallel or simultaneously.

Further, LEGRAND teaches “loading into said neural network processor a first work fragment subset for a first partition from said memory;” See LEGRAND, paragraph [0028]: “… the individual computer processors multiply their own subsets of columns of the current matrix by their own subsets of rows of the current weight matrix. In some embodiments, the subsets of rows of the current weight matrix 120 have already been stored on, or are otherwise accessible by, the corresponding individual computer processors. For example, when the NN 100 is loaded on the computing system 500, when the process 200 is initiated, or at some other time, the subsets of rows and/or columns of the various weight matrices of the NN 100 may be stored on or otherwise made accessible to the corresponding computer processors.” See LEGRAND, paragraph [0023], for more details on loading from memory.

Further, LEGRAND describes in [0002]: “Sets of individual input vectors ("mini-batches") may be processed at the same time by using an input matrix instead of a single input vector. The NN can repeatedly process the input data, and the parameters (e.g., the weight matrices) of the NN can be modified in what amounts to a trial-and-error process until the model produces (or "converges" on) the correct or preferred output.” LEGRAND describes that this can be done repeatedly for input data of the neural network, … and includes loading data for any work fragment for any layer or partition of the neural network. Later, in paragraph [0054], Clause 1,
LEGRAND describes “a system comprising a plurality of processors, the system programmed by executable instructions to at least: obtain data defining an artificial neural network, the artificial neural network comprising a first layer of nodes, a second layer of nodes, and a third layer of nodes, wherein the first layer comprises more nodes than the second layer, and wherein the third layer comprises more nodes than the second layer; provide to a first processor of the plurality of processors: a first column of input data from a first data matrix, the first data matrix comprising input data for the artificial neural network; a first row of weights from a first weight matrix, the first weight matrix comprising weights for connections between nodes of the first layer and nodes of the second layer; and a first column of weights from a second weight matrix, the second weight matrix comprising weights for connections between nodes of the second layer and nodes of the third layer
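The striping-and-reduction scheme described in LEGRAND's paragraph [0020] — input matrix striped column-wise along the shared dimension, weight matrix striped row-wise, each processor forming a full-size "intermediate matrix," and a reduction combining them — can be sketched in miniature. The function name, the modulo assignment of stripes to processors, and the element-wise sum standing in for the reduction are all illustrative assumptions, not details from the reference:

```python
def striped_matmul(inputs, weights, num_procs):
    """Illustrative sketch of striping plus reduction.

    `inputs` is an n x k matrix (lists of lists) and `weights` is k x m.
    Processor p owns the input columns / weight rows whose index along
    the shared k dimension satisfies index % num_procs == p. Each
    processor computes a full-size partial product (the "intermediate
    matrix"); an element-wise sum reduces them into the output."""
    rows = len(inputs)        # n
    inner = len(weights)      # k, the shared dimension that is striped
    cols = len(weights[0])    # m
    intermediates = []
    for p in range(num_procs):
        partial = [[0.0] * cols for _ in range(rows)]
        for j in range(p, inner, num_procs):  # the stripe owned by p
            for r in range(rows):
                for c in range(cols):
                    partial[r][c] += inputs[r][j] * weights[j][c]
        intermediates.append(partial)
    # Reduction step: aggregate the intermediate matrices element-wise.
    return [[sum(m[r][c] for m in intermediates) for c in range(cols)]
            for r in range(rows)]
```

Because the stripes partition the shared dimension, the per-processor loops are independent and could run in parallel, with only the final reduction requiring communication — the property the examiner maps to processing work fragment subsets simultaneously.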

Prosecution Timeline

Jun 21, 2023
Application Filed
Mar 25, 2026
Non-Final Rejection — §101, §103, §112 (current)


Prosecution Projections

1-2
Expected OA Rounds
Favorable
Grant Probability
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
