Prosecution Insights
Last updated: April 19, 2026
Application No. 17/856,494

COMPLEMENTARY SPARSITY IN PROCESSING TENSORS

Non-Final OA: §101, §103, Double Patenting
Filed: Jul 01, 2022
Examiner: GUDAS, JAKOB OSCAR
Art Unit: 2151
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Numenta, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 44% (Moderate)
OA Rounds: 1-2
To Grant: 4y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 44% of resolved cases (4 granted / 9 resolved; -10.6% vs TC avg)
Interview Lift: strong, +71.1% among resolved cases with interview
Typical Timeline: 4y 2m avg prosecution; 28 applications currently pending
Career History: 37 total applications across all art units

Statute-Specific Performance

§101: 33.2% (-6.8% vs TC avg)
§103: 37.0% (-3.0% vs TC avg)
§102: 8.0% (-32.0% vs TC avg)
§112: 19.9% (-20.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 9 resolved cases.

Office Action

§101, §103, Double Patenting
Detailed Action The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . This Office Action is Non-Final and is in response to claims filed on 07/01/2022. Claims 1-20 are pending for examination. Information Disclosure Statement The Information Disclosure Statements (IDS) submitted on 04/24/2023, 01/22/2024, 06/01/2024, 10/18/2024, 11/27/2024, 04/12/2025, 04/23/2025, 07/14/2025, 09/10/2025, 10/25/2025, 12/09/2025, and 02/27/2026 are in compliance with the provisions of 37 CFR 1.97, 1.98, and MPEP § 609. They have been placed in the application file, and the information referred to therein has been considered as to the merits. Claim Objections Claims 7 and 17 are objected to because of the following informalities: Claim 7 recites “setting remaining of the plurality of accumulated values as zero”. This should be changed to something of the effect of “setting the remaining values of the plurality of accumulated values [[as]] to zero”. Claim 17 recites “and set remaining of the plurality of accumulated values as zero”. This should be changed to something of the effect of “and set the remaining values of the plurality of accumulated values [[as]] to zero”. Appropriate correction is required. Double Patenting The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969). A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA . A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). 
A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA /25, or PTO/AIA /26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer. Claims 1-6 and 11-16 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2, 5-7, 11-12, and 15-17 of U.S. Patent No. 17/856,480, (hereinafter “application ‘480”). Although the claims at issue are not identical, they are not patentably distinct from each other. Instant Application 17/856,480 Claim 1 Claim 11 A computer-implemented method for operating on tensors, the computer-implemented method comprising: combining a plurality of sparse process tensors to a complementary dense process tensor, A method comprising: generating a complementary dense process tensor, generating the complementary dense process tensor comprising combining a plurality of sparse process tensors the plurality of sparse process tensors having non-overlapping locations of active values; generating the complementary dense process tensor comprising combining a plurality of sparse process tensors that have non-overlapping locations of active values; performing computations between the complementary dense process tensor and an activation tensor to generate a plurality of products; performing computations between two or more tensors to generate a product tensor, and separating the plurality of products into groups, each group corresponding to one of the sparse process tensors. re-arranging values in one of the two or more tensors or in the product tensor to group the values corresponding to one of the sparse process tensors together. Claim 2 Claim 17 The computer-implemented method of claim 1, wherein a distribution of the active values in at least one of the sparse process tensors are partitioned. wherein the active values in the plurality of sparse process tensors are partitioned Claim 3 Claim 12 The computer-implemented method of claim 1, wherein performing the computations between the complementary dense process tensor and the activation tensor comprises: performing elementwise multiplications between values in the complementary dense process tensor and values in the activation tensor. wherein performing the computations between two or more tensors comprises: performing multiplications between two or more tensors; Claim 4 Claim 15 The computer-implemented method of claim 3, wherein separating the plurality of products into groups comprises a pre-multiplication re-arrangement of the activation tensor. wherein re-arranging the values comprises re-arranging the values in an activation tensor, Claim 5 Claim 16 The computer-implemented method of claim 3, wherein separating the plurality of products into groups comprises a post-multiplication re-arrangement of the plurality of products. wherein re-arranging the values comprises re-arranging the values in the product tensor. 
Claim 6 Claim 12 The computer-implemented method of claim 1, further comprising: accumulating the groups of products to generate a plurality of accumulated values, and accumulating the values corresponding to the one of the sparse process tensors. each accumulated value corresponding to one of the sparse process tensors. and accumulating the values corresponding to the one of the sparse process tensors. Claim 11 Claim 1 A computing device, comprising: memory confirmed to store a model; a memory configured to store a complementary dense process tensor, and a processor coupled to the memory, a computation core coupled to the memory the processor configured to: combine a plurality of sparse process tensors of the model to a complementary dense process tensor, the complementary dense process tensor generated from combining a plurality of sparse process tensors the plurality of sparse process tensors having non- overlapping locations of active values; the complementary dense process tensor generated from combining a plurality of sparse process tensors that have non-overlapping locations of active values; perform computations between the complementary dense process tensor and an activation tensor to generate a plurality of products; the computation core configured to perform computations between two or more tensors to generate a product tensor, and separate the plurality of products into groups, each group corresponding to one of the sparse process tensors. re-arrange values in one of the two or more tensors or in the product tensor to group the values corresponding to one of the sparse process tensors together. Claim 12 Claim 7 The computing device of claim 11, wherein a distribution of the active values in at least one of the sparse process tensors are partitioned. wherein the active values in the plurality of sparse process tensors are partitioned, Claim 13 Claim 2 The computing device of claim 11, wherein perform the computations between the complementary dense process tensor and the activation tensor comprises: perform elementwise multiplications between values in the complementary dense process tensor and values in the activation tensor. wherein the computation core comprises: a multiply circuit configured to perform multiplications between two or more tensors; Claim 14 Claim 5 The computing device of claim 13, wherein separate the plurality of products into groups comprises a pre-multiplication re-arrangement of the activation tensor. wherein the permutation circuit is configured to re-arrange the values in an activation tensor Claim 15 Claim 6 The computing device of claim 13, wherein separate the plurality of products into groups comprises a post-multiplication re-arrangement of the plurality of products. wherein the permutation circuit is configured to re-arrange the values in the product tensor. Claim 16 Claim 2 The computing device of claim 11, wherein the processor is further configured to: accumulate the groups of products to generate a plurality of accumulated values, an adder tree configured to accumulate the values corresponding to the one of the sparse process tensors. each accumulated value corresponding to one of the sparse process tensors. an adder tree configured to accumulate the values corresponding to the one of the sparse process tensors. Claims 7 and 17 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 10 and 20 of application ‘480 in view of Ahmad et al. (US 20210158168 A1) hereinafter Ahmad. 
Instant Application 17/856,480 Claim 7 Claim 20 The computer-implemented method of claim 6, further comprising: selecting a subset of the plurality of accumulated values as winners of an activation selection; selecting a subset of outputs as values in an output activation tensor. and setting remaining of the plurality of accumulated values as zero. Claim 17 Claim 10 The computing device of claim 16, wherein the processor is further configured to: select a subset of the plurality of accumulated values as winners of an activation selection; an activation circuit configured to select a subset of outputs of the computation core as values in an output activation tensor. and set remaining of the plurality of accumulated values as zero. As per claim 7, application ‘480 teaches the language as shown in the table above. However, application ‘480 does not teach and set remaining of the plurality of accumulated values as zero. Ahmad teaches and set remaining of the plurality of accumulated values as zero (Ahmad [0072]: The convolutional layer may generate sparse layer outputs in the form of a sparse tensor by selecting a subset of nodes in the intermediate tensor, and zeroing the remaining nodes in the intermediate tensor). Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of application ‘480 with setting the values to zero as taught by Ahmad. One of ordinary skill in the art would be motivated to make this combination because it would increase the flexibility of the system as the system could choose which outputs to include in the output tensor. Also, it would ensure that only the values that are wanted would be used for future calculations. As per claim 17, application ‘480 teaches the language as shown in the table above. However, application ‘480 does not teach and set remaining of the plurality of accumulated values as zero. Ahmad teaches and set remaining of the plurality of accumulated values as zero (Ahmad [0072]: The convolutional layer may generate sparse layer outputs in the form of a sparse tensor by selecting a subset of nodes in the intermediate tensor, and zeroing the remaining nodes in the intermediate tensor). Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of application ‘480 with setting the values to zero as taught by Ahmad. One of ordinary skill in the art would be motivated to make this combination because it would increase the flexibility of the system as the system could choose which outputs to include in the output tensor. Also, it would ensure that only the values that are wanted would be used for future calculations. Claims 8 and 18 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 6, 11, and 16 of application ‘480 in view of Rothberg et al. (US 20190237160 A1) hereinafter Rothberg. 
Instant Application 17/856,480 Claim 8 Claim 11 and Claim 16 The computer-implemented method of claim 1, wherein separating the plurality of products into groups comprises flattening the plurality of products in a form of a tensor into a one-dimensional array Claim 11: re-arranging values in one of the two or more tensors or in the product tensor to group the values corresponding to one of the sparse process tensors together and re-arranging the one-dimensional array to the groups of products corresponding to the sparse process tensors. Claim 16: wherein re-arranging the values comprises re-arranging the values in the product tensor. Claim 18 Claim 1 and Claim 6 The computing device of claim 11, wherein separate the plurality of products into groups comprises flatten the plurality of products in a form of a tensor into a one-dimensional array Claim 1: re-arrange values in one of the two or more tensors or in the product tensor to group the values corresponding to one of the sparse process tensors together and re-arrange the one-dimensional array to the groups of products corresponding to the sparse process tensors. Claim 6: wherein the permutation circuit is configured to re-arrange the values in the product tensor As per claim 8, application ‘480 teaches the language as shown in the table above. However, application ‘480 does not teach flattening the plurality of products in a form of a tensor into a one-dimensional array and the one-dimensional array. Rothberg teaches flattening the plurality of products in a form of a tensor into a one-dimensional array (Rothberg [0194]: In some embodiments, the CNN 4000 may be configured to flatten the output 4002C by converting an 8×43 output matrix into a one dimensional vector) the one-dimensional array (Rothberg [0194]: In some embodiments, the CNN 4000 may be configured to flatten the output 4002C by converting an 8×43 output matrix into a one dimensional vector). Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of application ‘480 with flattening the output as taught by Rothberg. One of ordinary skill in the art would be motivated to make this combination because it would increase the efficiency of the system as accessing data in a one dimensional vector is faster than accessing data in a multi-dimensional tensor. As per claim 18, application ‘480 teaches the language as shown in the table above. However, application ‘480 does not teach flatten the plurality of products in a form of a tensor into a one-dimensional array and the one-dimensional array. Rothberg teaches flatten the plurality of products in a form of a tensor into a one-dimensional array (Rothberg [0194]: In some embodiments, the CNN 4000 may be configured to flatten the output 4002C by converting an 8×43 output matrix into a one dimensional vector) the one-dimensional array (Rothberg [0194]: In some embodiments, the CNN 4000 may be configured to flatten the output 4002C by converting an 8×43 output matrix into a one dimensional vector). Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of application ‘480 with flattening the output as taught by Rothberg. 
One of ordinary skill in the art would be motivated to make this combination because it would increase the efficiency of the system as accessing data in a one dimensional vector is faster than accessing data in a multi-dimensional tensor. Claims 9 and 19 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 11 of application ‘480 in view of Wang et al. (WO 2020190772 A1) hereinafter Wang. Instant Application 17/856,480 Claim 9 Claim 11 The computer-implemented method of claim 1, wherein the plurality of sparse process tensors corresponds to a plurality of nodes of a sparse neural network. a plurality of sparse process tensors Claim 19 Claim 1 The computing device of claim I1, wherein the plurality of sparse process tensors corresponds to a plurality of nodes of a sparse neural network. A plurality of sparse process tensors As per claim 9, application ‘480 teaches the language as shown in the table above. However, application ‘480 does not teach corresponds to a plurality of nodes of a sparse neural network. Wang teaches corresponds to a plurality of nodes of a sparse neural network (Wang [0046]: A CNN has multiple layers... The trained CNN includes multiple feature maps; Wang [0058]: Training a CNN 112B may be performed by multiple nodes). Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of application ‘480 with the tensors corresponding to a plurality of nodes as taught by Wang. One of ordinary skill in the art would be motivated to make this combination because it would allow the neural network to be trained in parallel reducing training time as taught by Wang (Wang [0058]). As per claim 19, application ‘480 teaches the language as shown in the table above. However, application ‘480 does not teach corresponds to a plurality of nodes of a sparse neural network. Wang teaches corresponds to a plurality of nodes of a sparse neural network (Wang [0046]: A CNN has multiple layers... The trained CNN includes multiple feature maps; Wang [0058]: Training a CNN 112B may be performed by multiple nodes). Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of application ‘480 with the tensors corresponding to a plurality of nodes as taught by Wang. One of ordinary skill in the art would be motivated to make this combination because it would allow the neural network to be trained in parallel reducing training time as taught by Wang (Wang [0058]). Claims 10 and 20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 11 of application ‘480 in view of Mei et al. (US 20220309124 A1) hereinafter Mei further in view of Wang. Instant Application 17/856,480 Claim 10 Claim 11 The computer-implemented method of claim 1, further comprising: combining a second plurality of sparse process tensors to a second complementary dense process tensor, generating a complementary dense process tensor, generating the complementary dense process tensor comprising combining a plurality of sparse process tensors wherein the plurality of sparse process tensors and the second plurality of sparse process tensors both correspond to nodes in a layer of a sparse neural network. 
generating a complementary dense process tensor, generating the complementary dense process tensor comprising combining a plurality of sparse process tensors Claim 20 Claim 1 The computing device of claim 11, wherein the processor is further configured to: combining a second plurality of sparse process tensors to a second complementary dense process tensor, the complementary dense process tensor generated from combining a plurality of sparse process tensors wherein the plurality of sparse process tensors and the second plurality of sparse process tensors both correspond to nodes in a layer of a sparse neural network. the complementary dense process tensor generated from combining a plurality of sparse process tensors As per claim 10, application ‘480 teaches the language as shown in the table above. However, application ‘480 does not teach [combining a] second [plurality of sparse process tensors to a] second [complementary dense process tensor,] and [wherein the plurality of sparse process tensors and the] second [plurality of sparse process tensors both] correspond to nodes in a layer of a sparse neural network. Mei teaches [combining a] second [plurality of sparse process tensors to a] second [complementary dense process tensor,] (Mei [0231]: An element merge unit 1742 can merge elements in a first set of column vectors into a second set of column vectors; Mei [0239]: FIG. 21 illustrates operations 2100 for a second round of merging. The two input tiles 1922 of Step 1 (residual tile 1906, merged tile 1908) are shown. Accumulation operations along the K dimension continue with Step 2. Two input tiles 2022 of Step 2 include an incoming tile 2102 and the residual tile 1906 Step 1) [wherein the plurality of sparse process tensors and the] second [plurality of sparse process tensors both] (Mei [0231]: An element merge unit 1742 can merge elements in a first set of column vectors into a second set of column vectors; Mei [0239]: FIG. 21 illustrates operations 2100 for a second round of merging. The two input tiles 1922 of Step 1 (residual tile 1906, merged tile 1908) are shown. Accumulation operations along the K dimension continue with Step 2. Two input tiles 2022 of Step 2 include an incoming tile 2102 and the residual tile 1906 Step 1). Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of application ‘480 with combining the second plurality of sparse tensors as taught by Mei. One of ordinary skill in the art would be motivated to make this combination because increased processing efficiency may be realized for those groups or vectors of elements that are made more dense within the column of merged input as taught by Mei (Mei [0237]). Application ‘480 in view of Mei fails to teach correspond to nodes in a layer of a sparse neural network. Wang teaches correspond to nodes in a layer of a sparse neural network (Wang [0046]: A CNN has multiple layers... The trained CNN includes multiple feature maps; Wang [0058]: Training a CNN 112B may be performed by multiple nodes). Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of application ‘480 in view of Mei with the tensors corresponding to the nodes as taught by Wang. 
One of ordinary skill in the art would be motivated to make this combination because it would allow the neural network to be trained in parallel reducing training time as taught by Wang (Wang [0058]). As per claim 20, application ‘480 teaches the language as shown in the table above. However, application ‘480 does not teach [wherein the processor is further configured to: combining a] second [plurality of sparse process tensors to a] second [complementary dense process tensor,] and [wherein the plurality of sparse process tensors and the] second [plurality of sparse process tensors both] correspond to nodes in a layer of a sparse neural network. Mei teaches [wherein the processor is further configured to: combining a] second [plurality of sparse process tensors to a] second [complementary dense process tensor,] (Mei [0231]: An element merge unit 1742 can merge elements in a first set of column vectors into a second set of column vectors; Mei [0239]: FIG. 21 illustrates operations 2100 for a second round of merging. The two input tiles 1922 of Step 1 (residual tile 1906, merged tile 1908) are shown. Accumulation operations along the K dimension continue with Step 2. Two input tiles 2022 of Step 2 include an incoming tile 2102 and the residual tile 1906 Step 1) [wherein the plurality of sparse process tensors and the] second [plurality of sparse process tensors both] (Mei [0231]: An element merge unit 1742 can merge elements in a first set of column vectors into a second set of column vectors; Mei [0239]: FIG. 21 illustrates operations 2100 for a second round of merging. The two input tiles 1922 of Step 1 (residual tile 1906, merged tile 1908) are shown. Accumulation operations along the K dimension continue with Step 2. Two input tiles 2022 of Step 2 include an incoming tile 2102 and the residual tile 1906 Step 1). Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of application ‘480 with combining the second plurality of sparse tensors as taught by Mei. One of ordinary skill in the art would be motivated to make this combination because increased processing efficiency may be realized for those groups or vectors of elements that are made more dense within the column of merged input as taught by Mei (Mei [0237]). Application ‘480 in view of Mei fails to teach correspond to nodes in a layer of a sparse neural network. Wang teaches correspond to nodes in a layer of a sparse neural network (Wang [0046]: A CNN has multiple layers... The trained CNN includes multiple feature maps; Wang [0058]: Training a CNN 112B may be performed by multiple nodes). Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of application ‘480 in view of Mei with the tensors corresponding to the nodes as taught by Wang. One of ordinary skill in the art would be motivated to make this combination because it would allow the neural network to be trained in parallel reducing training time as taught by Wang (Wang [0058]). Claim Rejections - 35 USC § 101 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. 
With regards to claim 1, at step 1, the claim is directed to a method, which is a statutory category of invention. At Step 2A Prong 1, the examiner notes that the claim is directed to mental processes and/or mathematical concepts. The claim language has been reproduced below: A computer-implemented method for operating on tensors, the computer-implemented method comprising: (mental process, evaluation) combining a plurality of sparse process tensors to a complementary dense process tensor, (mathematical calculation) the plurality of sparse process tensors having non-overlapping locations of active values; (mental process, evaluation) performing computations between the complementary dense process tensor and an activation tensor to generate a plurality of products; and (mathematical calculation) separating the plurality of products into groups, each group corresponding to one of the sparse process tensors (mental process, evaluation; mathematical calculation). Each of the non-bolded limitations is a mental process and/or a mathematical calculation. The “the computer-implemented method comprising:” limitation is an evaluation mental process that can be performed by choosing what the method comprises. The “combining a plurality of sparse process tensors to a complementary dense process tensor” limitation is a mathematical calculation that can be performed by combining the sparse tensors by hand using pen and paper. The “the plurality of sparse process tensors having non-overlapping locations of active values” limitation is an evaluation mental process that can be performed by choosing the structure of the tensors. The “performing computations between the complementary dense process tensor and an activation tensor” limitation is a mathematical calculation that can be performed by performing the computations by hand using pen and paper. The “separating the plurality of products into groups, each group corresponding to one” limitation is an evaluation mental process and mathematical calculation that can be performed by separating the plurality of products by hand using pen and paper. At Step 2A Prong 2 and 2B, the claim does not recite any additional elements that integrate the abstract idea into a practical application, nor do they amount to significantly more than the judicial exception.

With regards to claim 11, it recites similar language to claim 1 and is rejected for, at least, the same reasons therein. Herein claim 11 is directed towards the statutory category of a machine, thus also satisfying step 1. Under step 2A prong 1, the “A computing device, comprising” limitation is an evaluation mental process that can be performed by choosing what the computing device comprises. The “memory confirmed to store a model” is an evaluation mental process that can be performed by choosing what the memory is to store. The “a processor coupled to the memory” limitation is an evaluation mental process that can be performed by choosing what the processor is coupled to. The “the processor configured to” limitation is an evaluation mental process that can be performed by choosing what the processor is configured to do.

Under step 2A prong 2, the ‘store’ limitation, as claimed and under BRI, is an additional element that is insignificant extra-solution activity. For example, ‘store’ in the context of this claim encompasses mere data gathering. See MPEP 2106.05(g). The remaining additional elements (the memory, the processor, etc.)
are no more than high level generic computer components that amount to no more than components comprising mere instructions to apply the exception and do not integrate the judicial exception into a practical application. See MPEP 2106.05(f). Under step 2B, the claim recites “memory confirmed to store a model”, and, per MPEP 2106.05(d)(II), the courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity: iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.

With regards to claims 2 and 12, they are directed to an evaluation mental process and mathematical calculation that can be performed by choosing that the sparse tensors are partitioned and partitioning them by hand using pen and paper. Under steps 2A prong 2 and 2B, the claims do not recite any additional elements that integrate the abstract idea into a practical application, nor do they amount to significantly more than the judicial exception.

With regards to claims 3 and 13, they are directed to mental processes and/or mathematical concepts. The “wherein perform the computations between the complementary dense process tensor and the activation tensor comprises” limitation is an evaluation mental process that can be performed by choosing what the computations comprise. The “perform elementwise multiplications between values in the complementary dense process tensor and values in the activation tensor” limitation is a mathematical calculation that can be performed by performing the multiplications by hand using pen and paper. Under steps 2A prong 2 and 2B, the claims do not recite any additional elements that integrate the abstract idea into a practical application, nor do they amount to significantly more than the judicial exception.

With regards to claims 4 and 14, they are directed to mental processes and/or mathematical concepts. The “wherein separating the plurality of products into groups comprises” limitation is an evaluation mental process that can be performed by choosing what the separating comprises. The “a pre-multiplication re-arrangement of the activation tensor” limitation is a mathematical calculation that can be performed by re-arranging the activation tensor by hand using pen and paper. Under steps 2A prong 2 and 2B, the claims do not recite any additional elements that integrate the abstract idea into a practical application, nor do they amount to significantly more than the judicial exception.

With regards to claims 5 and 15, they are directed to mental processes and/or mathematical concepts. The “wherein separating the plurality of products into groups comprises” limitation is an evaluation mental process that can be performed by choosing what the separating comprises. The “a post-multiplication re-arrangement of the plurality of products” limitation is a mathematical calculation that can be performed by re-arranging the plurality of products by hand using pen and paper. Under steps 2A prong 2 and 2B, the claims do not recite any additional elements that integrate the abstract idea into a practical application, nor do they amount to significantly more than the judicial exception.

With regards to claims 6 and 16, they are directed to mental processes and/or mathematical concepts.
The “wherein the processor is further configured to” limitation is an evaluation mental process that can be performed by choosing what the processor is configured to do. The “accumulate the groups of products to generate a plurality of accumulated values” limitation is a mathematical calculation that can be performed by accumulating the products by hand using pen and paper. The “each accumulated value corresponding to one of the sparse process tensors” limitation is an evaluation mental process and mathematical relationship that can be performed by choosing what the accumulated values correspond to. Under step 2A Prong 2, none of the remaining additional elements regarding the generic computer components (i.e. the processor, etc.) are more than high level generic computer components that amount to no more than components comprising mere instructions to apply the exception and do not integrate the judicial exception into a practical application. See MPEP 2106.05(f). Under Step 2B, the claim does not recite any additional elements that integrate the abstract idea into a practical application, nor do they amount to significantly more than the judicial exception. With regards to claims 7 and 17, they are directed to mental processes and/or mathematical concepts. The “wherein the processor is further configured to” limitation is an evaluation mental process that can be performed by choosing what the processor is configured to do. The “select a subset of the plurality of accumulated values as winners of an activation selection” limitation is an evaluation mental process and mathematical relationship that can be performed by selecting the subset of accumulation values by hand using pen and paper. The “set remaining of the plurality of accumulated values as zero” limitation is an evaluation mental process and mathematical relationship that can be performed by setting the remaining values to zero by hand using pen and paper. Under step 2A Prong 2, none of the remaining additional elements regarding the generic computer components (i.e. the processor, etc.) are more than high level generic computer components that amount to no more than components comprising mere instructions to apply the exception and do not integrate the judicial exception into a practical application. See MPEP 2106.05(f). Under Step 2B, the claim does not recite any additional elements that integrate the abstract idea into a practical application, nor do they amount to significantly more than the judicial exception. With regards to claims 8 and 18, they are directed to mental processes and/or mathematical concepts. The “wherein separate the plurality of products into groups comprises” limitation is an evaluation mental process that can be performed by choosing what the separating comprises. The “flatten the plurality of products in a form of a tensor into a one-dimensional array” limitation is a mathematical relationship that can be performed by flattening the tensor by hand using pen and paper. The “re-arrange the one-dimensional array to the groups of products corresponding to the sparse process tensors” limitation is a mathematical relationship that can be performed by re-arranging the array by hand using pen and paper. Under steps 2A prong 2 and 2B, the claims do not recite any additional elements that integrate the abstract idea into a practical application, nor do they amount to significantly more than the judicial exception. 
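For readers less familiar with the claim 8 and 18 limitations discussed above, the following is a minimal NumPy sketch of what flattening a product tensor into a one-dimensional array and re-arranging that array into per-tensor groups could look like. The shapes, the group assignment, and the stable-sort grouping strategy are illustrative assumptions, not the applicant's implementation and not anything required by the claims.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 4x4 product tensor; group_ids records which sparse process
# tensor each product came from (values 0..3). Both are illustrative only.
products = rng.normal(size=(4, 4))
group_ids = rng.integers(0, 4, size=(4, 4))

# Flatten the plurality of products into a one-dimensional array ...
flat = products.reshape(-1)

# ... then re-arrange the array so products from the same group sit together.
order = np.argsort(group_ids.reshape(-1), kind="stable")
regrouped = flat[order]

# Split the re-arranged array back into the groups of products.
counts = np.bincount(group_ids.reshape(-1), minlength=4)
groups = np.split(regrouped, np.cumsum(counts)[:-1])
print([len(g) for g in groups])  # group sizes sum to 16
```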
With regards to claims 9 and 19, they are directed to mental processes and/or mathematical concepts. The “wherein the plurality of sparse process tensors corresponds to a plurality of nodes” limitation is an evaluation mental process that can be performed by choosing what the plurality of tensors corresponds to. Under steps 2A prong 2 and 2B, the claims do not recite any additional elements that integrate the abstract idea into a practical application, nor do they amount to significantly more than the judicial exception. With regards to claims 10 and 20, they are directed to mental processes and/or mathematical concepts. The “wherein the processor is further configured to” limitation is an evaluation mental process that can be performed by choosing what the processor is configured to do. The “combining a second plurality of sparse process tensors to a second complementary dense process tensor” limitation is a mathematical calculation that can be performed by combining the second plurality of sparse process tensors by hand using pen and paper. The “wherein the plurality of sparse process tensors and the second plurality of sparse process tensors both correspond to nodes” limitation is an evaluation mental process that can be performed by choosing what the plurality of sparse process tensors and the second plurality of sparse process tensors correspond to. Under step 2A Prong 2, none of the remaining additional elements regarding the generic computer components (i.e. the processor, etc.) are more than high level generic computer components that amount to no more than components comprising mere instructions to apply the exception and do not integrate the judicial exception into a practical application. See MPEP 2106.05(f). Under Step 2B, the claim does not recite any additional elements that integrate the abstract idea into a practical application, nor do they amount to significantly more than the judicial exception. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-6, 9-16, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Mei (US 20220309124 A1) in view of Wang (WO 2020190772 A1). 
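Before the claim-by-claim mapping that follows, a minimal NumPy sketch of the overall flow recited in claim 1 (and dissected in the §101 analysis above) may be a useful reference point: combining sparse process tensors whose active values occupy non-overlapping locations into a single complementary dense tensor, computing elementwise products against an activation tensor, and separating those products into per-tensor groups. Every name, shape, and sparsity pattern below is a hypothetical assumption chosen for illustration; it is not the applicant's implementation, nor the approach of the cited Mei or Wang references.

```python
import numpy as np

rng = np.random.default_rng(0)
size, n_tensors = 16, 4

# Hypothetical sparse process tensors whose active (non-zero) locations do
# not overlap: tensor i occupies positions i, i+4, i+8, i+12.
masks = [np.zeros(size, dtype=bool) for _ in range(n_tensors)]
for i, m in enumerate(masks):
    m[i::n_tensors] = True
sparse_tensors = [rng.normal(size=size) * m for m in masks]
assert (np.sum(masks, axis=0) <= 1).all()  # non-overlapping locations

# Combine the sparse process tensors into one complementary dense tensor.
dense = np.sum(sparse_tensors, axis=0)

# Perform computations against an activation tensor (elementwise products).
activation = rng.normal(size=size)
products = dense * activation

# Separate the products into groups, one group per sparse process tensor.
groups = [products[m] for m in masks]
print([g.size for g in groups])  # four groups of four products each
```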
With regards to claim 1, Mei teaches A computer-implemented method for operating on tensors, the computer-implemented method comprising: combining a [plurality] of sparse process tensors to a complementary dense process tensor, (Mei [0231]: An element merge unit 1742 can merge elements in a first set of column vectors into a second set of column vectors) the plurality of sparse process tensors having non-overlapping locations of active values; (Mei [0231]: An element merge unit 1742 can merge elements in a first set of column vectors into a second set of column vectors; Mei Fig. 17C: shows the vectors with non-overlapping active values) performing computations between the complementary dense process tensor and an activation tensor to generate a plurality of products; (Mei [0234]: A multiply and add operation is performed and a partial sum is stored to storage. To utilize sparsity in Matrix A, two input tiles 1822 are loaded and merged together to remove zeros in the input) and separating the [plurality of products] into groups, each group corresponding to one of the sparse process tensors (Mei [0231]: The metadata 1732 can be used by a load unit 1737 to determine how to order vector elements of a Matrix B input vector, which is fed to the functional units via an additional feed unit 1738). Mei fails to teach [combining a] plurality [of sparse process tensors] and [separating the] plurality of products [into groups]. However, Wang teaches [combining a] plurality [of sparse process tensors] (Wang [0046]: The trained CNN includes multiple feature maps) [separating the] plurality of products [into groups] (Wang [0010]: The reordering of the output feature map includes swapping columns of the output feature map). Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of Mei with the plurality of tensors and separating the plurality of products as taught by Wang. One of ordinary skill in the art would be motivated to make this combination because it would increase the efficiency of the system as the inference speed can be improved if the matrix sparsity is structured in the same way as taught by Wang (Wang [0086]). With regards to claim 2, Mei in view of Wang teaches all of the limitations of claim 1 above. Mei further teaches wherein a distribution of the active values in at least one of the sparse process tensors are partitioned (Mei [0234]: A multiply and add operation is performed and a partial sum is stored to storage. To utilize sparsity in Matrix A, two input tiles 1822 are loaded and merged together to remove zeros in the input; (the tiles being the partitions)). With regards to claim 3, Mei in view of Wang teaches all of the limitations of claim 1 above. Mei further teaches wherein performing the computations between the complementary dense process tensor and the activation tensor comprises: performing elementwise multiplications between values in the complementary dense process tensor and values in the activation tensor (Mei [0234]: A multiply and add operation is performed and a partial sum is stored to storage. To utilize sparsity in Matrix A, two input tiles 1822 are loaded and merged together to remove zeros in the input). With regards to claim 4, Mei in view of Wang teaches all of the limitations of claim 3 above. 
Mei further teaches wherein separating the plurality of products into groups comprises a pre-multiplication re-arrangement of the activation tensor (Mei [0231]: The metadata 1732 can be used by a load unit 1737 to determine how to order vector elements of a Matrix B input vector, which is fed to the functional units via an additional feed unit 1738). With regards to claim 5, Mei in view of Wang teaches all of the limitations of claim 3 above. Mei further teaches wherein separating the plurality of products into groups comprises (Mei [0231]: The metadata 1732 can be used by a load unit 1737 to determine how to order vector elements of a Matrix B input vector, which is fed to the functional units via an additional feed unit 1738). Mei fails to teach a post-multiplication re-arrangement of the plurality of products. However, Wang teaches a post-multiplication re-arrangement of the plurality of products (Wang [0010]: The reordering of the output feature map includes swapping columns of the output feature map). Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of Mei in view of Wang with re-arranging the products as taught by Wang. One of ordinary skill in the art would be motivated to make this combination because it would increase the efficiency of the system as the inference speed can be improved if the matrix sparsity is structured in the same way as taught by Wang (Wang [0086]). With regards to claim 6, Mei in view of Wang teaches all of the limitations of claim 1 above. Mei further teaches further comprising: accumulating the groups of products to generate a plurality of accumulated values, (Mei [0234]: FIG. 18 illustrates operations 1800 for a matrix multiply in which random sparsity is handled via element merges. A matrix multiply operation M×K×N generally accumulates results at K dimension through multiple iterations on inner-product systolic array) each accumulated value corresponding to one of the sparse process tensors (Mei [0234]: FIG. 18 illustrates operations 1800 for a matrix multiply in which random sparsity is handled via element merges. A matrix multiply operation M×K×N generally accumulates results at K dimension through multiple iterations on inner-product systolic array). With regards to claim 9, Mei in view of Wang teaches all of the limitations of claim 1 above. Mei further teaches wherein the [plurality] of sparse process tensors corresponds [to a plurality of nodes of a sparse neural network] (Mei [0222]: FIG. 17A-17D illustrates a sparse matrix multiply accelerator). Mei fails to teach [wherein the] plurality [of sparse process tensors corresponds] to a plurality of nodes of a sparse neural network. However, Wang teaches [wherein the] plurality [of sparse process tensors corresponds] to a plurality of nodes of a sparse neural network (Wang [0046]: A CNN has multiple layers... The trained CNN includes multiple feature maps; Wang [0058]: Training a CNN 112B may be performed by multiple nodes). Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of Mei in view of Wang with the neural network nodes as taught by Wang. One of ordinary skill in the art would be motivated to make this combination for at least the same reasons as claim 1 above. Also, this would allow the neural network to be trained in parallel reducing training time as taught by Wang (Wang [0058]). 
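The claim 4, 5, and 6 limitations addressed above (pre-multiplication re-arrangement of the activation tensor, post-multiplication re-arrangement of the products, and per-tensor accumulation) can be illustrated with a short sketch. The permutation, the grouping, and the segment-sum accumulation below are assumptions chosen for clarity; the sketch permutes both operands in the pre-multiplication case for simplicity, whereas claim 4 speaks of re-arranging the activation tensor. Its only point is that, for elementwise products, the pre- and post-multiplication orderings yield the same grouped result.

```python
import numpy as np

rng = np.random.default_rng(2)
size, n_groups = 16, 4

dense = rng.normal(size=size)       # complementary dense process tensor (hypothetical)
activation = rng.normal(size=size)  # activation tensor (hypothetical)
owner = rng.integers(0, n_groups, size=size)  # which sparse tensor owns each slot
perm = np.argsort(owner, kind="stable")       # permutation that groups like slots

# Pre-multiplication style: re-arrange before multiplying, so the
# elementwise products come out already grouped.
products_pre = dense[perm] * activation[perm]

# Post-multiplication style: multiply in place, then re-arrange the products.
products_post = (dense * activation)[perm]
assert np.allclose(products_pre, products_post)

# Accumulate each group of products into one value per sparse process tensor.
boundaries = np.cumsum(np.bincount(owner, minlength=n_groups))[:-1]
accumulated = np.array([seg.sum() for seg in np.split(products_post, boundaries)])
print(accumulated)  # one accumulated value per sparse process tensor
```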
With regards to claim 10, Mei in view of Wang teaches all of the limitations of claim 1 above. Mei further teaches further comprising: combining a second [plurality] of sparse process tensors to a second complementary dense process tensor, (Mei [0231]: An element merge unit 1742 can merge elements in a first set of column vectors into a second set of column vectors; Mei [0239]: FIG. 21 illustrates operations 2100 for a second round of merging. The two input tiles 1922 of Step 1 (residual tile 1906, merged tile 1908) are shown. Accumulation operations along the K dimension continue with Step 2. Two input tiles 2022 of Step 2 include an incoming tile 2102 and the residual tile 1906 Step 1) wherein the [plurality] of sparse process tensors and the second [plurality] of sparse process tensors [both correspond to nodes in a layer of a sparse neural network] (Mei [0231]: An element merge unit 1742 can merge elements in a first set of column vectors into a second set of column vectors; Mei [0239]: FIG. 21 illustrates operations 2100 for a second round of merging. The two input tiles 1922 of Step 1 (residual tile 1906, merged tile 1908) are shown. Accumulation operations along the K dimension continue with Step 2. Two input tiles 2022 of Step 2 include an incoming tile 2102 and the residual tile 1906 Step 1). Mei fails to teach [combining a second] plurality [of sparse process tensors] and [wherein the] plurality [of sparse process tensors and the second] plurality [of sparse process tensors] both correspond to nodes in a layer of a sparse neural network. However, Wang teaches [combining a second] plurality [of sparse process tensors] (Wang [0046]: The trained CNN includes multiple feature maps) [wherein the] plurality [of sparse process tensors and the second] plurality [of sparse process tensors] both correspond to nodes in a layer of a sparse neural network (Wang [0046]: A CNN has multiple layers... The trained CNN includes multiple feature maps; Wang [0058]: Training a CNN 112B may be performed by multiple nodes). Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of Mei in view of Wang with the plurality of tensors and the neural network nodes as taught by Wang. One of ordinary skill in the art would be motivated to make this combination for at least the same reasons as claim 1 above. Also, this would allow the neural network to be trained in parallel reducing training time as taught by Wang (Wang [0058]).

With regards to claim 11, Mei teaches A computing device, comprising: memory confirmed to store a model; (Mei [0231]: As shown in FIG. 17B, in one embodiment the sparse matrix multiply accelerator 1700 includes or couples with memory 1720 that can store matrix elements) and a processor coupled to the memory, (Mei [0231]: As shown in FIG. 17B, in one embodiment the sparse matrix multiply accelerator 1700 includes or couples with memory 1720 that can store matrix elements) the processor configured to: combine a [plurality] of sparse process tensors of the model to a complementary dense process tensor, (Mei [0231]: An element merge unit 1742 can merge elements in a first set of column vectors into a second set of column vectors) the plurality of sparse process tensors having non-overlapping locations of active values; (Mei [0231]: An element merge unit 1742 can merge elements in a first set of column vectors into a second set of column vectors; Mei Fig.
17C: shows the vectors with non-overlapping active values) perform computations between the complementary dense process tensor and an activation tensor to generate a plurality of products; (Mei [0234]: A multiply and add operation is performed and a partial sum is stored to storage. To utilize sparsity in Matrix A, two input tiles 1822 are loaded and merged together to remove zeros in the input) and separate the [plurality of products] into groups, each group corresponding to one of the sparse process tensors (Mei [0231]: The metadata 1732 can be used by a load unit 1737 to determine how to order vector elements of a Matrix B input vector, which is fed to the functional units via an additional feed unit 1738). Mei fails to teach [combine a] plurality [of sparse process tensors] and [separate the] plurality of products [into groups]. However, Wang teaches [combine a] plurality [of sparse process tensors] (Wang [0046]: The trained CNN includes multiple feature maps) [separate the] plurality of products [into groups] (Wang [0010]: The reordering of the output feature map includes swapping columns of the output feature map). Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of Mei with the plurality of tensors and separating the plurality of products as taught by Wang. One of ordinary skill in the art would be motivated to make this combination because it would increase the efficiency of the system as the inference speed can be improved if the matrix sparsity is structured in the same way as taught by Wang (Wang [0086]). With regards to claim 12, Mei in view of Wang teaches all of the limitations of claim 11 above. Mei further teaches wherein a distribution of the active values in at least one of the sparse process tensors are partitioned (Mei [0234]: A multiply and add operation is performed and a partial sum is stored to storage. To utilize sparsity in Matrix A, two input tiles 1822 are loaded and merged together to remove zeros in the input). With regards to claim 13, Mei in view of Wang teaches all of the limitations of claim 11 above. Mei further teaches wherein perform the computations between the complementary dense process tensor and the activation tensor comprises: perform elementwise multiplications between values in the complementary dense process tensor and values in the activation tensor (Mei [0234]: A multiply and add operation is performed and a partial sum is stored to storage. To utilize sparsity in Matrix A, two input tiles 1822 are loaded and merged together to remove zeros in the input). With regards to claim 14, Mei in view of Wang teaches all of the limitations of claim 13 above. Mei further teaches wherein separate the plurality of products into groups comprises a pre-multiplication re-arrangement of the activation tensor (Mei [0231]: The metadata 1732 can be used by a load unit 1737 to determine how to order vector elements of a Matrix B input vector, which is fed to the functional units via an additional feed unit 1738). With regards to claim 15, Mei in view of Wang teaches all of the limitations of claim 13 above. Mei further teaches wherein separate the plurality of products into groups comprises (Mei [0231]: The metadata 1732 can be used by a load unit 1737 to determine how to order vector elements of a Matrix B input vector, which is fed to the functional units via an additional feed unit 1738). 
Mei fails to teach a post-multiplication re-arrangement of the plurality of products. However, Wang teaches a post-multiplication re-arrangement of the plurality of products (Wang [0010]: The reordering of the output feature map includes swapping columns of the output feature map). Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of Mei in view of Wang with re-arranging the products as taught by Wang. One of ordinary skill in the art would be motivated to make this combination because it would increase the efficiency of the system as the inference speed can be improved if the matrix sparsity is structured in the same way as taught by Wang (Wang [0086]). With regards to claim 16, Mei in view of Wang teaches all of the limitations of claim 11 above. Mei further teaches wherein the processor is further configured to: accumulate the groups of products to generate a plurality of accumulated values, (Mei [0234]: FIG. 18 illustrates operations 1800 for a matrix multiply in which random sparsity is handled via element merges. A matrix multiply operation M×K×N generally accumulates results at K dimension through multiple iterations on inner-product systolic array) each accumulated value corresponding to one of the sparse process tensors (Mei [0234]: FIG. 18 illustrates operations 1800 for a matrix multiply in which random sparsity is handled via element merges. A matrix multiply operation M×K×N generally accumulates results at K dimension through multiple iterations on inner-product systolic array). With regards to claim 19, Mei in view of Wang teaches all of the limitations of claim 11 above. Mei further teaches wherein the [plurality] of sparse process tensors corresponds [to a plurality of nodes of a sparse neural network] (Mei [0222]: FIG. 17A-17D illustrates a sparse matrix multiply accelerator). Mei fails to teach [wherein the] plurality [of sparse process tensors corresponds] to a plurality of nodes of a sparse neural network. However, Wang teaches [wherein the] plurality [of sparse process tensors corresponds] to a plurality of nodes of a sparse neural network (Wang [0046]: A CNN has multiple layers... The trained CNN includes multiple feature maps; Wang [0058]: Training a CNN 112B may be performed by multiple nodes). Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of Mei in view of Wang with the neural network nodes as taught by Wang. One of ordinary skill in the art would be motivated to make this combination for at least the same reasons as claim 1 above. Also, this would allow the neural network to be trained in parallel reducing training time as taught by Wang (Wang [0058]). With regards to claim 20, Mei in view of Wang teaches all of the limitations of claim 11 above. Mei further teaches wherein the processor is further configured to: combining a second [plurality] of sparse process tensors to a second complementary dense process tensor, (Mei [0231]: An element merge unit 1742 can merge elements in a first set of column vectors into a second set of column vectors; Mei [0239]: FIG. 21 illustrates operations 2100 for a second round of merging. The two input tiles 1922 of Step 1 (residual tile 1906, merged tile 1908) are shown. Accumulation operations along the K dimension continue with Step 2. 
Two input tiles 2022 of Step 2 include an incoming tile 2102 and the residual tile 1906 Step 1) wherein the [plurality] of sparse process tensors and the second [plurality] of sparse process tensors [both correspond to nodes in a layer of a sparse neural network] (Mei [0231]: An element merge unit 1742 can merge elements in a first set of column vectors into a second set of column vectors; Mei [0239]: FIG. 21 illustrates operations 2100 for a second round of merging. The two input tiles 1922 of Step 1 (residual tile 1906, merged tile 1908) are shown. Accumulation operations along the K dimension continue with Step 2. Two input tiles 2022 of Step 2 include an incoming tile 2102 and the residual tile 1906 Step 1). Mei fails to teach [combining a second] plurality [of sparse process tensors] and [wherein the] plurality [of sparse process tensors and the second] plurality [of sparse process tensors] both correspond to nodes in a layer of a sparse neural network. However, Wang teaches [combining a second] plurality [of sparse process tensors] (Wang [0046]: The trained CNN includes multiple feature maps) [wherein the] plurality [of sparse process tensors and the second] plurality [of sparse process tensors] both correspond to nodes in a layer of a sparse neural network (Wang [0046]: A CNN has multiple layers... The trained CNN includes multiple feature maps; Wang [0058]: Training a CNN 112B may be performed by multiple nodes). Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of Mei in view of Wang with the plurality of tensors and the neural network nodes as taught by Wang. One of ordinary skill in the art would be motivated to make this combination for at least the same reasons as claim 1 above. Also, this would allow the neural network to be trained in parallel, reducing training time as taught by Wang (Wang [0058]).
Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Mei in view of Wang further in view of Ahmad (US 20210158168 A1).
With regards to claim 7, Mei in view of Wang teaches all of the limitations of claim 6 above. Mei fails to teach further comprising: selecting a subset of the plurality of accumulated values as winners of an activation selection; and setting remaining of the plurality of accumulated values as zero. However, Ahmad teaches further comprising: selecting a subset of the plurality of accumulated values as winners of an activation selection; (Ahmad [0072]: The convolutional layer may generate sparse layer outputs in the form of a sparse tensor by selecting a subset of nodes in the intermediate tensor, and zeroing the remaining nodes in the intermediate tensor) and setting remaining of the plurality of accumulated values as zero (Ahmad [0072]: The convolutional layer may generate sparse layer outputs in the form of a sparse tensor by selecting a subset of nodes in the intermediate tensor, and zeroing the remaining nodes in the intermediate tensor). Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of Mei in view of Wang with selecting a subset of values and setting the remaining values to zero as taught by Ahmad. One of ordinary skill in the art would be motivated to make this combination because it would improve the flexibility of the system as the system could choose which outputs to include in the output tensor.
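To make the mechanics the rejection walks through for claims 11-17 concrete, the following is a minimal NumPy sketch of a complementary-sparsity flow: several sparse weight tensors with non-overlapping active positions are folded into one dense tensor, multiplied elementwise with an activation tensor once, and the products are regrouped and accumulated per original sparse tensor, with a k-winners selection at the end. All variable names, shapes, and the top-k rule are illustrative assumptions; this is a sketch of the general technique, not the applicant's claimed implementation or code from any cited reference.

```python
# Illustrative sketch of a complementary-sparsity multiply (assumed NumPy
# implementation; names, shapes, and the top-k rule are hypothetical).
import numpy as np

rng = np.random.default_rng(0)
size, n_tensors = 12, 3                 # 12 weight positions, 3 sparse tensors

# Give each position to exactly one sparse tensor: "complementary" sparsity
# means the active (non-zero) positions of the tensors never overlap.
owner = rng.permutation(np.repeat(np.arange(n_tensors), size // n_tensors))
sparse_tensors = [np.where(owner == t, rng.standard_normal(size), 0.0)
                  for t in range(n_tensors)]

# Combine: because the active positions are disjoint, a plain sum yields a
# single dense tensor holding every non-zero weight.
dense = np.sum(sparse_tensors, axis=0)

# One elementwise multiply against the activation tensor produces all products.
activation = rng.standard_normal(size)
products = dense * activation

# Separate the products into groups (one per original sparse tensor) and
# accumulate each group into a single value.
accumulated = np.array([products[owner == t].sum() for t in range(n_tensors)])

# k-winners activation selection: keep the k largest accumulated values and
# set the remaining values to zero.
k = 2
winners = np.argpartition(accumulated, -k)[-k:]
selected = np.zeros_like(accumulated)
selected[winners] = accumulated[winners]

# The grouped sums match multiplying each sparse tensor directly, so the
# single dense multiply is lossless.
assert np.allclose(accumulated, [st @ activation for st in sparse_tensors])
```

The closing assertion shows why the single dense multiply loses nothing: because no two sparse tensors share an active position, every product can be attributed to exactly one of them.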
With regards to claim 17, Mei in view of Wang teaches all of the limitations of claim 16 above. Mei fails to teach wherein the processor is further configured to: select a subset of the plurality of accumulated values as winners of an activation selection; and set remaining of the plurality of accumulated values as zero. However, Ahmad teaches wherein the processor is further configured to: select a subset of the plurality of accumulated values as winners of an activation selection; (Ahmad [0072]: The convolutional layer may generate sparse layer outputs in the form of a sparse tensor by selecting a subset of nodes in the intermediate tensor, and zeroing the remaining nodes in the intermediate tensor) and set remaining of the plurality of accumulated values as zero (Ahmad [0072]: The convolutional layer may generate sparse layer outputs in the form of a sparse tensor by selecting a subset of nodes in the intermediate tensor, and zeroing the remaining nodes in the intermediate tensor). Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of Mei in view of Wang with selecting a subset of values and setting the remaining values to zero as taught by Ahmad. One of ordinary skill in the art would be motivated to make this combination because it would improve the flexibility of the system as the system could choose which outputs to include in the output tensor.
Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Mei in view of Wang further in view of Rothberg (US 20190237160 A1).
With regards to claim 8, Mei in view of Wang teaches all of the limitations of claim 1 above. Mei further teaches and re-arranging [the one-dimensional array to the groups of products] corresponding to the sparse process tensors (Mei [0231]: The metadata 1732 can be used by a load unit 1737 to determine how to order vector elements of a Matrix B input vector, which is fed to the functional units via an additional feed unit 1738). Mei fails to teach [and re-arranging] the one-dimensional array to the groups of products [corresponding to the sparse process tensors]. However, Wang teaches [and re-arranging the one-dimensional array] to the groups of products [corresponding to the sparse process tensors] (Wang [0010]: The reordering of the output feature map includes swapping columns of the output feature map). Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of Mei in view of Wang with re-arranging the products as taught by Wang. One of ordinary skill in the art would be motivated to make this combination because it would increase the efficiency of the system as the inference speed can be improved if the matrix sparsity is structured in the same way as taught by Wang (Wang [0086]). Mei in view of Wang fails to teach wherein separating the plurality of products into groups comprises flattening the plurality of products in a form of a tensor into a one-dimensional array [and re-arranging] the one-dimensional array [to the groups of products corresponding to the sparse process tensors].
However, Rothberg teaches wherein separating the plurality of products into groups comprises flattening the plurality of products in a form of a tensor into a one-dimensional array (Rothberg [0194]: In some embodiments, the CNN 4000 may be configured to flatten the output 4002C by converting an 8×43 output matrix into a one dimensional vector) [and re-arranging] the one-dimensional array [to the groups of products corresponding to the sparse process tensors] (Rothberg [0194]: In some embodiments, the CNN 4000 may be configured to flatten the output 4002C by converting an 8×43 output matrix into a one dimensional vector). Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of Mei in view of Wang with flattening the output as taught by Rothberg. One of ordinary skill in the art would be motivated to make this combination because it would increase the efficiency of the system as accessing data in a one dimensional vector is faster than accessing data in a multi-dimensional tensor.
With regards to claim 18, Mei in view of Wang teaches all of the limitations of claim 11 above. Mei further teaches and re-arrange [the one-dimensional array to the groups of products] corresponding to the sparse process tensors (Mei [0231]: The metadata 1732 can be used by a load unit 1737 to determine how to order vector elements of a Matrix B input vector, which is fed to the functional units via an additional feed unit 1738). Mei fails to teach [and re-arrange] the one-dimensional array to the groups of products [corresponding to the sparse process tensors]. However, Wang teaches [and re-arrange the one-dimensional array] to the groups of products [corresponding to the sparse process tensors] (Wang [0010]: The reordering of the output feature map includes swapping columns of the output feature map). Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of Mei in view of Wang with re-arranging the products as taught by Wang. One of ordinary skill in the art would be motivated to make this combination because it would increase the efficiency of the system as the inference speed can be improved if the matrix sparsity is structured in the same way as taught by Wang (Wang [0086]). Mei in view of Wang fails to teach wherein separate the plurality of products into groups comprises flatten the plurality of products in a form of a tensor into a one-dimensional array [and re-arranging] the one-dimensional array [to the groups of products corresponding to the sparse process tensors]. However, Rothberg teaches wherein separate the plurality of products into groups comprises flatten the plurality of products in a form of a tensor into a one-dimensional array (Rothberg [0194]: In some embodiments, the CNN 4000 may be configured to flatten the output 4002C by converting an 8×43 output matrix into a one dimensional vector) [and re-arrange] the one-dimensional array [to the groups of products corresponding to the sparse process tensors] (Rothberg [0194]: In some embodiments, the CNN 4000 may be configured to flatten the output 4002C by converting an 8×43 output matrix into a one dimensional vector).
Therefore, it would have been obvious before the effective filing date of the claimed invention for one of ordinary skill in the art to combine the teachings of Mei in view of Wang with flattening the output as taught by Rothberg. One of ordinary skill in the art would be motivated to make this combination because it would increase the efficiency of the system as accessing data in a one dimensional vector is faster than accessing data in a multi-dimensional tensor.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jakob O Gudas whose telephone number is (571)272-0695. The examiner can normally be reached Monday-Thursday 7:30AM-5:00PM and Friday 7:30AM-4:00PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James Trujillo, can be reached at (571) 272-3677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.O.G./
Examiner, Art Unit 2151
/James Trujillo/
Supervisory Patent Examiner, Art Unit 2151
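As a further illustration, the flatten-and-regroup step discussed for claims 8 and 18 above (flattening the product tensor into a one-dimensional array, then re-arranging it into groups corresponding to the sparse process tensors) can be sketched as follows. The shapes, the group count, and the random group assignment are assumptions made for illustration only; in a complementary-sparsity scheme the grouping would be fixed by how the sparse tensors were combined, and this sketch is not the claimed method or any cited reference's implementation.

```python
# Illustrative sketch of the flatten-and-regroup step (assumed NumPy
# implementation; shapes, group count, and the group map are hypothetical).
import numpy as np

rng = np.random.default_rng(1)
products = rng.standard_normal((4, 6))   # products still in tensor form
n_groups = 3                             # one group per sparse process tensor

# Flatten the product tensor into a one-dimensional array.
flat = products.reshape(-1)

# Hypothetical index map recording which group each flattened product belongs
# to; in practice it is determined by how the sparse tensors were combined.
group_of = rng.integers(0, n_groups, size=flat.size)

# Re-arrange the one-dimensional array into contiguous per-group blocks.
order = np.argsort(group_of, kind="stable")
split_points = np.cumsum(np.bincount(group_of, minlength=n_groups))[:-1]
regrouped = np.split(flat[order], split_points)

# Each block can now be accumulated independently of the others.
group_sums = np.array([block.sum() for block in regrouped])
assert np.isclose(group_sums.sum(), flat.sum())   # regrouping is a permutation
```

Sorting by a group index here plays the role of a post-multiplication re-arrangement of the products; a pre-multiplication variant would instead permute the activation tensor before the multiply.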

Prosecution Timeline

Jul 01, 2022: Application Filed
Mar 03, 2026: Non-Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602200: ANALOG MULTIPLY-ACCUMULATE UNIT FOR MULTIBIT IN-MEMORY CELL COMPUTING
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12566586: HIGH-SPEED QUANTUM RANDOM NUMBER GENERATOR BASED ON VACUUM STATE FLUCTUATION TECHNOLOGY
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 2 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 44%
Grant Probability With Interview: 99% (+71.1%)
Median Time to Grant: 4y 2m
PTA Risk: Low
Based on 9 resolved cases by this examiner. Grant probability derived from career allow rate.
