Prosecution Insights
Last updated: April 19, 2026
Application No. 18/325,790

PERFORMING DYNAMIC SPARSE COMPUTATION ON DENSE COMPUTATION-EFFICIENT COMPUTING DEVICES

Non-Final OA: §101, §102, §103
Filed: May 30, 2023
Examiner: GALVIN-SIEBENALER, PAUL MICHAEL
Art Unit: 2147
Tech Center: 2100 — Computer Architecture & Software
Assignee: Microsoft Technology Licensing, LLC
OA Round: 1 (Non-Final)
Grant Probability: 25% (At Risk)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 25% (1 granted / 4 resolved; -30.0% vs TC avg)
Interview Lift: -25.0% (minimal; based on resolved cases with interview)
Typical Timeline: 3y 3m avg prosecution; 39 applications currently pending
Career History: 43 total applications across all art units

Statute-Specific Performance

§101: 29.8% (-10.2% vs TC avg)
§103: 36.8% (-3.2% vs TC avg)
§102: 19.0% (-21.0% vs TC avg)
§112: 14.5% (-25.5% vs TC avg)

Tech Center averages are estimates; based on career data from 4 resolved cases.

Office Action

Rejections: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the original application filed on May 30, 2023.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims follows the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (“2019 PEG”).

Claim 1

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? Claim 1 recites “A system for processing data in a Neural Network (NN) model comprising: one or more processors; a non-transitory computer-readable medium storing a program executable by the one or more processors, the program comprising sets of instructions for:”; it is therefore directed to the statutory category of a machine.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites, inter alia:

“identifying an operator in the NN model, the operator configured to perform a computation on at least one input tensor to generate an output tensor, the at least one input tensor storing input data along a plurality of dimensions in a sparse format;” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate a model and identify different layer-wise operations. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).

“performing, during runtime, a permutation to rearrange the input data from the sparse format to a dense format, wherein the dense format is shorter than the sparse format along a dimension of the plurality of dimensions;” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mathematical concept of utilizing a mathematical formula to perform calculations. A human is able to permute input data using mathematical equations and/or algorithms. The limitation recites a mathematical operation and is therefore ineligible.

“performing, during runtime, a computation associated with the operator on the input data in the dense format to generate the output tensor, the output tensor storing output data along the plurality of dimensions in a dense format; and” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate data and produce an inference. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).

“performing, during runtime, a reverse permutation to rearrange the output data in the output tensor from the dense format to a specified output data format;” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mathematical concept of utilizing a mathematical formula to perform calculations. A human is able to permute output data using mathematical equations and/or algorithms. The limitation recites a mathematical operation and is therefore ineligible.

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? The claim recites the additional element “wherein the instructions to perform the permutation, the computation, and the reverse permutation are defined as primitives in a sparse kernel,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element “wherein the instructions to perform the permutation, the computation, and the reverse permutation are defined as primitives in a sparse kernel” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible.
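The permute-compute-unpermute pipeline recited in claim 1 can be pictured as a gather, a dense operation, and a scatter. A minimal NumPy sketch of that reading (all names are hypothetical; this illustrates the claimed flow, not code from the application):

import numpy as np

rng = np.random.default_rng(0)

# Input tensor with whole-row sparsity: most rows are all-zero.
x = np.zeros((8, 4))
x[[1, 4, 6]] = rng.standard_normal((3, 4))
w = rng.standard_normal((4, 4))            # dense operator weights

# Permutation: gather the non-zero rows, so the "dense format" is shorter
# than the "sparse format" along the row dimension (3 rows instead of 8).
perm = np.flatnonzero(np.any(x != 0, axis=1))
x_dense = x[perm]                           # shape (3, 4)

# Dense computation associated with the operator (here, a matmul).
y_dense = x_dense @ w

# Reverse permutation: scatter results back into the output format.
y = np.zeros((8, 4))
y[perm] = y_dense

assert np.allclose(y, x @ w)                # matches the all-dense result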
Claim 2

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A machine, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites the abstract ideas of the preceding claims from which it depends.

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? The claim recites the additional element “wherein the permutation to rearrange the input data from the sparse format to the dense format is performed when the input data is being loaded from general memory to stored memory,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element “wherein the permutation to rearrange the input data from the sparse format to the dense format is performed when the input data is being loaded from general memory to stored memory” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible.

Claim 3

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A machine, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites the abstract ideas of the preceding claims from which it depends.

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? The claim recites the additional element “wherein the reverse permutation to rearrange the output data from the dense format to output data in the sparse format is performed when the output data is being stored from shared memory to general memory,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element “wherein the reverse permutation to rearrange the output data from the dense format to output data in the sparse format is performed when the output data is being stored from shared memory to general memory” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible.

Claim 4

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A machine, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites the abstract ideas of the preceding claims from which it depends.

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? The claim recites the additional element “generating a sparsity index configured to identify the location of non-zero values within the input data in the input tensor, the sparsity index based on a sparse tile,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element “generating a sparsity index configured to identify the location of non-zero values within the input data in the input tensor, the sparsity index based on a sparse tile” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible.
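Claim 4's "sparsity index" names a bookkeeping structure: a record of where the non-zero values sit, at the granularity of a sparse tile. A hypothetical sketch (the 2x2 tile size and the list-of-coordinates layout are illustrative assumptions, not the application's actual format):

import numpy as np

def build_sparsity_index(x, tile):
    """Return tile-grid coordinates of tiles holding any non-zero value."""
    th, tw = tile
    h, w = x.shape
    index = []
    for i in range(0, h, th):
        for j in range(0, w, tw):
            if np.any(x[i:i + th, j:j + tw] != 0):
                index.append((i // th, j // tw))   # tile-grid coordinate
    return index

x = np.zeros((8, 8))
x[0:2, 4:6] = 1.0                       # one non-zero 2x2 tile
print(build_sparsity_index(x, (2, 2)))  # -> [(0, 2)]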
Claim 5

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A machine, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites, inter alia:

“analyzing the sparsity of the operator;” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate data. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).

“selecting the sparse tile from a plurality of pre-constructed sparse tiles based on the sparsity; and” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate data and make a judgment or inference from that evaluation. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? The claim recites the additional element “generating the sparse kernel based on the selected sparse tile,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element “generating the sparse kernel based on the selected sparse tile” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible.

Claim 6

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A machine, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites the abstract ideas of the preceding claims from which it depends.

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? The claim recites the additional element “wherein the analyzing, the selecting, and the generating occur prior to runtime,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element “wherein the analyzing, the selecting, and the generating occur prior to runtime” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible.

Claim 7

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A machine, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites the abstract ideas of the preceding claims from which it depends.

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? The claim recites the additional element “wherein the sparse kernel includes a data tile describing the shape of data in the input tensor and a computation tile describing the shape of the dense format,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element “wherein the sparse kernel includes a data tile describing the shape of data in the input tensor and a computation tile describing the shape of the dense format” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible.

Claim 8

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A machine, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites the abstract ideas of the preceding claims from which it depends.

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? The claim recites the additional element “where the sparse tile identifies the dimension of the plurality of dimensions as being permutation invariant,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element “where the sparse tile identifies the dimension of the plurality of dimensions as being permutation invariant” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible.
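Claims 5-7 describe an ahead-of-time flow: measure the operator's sparsity, select a sparse tile from pre-constructed candidates, and generate a kernel whose configuration pairs a data tile (shape of the stored data) with a computation tile (shape of the packed dense compute). A hypothetical sketch; the candidate table and threshold rule below are invented for illustration and are not the application's actual selection policy:

import numpy as np

# Pre-constructed sparse tiles: (min_sparsity, data_tile, computation_tile)
CANDIDATE_TILES = [
    (0.90, (32, 32), (4, 32)),   # very sparse: pack 32 rows down to 4
    (0.50, (32, 32), (16, 32)),  # moderately sparse
    (0.00, (32, 32), (32, 32)),  # dense fallback: no packing
]

def select_sparse_tile(weights):
    sparsity = float(np.mean(weights == 0))        # analyze the sparsity
    for min_s, data_tile, comp_tile in CANDIDATE_TILES:
        if sparsity >= min_s:                      # pick the matching tile
            return {"sparsity": sparsity,
                    "data_tile": data_tile,
                    "computation_tile": comp_tile}

w = np.zeros((64, 64))
w[::8] = 1.0                         # 87.5% zeros
print(select_sparse_tile(w))         # picks the 0.50-threshold tile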
Claim 9

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? Claim 9 recites “A method for processing data in a Neural Network (NN) model comprising:”; it is therefore directed to the statutory category of a process.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites, inter alia:

“identifying an operator in the NN model, the operator configured to perform a computation on at least one input tensor to generate an output tensor, the at least one input tensor storing input data along a plurality of dimensions in a sparse format;” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate a model and identify different layer-wise operations. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).

“performing, during runtime, a permutation to rearrange the input data from the sparse format to a dense format, wherein the dense format is shorter than the sparse format along a dimension of the plurality of dimensions;” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mathematical concept of utilizing a mathematical formula to perform calculations. A human is able to permute input data using mathematical equations and/or algorithms. The limitation recites a mathematical operation and is therefore ineligible.

“performing, during runtime, a computation associated with the operator on the input data in the dense format to generate the output tensor, the output tensor storing output data along the plurality of dimensions in a dense format; and” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate data and produce an inference. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).

“performing, during runtime, a reverse permutation to rearrange the output data in the output tensor from the dense format to a specified output data format;” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mathematical concept of utilizing a mathematical formula to perform calculations. A human is able to permute output data using mathematical equations and/or algorithms. The limitation recites a mathematical operation and is therefore ineligible.

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? The claim recites the additional element “wherein the instructions to perform the permutation, the computation, and the reverse permutation are defined as primitives in a sparse kernel,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element “wherein the instructions to perform the permutation, the computation, and the reverse permutation are defined as primitives in a sparse kernel” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible.

Claim 10

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A process, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites the abstract ideas of the preceding claims from which it depends.

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? The claim recites the additional element “wherein the permutation to rearrange the input data from the sparse format to the dense format is performed when the input data is being loaded from general memory to stored memory,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element “wherein the permutation to rearrange the input data from the sparse format to the dense format is performed when the input data is being loaded from general memory to stored memory” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible.

Claim 11

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A process, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites the abstract ideas of the preceding claims from which it depends.

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? The claim recites the additional element “wherein the reverse permutation to rearrange the output data from the dense format to output data in the sparse format is performed when the output data is being stored from shared memory to general memory,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element “wherein the reverse permutation to rearrange the output data from the dense format to output data in the sparse format is performed when the output data is being stored from shared memory to general memory” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible.

Claim 12

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A process, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites the abstract ideas of the preceding claims from which it depends.

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? The claim recites the additional element “generating a sparsity index configured to identify the location of non-zero values within the input data in the input tensor, the sparsity index based on a sparse tile,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element “generating a sparsity index configured to identify the location of non-zero values within the input data in the input tensor, the sparsity index based on a sparse tile” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible.

Claim 13

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A process, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites, inter alia:

“analyzing the sparsity of the operator;” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate data. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).

“selecting the sparse tile from a plurality of pre-constructed sparse tiles based on the sparsity; and” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate data and make a judgment or inference from that evaluation. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? The claim recites the additional element “generating the sparse kernel based on the selected sparse tile,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element “generating the sparse kernel based on the selected sparse tile” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible.

Claim 14

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A process, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites the abstract ideas of the preceding claims from which it depends.

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? The claim recites the additional element “wherein the analyzing, the selecting, and the generating occur prior to runtime,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element “wherein the analyzing, the selecting, and the generating occur prior to runtime” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible.

Claim 15

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A process, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites the abstract ideas of the preceding claims from which it depends.

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? The claim recites the additional element “wherein the sparse kernel includes a data tile describing the shape of data in the input tensor and a computation tile describing the shape of the dense format,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element “wherein the sparse kernel includes a data tile describing the shape of data in the input tensor and a computation tile describing the shape of the dense format” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible.
Claim 16

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A process, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites the abstract ideas of the preceding claims from which it depends.

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? The claim recites the additional element “where the sparse tile identifies the dimension of the plurality of dimensions as being permutation invariant,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element “where the sparse tile identifies the dimension of the plurality of dimensions as being permutation invariant” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible.

Claim 17

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? Claim 17 recites “A non-transitory computer-readable medium storing a program executable by one or more processors, the program comprising sets of instructions for:”; it is therefore directed to the statutory category of a machine.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites, inter alia:

“identifying an operator in the NN model, the operator configured to perform a computation on at least one input tensor to generate an output tensor, the at least one input tensor storing input data along a plurality of dimensions in a sparse format;” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate a model and identify different layer-wise operations. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).

“performing, during runtime, a permutation to rearrange the input data from the sparse format to a dense format, wherein the dense format is shorter than the sparse format along a dimension of the plurality of dimensions;” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mathematical concept of utilizing a mathematical formula to perform calculations. A human is able to permute input data using mathematical equations and/or algorithms. The limitation recites a mathematical operation and is therefore ineligible.

“performing, during runtime, a computation associated with the operator on the input data in the dense format to generate the output tensor, the output tensor storing output data along the plurality of dimensions in a dense format; and” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate data and produce an inference. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).

“performing, during runtime, a reverse permutation to rearrange the output data in the output tensor from the dense format to a specified output data format;” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mathematical concept of utilizing a mathematical formula to perform calculations. A human is able to permute output data using mathematical equations and/or algorithms. The limitation recites a mathematical operation and is therefore ineligible.

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? The claim recites the additional element “wherein the instructions to perform the permutation, the computation, and the reverse permutation are defined as primitives in a sparse kernel,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element “wherein the instructions to perform the permutation, the computation, and the reverse permutation are defined as primitives in a sparse kernel” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible.
Claim 18

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A process, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites the abstract ideas of the preceding claims from which it depends.

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? The claim recites the additional element “generating a sparsity index configured to identify the location of non-zero values within the input data in the input tensor, the sparsity index based on a sparse tile,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element “generating a sparsity index configured to identify the location of non-zero values within the input data in the input tensor, the sparsity index based on a sparse tile” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible.

Claim 19

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A machine, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites, inter alia:

“analyzing the sparsity of the operator;” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate data. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).

“selecting the sparse tile from a plurality of pre-constructed sparse tiles based on the sparsity; and” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate data and make a judgment or inference from that evaluation. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? The claim recites the additional element “generating the sparse kernel based on the selected sparse tile,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element “generating the sparse kernel based on the selected sparse tile” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible.

Claim 20

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A machine, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites the abstract ideas of the preceding claims from which it depends.

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? The claim recites the additional element “wherein the analyzing, the selecting, and the generating occur prior to runtime,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element “wherein the analyzing, the selecting, and the generating occur prior to runtime” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraph of 35 U.S.C. 102 that forms the basis for the rejections under this section made in this Office action:

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 9-12, 17, and 18 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Yu (“ORGANIZING NEURAL NETWORK GRAPH INFORMATION,” US 2024/0028878 A1, filed Sep. 28, 2022; hereinafter “Yu”).
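The anticipation mapping below leans repeatedly on Yu's 2:4 structured-sparsity compression ([0063]-[0065]): within each contiguous group of four values, at most two are non-zero, so a tensor compresses to half size plus a small metadata index recording where each kept value sat in its group. A minimal sketch of that scheme, with hypothetical helper names and NumPy standing in for the GPU's sparse tensor hardware:

import numpy as np

def compress_2_4(row):
    """Compress a 2:4-sparse vector to half size plus per-group indices."""
    assert row.size % 4 == 0
    values, index = [], []
    for g in row.reshape(-1, 4):                # each contiguous group of 4
        nz = np.flatnonzero(g)
        assert nz.size <= 2, "violates 2:4 structured sparsity"
        keep = list(nz) + [i for i in range(4) if i not in nz]
        keep = keep[:2]                         # keep exactly 2 slots/group
        values.extend(g[keep])
        index.extend(keep)                      # 2-bit positions, cf. index 406
    return np.array(values), np.array(index, dtype=np.uint8)

def decompress_2_4(values, index):
    """Scatter kept values back to their original in-group positions."""
    out = np.zeros(values.size * 2)
    for g, (v0, v1, i0, i1) in enumerate(zip(values[0::2], values[1::2],
                                             index[0::2], index[1::2])):
        out[4 * g + i0] = v0
        out[4 * g + i1] = v1
    return out

row = np.array([0.0, 3.0, 0.0, -1.0, 2.0, 0.0, 0.0, 0.0])
vals, idx = compress_2_4(row)                   # vals is half of row's size
assert np.allclose(decompress_2_4(vals, idx), row)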
Regarding claim 1, Yu discloses:

“A system for processing data in a Neural Network (NN) model comprising: one or more processors; a non-transitory computer-readable medium storing a program executable by the one or more processors, the program comprising sets of instructions for:” (Computer-Based Systems, p. 7, [0092]; “In at least one embodiment, one or more processors 702 each include one or more processor cores 707 to process instructions which, when executed, perform operations for system and user software. In at least one embodiment, each of one or more processor cores 707 is configured to process a specific instruction set 709.” Yu discloses a system containing processors that execute instructions from memory.)

“identifying an operator in the NN model, the operator configured to perform a computation on at least one input tensor to generate an output tensor, the at least one input tensor storing input data along a plurality of dimensions in a sparse format;” (Detailed Description, pp. 2-3, [0049]; “In at least one embodiment, at step 102, a processor analyzes whether an input tensor is compatible with said processor's processing resources. In at least one embodiment, at step 102, a kernel executed on a GPU automatically analyzes (e.g., detects, calculates, identifies) whether an input tensor, which may be a dense tensor (e.g., a tensor with no values of 0) or sparse tensor, meets a GPU's requirements for further processing (e.g., forward propagation in a neural network) as a sparse tensor (e.g., two out of every four tensor values are zero).” In Yu, the model evaluates the input tensor and the weight tensor to ensure they meet the identified sparsity requirements for the operations of the layer.)

“performing, during runtime, a permutation to rearrange the input data from the sparse format to a dense format, wherein the dense format is shorter than the sparse format along a dimension of the plurality of dimensions;” (Detailed Description, p. 3, [0051]; “In at least one embodiment, a GPU will modify an input tensor based on a weight tensor's dimensions and said GPU's requirement of 2:4 structured sparsity for accelerated processing. In at least one embodiment, a GPU will modify an input tensor by expanding or coalescing said input tensor to make its dimensions and shapes suitable for computing (e.g., perform tensor operations) with a weight tensor meeting a GPU's requirements for structured sparsity.” This model is able to take in a tensor and convert it to a specified format. A sparse tensor may be compressed to meet the required structure for computation. This process would execute during image evaluation and would be considered to occur during program operations.) and (Detailed Description, p. 5, [0065]; “In at least one embodiment, a GPU compresses sparse tensor 402 to become compressed tensor 404, which is half sparse tensor's 402 size. In at least one embodiment, a GPU compresses sparse tensor 402 to become compressed tensor 404 and creates an array of metadata (e.g., index 406) to keep track of where non-zeros were in uncompressed sparse tensor 402.” This application discloses that a sparse or dense tensor is compressed to meet the sparsity requirements. In this example, a sparse tensor is compressed to a denser state.)

“performing, during runtime, a computation associated with the operator on the input data in the dense format to generate the output tensor, the output tensor storing output data along the plurality of dimensions in a dense format; and” (Detailed Description, p. 3, [0052]; “In at least one embodiment, at step 108, a GPU performs tensor operations on an input tensor and a weight tensor. In at least one embodiment, computing an input tensor and weight tensor together involves tensor multiplication. In at least one embodiment, a GPU operating on an input tensor and weight tensor together involves convolution.” After the tensors are compressed, the layer operation is performed and output. The output of the operation is still in the compressed, denser form.)

“performing, during runtime, a reverse permutation to rearrange the output data in the output tensor from the dense format to a specified output data format;” (Detailed Description, p. 3, [0054]; “In at least one embodiment, at step 112, a GPU shapes (e.g., reshapes, re-formats, resizes) a sparse output tensor produced with step 110 to match shapes of input tensors used to train a neural network. In at least one embodiment, step 112 modifies one or more output tensors to have a respective shape identical to shapes of one or more input shapes used to train a neural network.” This model returns the output tensor to the shape of the trained or input tensor.)

“wherein the instructions to perform the permutation, the computation, and the reverse permutation are defined as primitives in a sparse kernel.” (Detailed Description, p. 4, [0062]; “In at least one embodiment, modification 300 includes a kernel executed by a GPU that transforms a weight tensor and input tensor. In at least one embodiment, said transformed weight tensor and input tensor are used in operations for training neural network using a GPU's sparse tensor functionality. In at least one embodiment, said transformed weight tensor and input tensor, or some tensor based on said transformed weight tenor and input tensor (e.g., output tensor), are reshaped (e.g., returned) to an original irregular shape to connect with a following layer in a neural network.” The process in this application is designed to evaluate images. The process of inputting and evaluating an image would be considered to occur during runtime.)

Regarding claim 2, Yu discloses:

“wherein the permutation to rearrange the input data from the sparse format to the dense format is performed when the input data is being loaded from general memory to stored memory.” (Detailed Description, p. 3, [0055]; “FIG. 2 illustrates a process 200 for modifying layers of a neural network, according to at least one embodiment. One or more aspects of process 200 as described herein can be used in combination with any embodiments as discussed in conjunction with at least FIGS. 1 and 3-5. In at least one embodiment, a processor identifies which layers share an input. In at least one embodiment, a GPU automatically, through a kernel execution, analyzes between layers of a machine learning model at step 202. In at least one embodiment, analyzing between layers 202 includes analyzing layers, input tensors, weight tensors, output tensors, or some combination thereof.” The system evaluates the input tensors and determines during runtime whether to compress the input tensor to a specified density. This process rearranges the tensor to the stated dimensions.) and (Computer-Based Systems, p. 8, [0095]; “In at least one embodiment, memory device 720 can be a dynamic random access memory ("DRAM") device, a static random access memory ("SRAM") device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as processor memory. In at least one embodiment memory device 720 can operate as system memory for processing system 700, to store data 722 and instructions 721 for use when one or more processors 702 executes an application or process.” This process is completed on a processing system that is able to access memory and load the instructions of operations into cached storage. The system would then execute the functions, return the data from the cache, and save it to long-term memory.)
and (Computer-Based Systems, pp. 8, [0095]; “In at least one embodiment, memory device 720 can be a dynamic random access memory ("DRAM") device, a static random access memory ("SRAM") device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as processor memory. In at least one embodiment memory device 720 can operate as system memory for processing system 700, to store data 722 and instructions 721 for use when one or more processors 702 executes an application or process.” This process is completed on a processing system. This processing system is able to access memory and load instructions of operations to a cached storage. The System would then execute the functions and return the data from the cached data and save it to long term memory) Regarding claim 3, Yu discloses, “wherein the reverse permutation to rearrange the output data from the dense format to output data in the sparse format is performed when the output data is being stored from shared memory to general memory.” (Detailed Description, pp. 4, [0058]; “In at least one embodiment, a GPU creates a sparse neural network model at step 206 from fused layers resulting from step 204 that are processed through process 100. In at least one embodiment, a sparse neural network model is a model where only some percentage of all possible connections exist between nodes. In at least one embodiment, a sparse neural network model of step 206 is a trained sparse neural network model. In at least one embodiment, elements of processes 100 and 200 create one or more sparse layers of a neural network. In at least one embodiment, elements of process 100 and 200 create neural network models that do not exclusively use sparse layers.” As stated above, the process disclosed in Yu will be able to return a compressed tensor to specified dimensions.) and (Computer-Based Systems, pp. 8, [0095]; “In at least one embodiment, memory device 720 can be a dynamic random access memory ("DRAM") device, a static random access memory ("SRAM") device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as processor memory. In at least one embodiment memory device 720 can operate as system memory for processing system 700, to store data 722 and instructions 721 for use when one or more processors 702 executes an application or process.” Further this process is also completed by a computing system which is able to return or save loaded data from cache memory to long term memory.) Regarding claim 4, Yu discloses, “generating a sparsity index configured to identify the location of non-zero values within the input data in the input tensor, the sparsity index based on a sparse tile.” (Detailed Description, pp. 5, [0063]; “In at least one embodiment, sparse matrix 402 exhibits 2:4 structural sparsity, wherein 50% of values for each contiguous block (e.g., square) of values that is a multiple of 2 on each side (e.g., 2x2, 4x4, 8x8) is a nonzero value. Conversely, 50% of said values are nonzero. In at least one embodiment, a GPU's specialized sparse tensor operations (e.g., NVIDIA Sparse Tensor Core operations) accelerate the sparse matrix 402 format by operating only on nonzero values in compressed matrix 404. Said GPU uses metadata stored with said nonzero values to pull only necessary values from uncompressed sparse matrix 402. In at least one embodiment, said metadata is stored in an index 406 of2-bit indices. 
In at least one embodiment, index 406 includes location information for nonzero data values in sparse matrix 402. In at least one embodiment, sparse tensor compression 400 is applied to tensors that represent types of mathematical concepts other than two-dimensional matrices.” This model will compress tensors to specified dimensions. During this process the location of the values are stored and indexed in a separate matrix as seen in figure 4.) Regarding claim 9, Yu discloses, “A method for processing data in a Neural Network (NN) model comprising:” (Detailed Description, pp. 2, [0046]; “In at least one embodiment, methods and systems modify dimensions of one or more tensors based, at least in part, on one or more processing resources. In at least one embodiment, methods and systems modify (e.g., reshaping, transforming, converting) tensors (e.g., representations of numbers, scalars, arrays, vectors, two-dimensional (2D) arrays, matrices) used in neural network (e.g., residual neural network (ResNet), convolutional neural network (CNN), generative adversarial networks (GAN), artificial neural network (ANN), recurrent neural network (RNN)) on a graphics processing unit (GPU)) training or inferencing.” This application discloses a method which is execute by a processing system to perform actions on a computing system.) “identifying an operator in the NN model, the operator configured to perform a computation on at least one input tensor to generate an output tensor, the at least one input tensor storing input data along a plurality of dimensions in a sparse format;” (Detailed Description, pp. 2-3, [0049]; “In at least one embodiment, at step 102, a processor analyzes whether an input tensor is compatible with said processor's processing resources. In at least one embodiment, at step 102, a kernel executed on a GPU automatically analyzes (e.g., detects, calculates, identifies) whether an input tensor, which may be a dense tensor (e.g., a tensor with no values of 0) or sparse tensor, meets a GPU's requirements for further processing (e.g., forward propagation in a neural network) as a sparse tensor (e.g., two out of every four tensor values are zero).” In Yu the model will evaluate the input tensor and the weight tensor to ensure it meets the identified sparsity requirements for the operations of the layer.) “performing, during runtime, a permutation to rearrange the input data from the sparse format to a dense format, wherein the dense format is shorter than the sparse format along a dimension of the plurality of dimensions;” (Detailed Description, pp. 3, [0051]; “In at least one embodiment, a GPU will modify an input tensor based on a weight tensor's dimensions and said GPU's requirement of 2:4 structured sparsity for accelerated processing. In at least one embodiment, a GPU will modify an input tensor by expanding or coalescing said input tensor to make its dimensions and shapes suitable for computing (e.g., perform tensor operations) with a weight tensor meeting a GPU's requirements for structured sparsity.” This model is able to intake a tensor and convert it to a specified format. A sparse tensor may be compressed to meet the required structure for computation. This process would execute during image evaluation and would be considered to occur during program operations.) and (Detailed Description, pp. 5, [0065]; “In at least one embodiment, a GPU compresses sparse tensor 402 to become compressed tensor 404, which is half sparse tensor's 402 size. 
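The 2:4 compression and 2-bit metadata index the examiner maps to claims 4, 12, and 18 can be sketched in a few lines of NumPy. This is a minimal illustration of the scheme described in Yu's [0063]-[0065], not Yu's actual kernel; the function name and the uint8 storage for the 2-bit positions are assumptions made for readability.

```python
import numpy as np

def compress_2_4(matrix):
    """Compress a 2:4 structured-sparse matrix into values plus a 2-bit index.

    For every group of 4 contiguous values in a row, at most 2 are nonzero;
    the kept values go into a half-width value array and their positions
    (0-3, so 2 bits each) go into a parallel metadata array.
    """
    rows, cols = matrix.shape
    assert cols % 4 == 0, "2:4 sparsity is defined over groups of 4 values"
    values = np.zeros((rows, cols // 2), dtype=matrix.dtype)  # half-width value array
    index = np.zeros((rows, cols // 2), dtype=np.uint8)       # 2-bit positions (held in uint8 here)
    for r in range(rows):
        for g in range(cols // 4):
            group = matrix[r, 4 * g:4 * g + 4]
            for k, pos in enumerate(np.flatnonzero(group)[:2]):
                values[r, 2 * g + k] = group[pos]
                index[r, 2 * g + k] = pos
    return values, index

# One row, two groups of four, each with two nonzeros (2:4 sparse).
m = np.array([[0.0, 1.5, 0.0, 2.0, 3.0, 0.0, 0.0, 4.0]])
vals, idx = compress_2_4(m)
print(vals)  # [[1.5 2.  3.  4. ]]  -- half the original width
print(idx)   # [[1 3 0 3]]          -- positions of the kept values in each group
```

The separate index array corresponds to the "metadata stored with said nonzero values" that lets the hardware pull only necessary values from the uncompressed matrix.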
Regarding claim 9, Yu discloses, “A method for processing data in a Neural Network (NN) model comprising:” (Detailed Description, pp. 2, [0046]; “In at least one embodiment, methods and systems modify dimensions of one or more tensors based, at least in part, on one or more processing resources. In at least one embodiment, methods and systems modify (e.g., reshaping, transforming, converting) tensors (e.g., representations of numbers, scalars, arrays, vectors, two-dimensional (2D) arrays, matrices) used in neural network (e.g., residual neural network (ResNet), convolutional neural network (CNN), generative adversarial networks (GAN), artificial neural network (ANN), recurrent neural network (RNN)) on a graphics processing unit (GPU)) training or inferencing.” This application discloses a method executed by a processing system to perform actions on a computing system.) “identifying an operator in the NN model, the operator configured to perform a computation on at least one input tensor to generate an output tensor, the at least one input tensor storing input data along a plurality of dimensions in a sparse format;” (Detailed Description, pp. 2-3, [0049]; “In at least one embodiment, at step 102, a processor analyzes whether an input tensor is compatible with said processor's processing resources. In at least one embodiment, at step 102, a kernel executed on a GPU automatically analyzes (e.g., detects, calculates, identifies) whether an input tensor, which may be a dense tensor (e.g., a tensor with no values of 0) or sparse tensor, meets a GPU's requirements for further processing (e.g., forward propagation in a neural network) as a sparse tensor (e.g., two out of every four tensor values are zero).” In Yu the model will evaluate the input tensor and the weight tensor to ensure it meets the identified sparsity requirements for the operations of the layer.) “performing, during runtime, a permutation to rearrange the input data from the sparse format to a dense format, wherein the dense format is shorter than the sparse format along a dimension of the plurality of dimensions;” (Detailed Description, pp. 3, [0051]; “In at least one embodiment, a GPU will modify an input tensor based on a weight tensor's dimensions and said GPU's requirement of 2:4 structured sparsity for accelerated processing. In at least one embodiment, a GPU will modify an input tensor by expanding or coalescing said input tensor to make its dimensions and shapes suitable for computing (e.g., perform tensor operations) with a weight tensor meeting a GPU's requirements for structured sparsity.” This model is able to intake a tensor and convert it to a specified format. A sparse tensor may be compressed to meet the required structure for computation. This process would execute during image evaluation and would be considered to occur during program operations.) and (Detailed Description, pp. 5, [0065]; “In at least one embodiment, a GPU compresses sparse tensor 402 to become compressed tensor 404, which is half sparse tensor's 402 size. In at least one embodiment, a GPU compresses sparse tensor 402 to become compressed tensor 404 and creates an array of metadata (e.g., index 406) to keep track of where non-zeros were in uncompressed sparse tensor 402.” This application discloses that a sparse or dense tensor is compressed to meet the sparsity requirements. In this example a sparse tensor is compressed to a denser state.) “performing, during runtime, a computation associated with the operator on the input data in the dense format to generate the output tensor, the output tensor storing output data along the plurality of dimensions in a dense format; and” (Detailed Description, pp. 3, [0052]; “In at least one embodiment, at step 108, a GPU performs tensor operations on an input tensor and a weight tensor. In at least one embodiment, computing an input tensor and weight tensor together involves tensor multiplication. In at least one embodiment, a GPU operating on an input tensor and weight tensor together involves convolution.” After the tensors are compressed, the layer operation is performed and its result is output. The output of the operation is still in the compressed, denser form.) “performing, during runtime, a reverse permutation to rearrange the output data in the output tensor from the dense format to a specified output data format;” (Detailed Description, pp. 3, [0054]; “In at least one embodiment, at step 112, a GPU shapes (e.g., reshapes, re-formats, resizes) a sparse output tensor produced with step 110 to match shapes of input tensors used to train a neural network. In at least one embodiment, step 112 modifies one or more output tensors to have a respective shape identical to shapes of one or more input shapes used to train a neural network.” This step returns the output tensor to the shape of the trained input tensors.) “wherein the instructions to perform the permutation, the computation, and the reverse permutation are defined as primitives in a sparse kernel.” (Detailed Description, pp. 4, [0062]; “In at least one embodiment, modification 300 includes a kernel executed by a GPU that transforms a weight tensor and input tensor. In at least one embodiment, said transformed weight tensor and input tensor are used in operations for training neural network using a GPU's sparse tensor functionality. In at least one embodiment, said transformed weight tensor and input tensor, or some tensor based on said transformed weight tensor and input tensor (e.g., output tensor), are reshaped (e.g., returned) to an original irregular shape to connect with a following layer in a neural network.” The process in this application is designed to evaluate images. Inputting and evaluating an image would be considered to occur during runtime.)
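For readers tracing the claim language, the permute/compute/reverse-permute pipeline recited in claims 1, 9, and 17 can be sketched as below, assuming a row-gather as the permutation. The function and variable names are hypothetical; neither the application's actual primitives nor Yu's kernel is reproduced here.

```python
import numpy as np

def sparse_matmul_via_permutation(x, w):
    """Hypothetical permute -> compute -> reverse-permute pipeline.

    1. Permutation: gather the nonzero rows of `x` into a dense block that
       is shorter along the row dimension (the claim's "dense format").
    2. Computation: run an ordinary dense matmul on the gathered block only.
    3. Reverse permutation: scatter the results back to the row layout of
       the original sparse-format input.
    """
    nonzero_rows = np.flatnonzero(np.any(x != 0, axis=1))   # rows that carry data
    x_dense = x[nonzero_rows]              # permutation: fewer rows, dense format
    y_dense = x_dense @ w                  # computation on the dense block
    y = np.zeros((x.shape[0], w.shape[1]), dtype=y_dense.dtype)
    y[nonzero_rows] = y_dense              # reverse permutation to the output format
    return y

x = np.array([[1.0, 2.0], [0.0, 0.0], [3.0, 4.0], [0.0, 0.0]])
w = np.eye(2)
print(sparse_matmul_via_permutation(x, w))  # rows 0 and 2 computed; rows 1 and 3 stay zero
```

Because all-zero rows contribute nothing to the product, the gathered computation is exact while operating on a block that is shorter along one dimension, which is the efficiency argument underlying the claim.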
Regarding claim 10, Yu discloses, “wherein the permutation to rearrange the input data from the sparse format to the dense format is performed when the input data is being loaded from general memory to stored memory.” (Detailed Description, p. 3, [0055]; “FIG. 2 illustrates a process 200 for modifying layers of a neural network, according to at least one embodiment. One or more aspects of process 200 as described herein can be used in combination with any embodiments as discussed in conjunction with at least FIGS. 1 and 3-5. In at least one embodiment, a processor identifies which layers share an input. In at least one embodiment, a GPU automatically, through a kernel execution, analyzes between layers of a machine learning model at step 202. In at least one embodiment, analyzing between layers 202 includes analyzing layers, input tensors, weight tensors, output tensors, or some combination thereof.” The system will evaluate the input tensors and determine during runtime whether to compress the input tensor to a specified density. This process will rearrange the tensor to the stated dimensions.) and (Computer-Based Systems, pp. 8, [0095]; “In at least one embodiment, memory device 720 can be a dynamic random access memory ("DRAM") device, a static random access memory ("SRAM") device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as processor memory. In at least one embodiment memory device 720 can operate as system memory for processing system 700, to store data 722 and instructions 721 for use when one or more processors 702 executes an application or process.” This process is completed on a processing system that is able to access memory and load operating instructions into cached storage. The system would then execute the functions, return the data from the cache, and save it to long-term memory.)

Regarding claim 11, Yu discloses, “wherein the reverse permutation to rearrange the output data from the dense format to output data in the sparse format is performed when the output data is being stored from shared memory to general memory.” (Detailed Description, pp. 4, [0058]; “In at least one embodiment, a GPU creates a sparse neural network model at step 206 from fused layers resulting from step 204 that are processed through process 100. In at least one embodiment, a sparse neural network model is a model where only some percentage of all possible connections exist between nodes. In at least one embodiment, a sparse neural network model of step 206 is a trained sparse neural network model. In at least one embodiment, elements of processes 100 and 200 create one or more sparse layers of a neural network. In at least one embodiment, elements of process 100 and 200 create neural network models that do not exclusively use sparse layers.” As stated above, the process disclosed in Yu is able to return a compressed tensor to specified dimensions.) and (Computer-Based Systems, pp. 8, [0095]; “In at least one embodiment, memory device 720 can be a dynamic random access memory ("DRAM") device, a static random access memory ("SRAM") device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as processor memory. In at least one embodiment memory device 720 can operate as system memory for processing system 700, to store data 722 and instructions 721 for use when one or more processors 702 executes an application or process.” Further, this process is also completed by a computing system which is able to return or save loaded data from cache memory to long-term memory.)

Regarding claim 12, Yu discloses, “generating a sparsity index configured to identify the location of non-zero values within the input data in the input tensor, the sparsity index based on a sparse tile.” (Detailed Description, pp. 5, [0063]; “In at least one embodiment, sparse matrix 402 exhibits 2:4 structural sparsity, wherein 50% of values for each contiguous block (e.g., square) of values that is a multiple of 2 on each side (e.g., 2x2, 4x4, 8x8) is a nonzero value. Conversely, 50% of said values are nonzero. In at least one embodiment, a GPU's specialized sparse tensor operations (e.g., NVIDIA Sparse Tensor Core operations) accelerate the sparse matrix 402 format by operating only on nonzero values in compressed matrix 404. Said GPU uses metadata stored with said nonzero values to pull only necessary values from uncompressed sparse matrix 402. In at least one embodiment, said metadata is stored in an index 406 of 2-bit indices. In at least one embodiment, index 406 includes location information for nonzero data values in sparse matrix 402. In at least one embodiment, sparse tensor compression 400 is applied to tensors that represent types of mathematical concepts other than two-dimensional matrices.” This model will compress tensors to specified dimensions. During this process the locations of the values are stored and indexed in a separate matrix, as seen in FIG. 4.)

Regarding claim 17, Yu discloses, “A non-transitory computer-readable medium storing a program executable by one or more processors, the program comprising sets of instructions for:” (Computer-Based Systems, pp. 7, [0092]; “In at least one embodiment, one or more processors 702 each include one or more processor cores 707 to process instructions which, when executed, perform operations for system and user software. In at least one embodiment, each of one or more processor cores 707 is configured to process a specific instruction set 709.” Yu discloses a system which contains processors that execute instructions from memory.) “identifying an operator in the NN model, the operator configured to perform a computation on at least one input tensor to generate an output tensor, the at least one input tensor storing input data along a plurality of dimensions in a sparse format;” (Detailed Description, pp. 2-3, [0049]; “In at least one embodiment, at step 102, a processor analyzes whether an input tensor is compatible with said processor's processing resources. In at least one embodiment, at step 102, a kernel executed on a GPU automatically analyzes (e.g., detects, calculates, identifies) whether an input tensor, which may be a dense tensor (e.g., a tensor with no values of 0) or sparse tensor, meets a GPU's requirements for further processing (e.g., forward propagation in a neural network) as a sparse tensor (e.g., two out of every four tensor values are zero).” In Yu the model will evaluate the input tensor and the weight tensor to ensure it meets the identified sparsity requirements for the operations of the layer.) “performing, during runtime, a permutation to rearrange the input data from the sparse format to a dense format, wherein the dense format is shorter than the sparse format along a dimension of the plurality of dimensions;” (Detailed Description, pp. 3, [0051]; “In at least one embodiment, a GPU will modify an input tensor based on a weight tensor's dimensions and said GPU's requirement of 2:4 structured sparsity for accelerated processing. In at least one embodiment, a GPU will modify an input tensor by expanding or coalescing said input tensor to make its dimensions and shapes suitable for computing (e.g., perform tensor operations) with a weight tensor meeting a GPU's requirements for structured sparsity.” This model is able to intake a tensor and convert it to a specified format. A sparse tensor may be compressed to meet the required structure for computation. This process would execute during image evaluation and would be considered to occur during program operations.) and (Detailed Description, pp. 5, [0065]; “In at least one embodiment, a GPU compresses sparse tensor 402 to become compressed tensor 404, which is half sparse tensor's 402 size. In at least one embodiment, a GPU compresses sparse tensor 402 to become compressed tensor 404 and creates an array of metadata (e.g., index 406) to keep track of where non-zeros were in uncompressed sparse tensor 402.” This application discloses that a sparse or dense tensor is compressed to meet the sparsity requirements. In this example a sparse tensor is compressed to a denser state.) “performing, during runtime, a computation associated with the operator on the input data in the dense format to generate the output tensor, the output tensor storing output data along the plurality of dimensions in a dense format; and” (Detailed Description, pp. 3, [0052]; “In at least one embodiment, at step 108, a GPU performs tensor operations on an input tensor and a weight tensor. In at least one embodiment, computing an input tensor and weight tensor together involves tensor multiplication. In at least one embodiment, a GPU operating on an input tensor and weight tensor together involves convolution.” After the tensors are compressed, the layer operation is performed and its result is output. The output of the operation is still in the compressed, denser form.) “performing, during runtime, a reverse permutation to rearrange the output data in the output tensor from the dense format to a specified output data format;” (Detailed Description, pp. 3, [0054]; “In at least one embodiment, at step 112, a GPU shapes (e.g., reshapes, re-formats, resizes) a sparse output tensor produced with step 110 to match shapes of input tensors used to train a neural network. In at least one embodiment, step 112 modifies one or more output tensors to have a respective shape identical to shapes of one or more input shapes used to train a neural network.” This step returns the output tensor to the shape of the trained input tensors.) “wherein the instructions to perform the permutation, the computation, and the reverse permutation are defined as primitives in a sparse kernel.” (Detailed Description, pp. 4, [0062]; “In at least one embodiment, modification 300 includes a kernel executed by a GPU that transforms a weight tensor and input tensor. In at least one embodiment, said transformed weight tensor and input tensor are used in operations for training neural network using a GPU's sparse tensor functionality. In at least one embodiment, said transformed weight tensor and input tensor, or some tensor based on said transformed weight tensor and input tensor (e.g., output tensor), are reshaped (e.g., returned) to an original irregular shape to connect with a following layer in a neural network.” The process in this application is designed to evaluate images. Inputting and evaluating an image would be considered to occur during runtime.)
Regarding claim 18, Yu discloses, “generating a sparsity index configured to identify the location of non-zero values within the input data in the input tensor, the sparsity index based on a sparse tile.” (Detailed Description, pp. 5, [0063]; “In at least one embodiment, sparse matrix 402 exhibits 2:4 structural sparsity, wherein 50% of values for each contiguous block (e.g., square) of values that is a multiple of 2 on each side (e.g., 2x2, 4x4, 8x8) is a nonzero value. Conversely, 50% of said values are nonzero. In at least one embodiment, a GPU's specialized sparse tensor operations (e.g., NVIDIA Sparse Tensor Core operations) accelerate the sparse matrix 402 format by operating only on nonzero values in compressed matrix 404. Said GPU uses metadata stored with said nonzero values to pull only necessary values from uncompressed sparse matrix 402. In at least one embodiment, said metadata is stored in an index 406 of 2-bit indices. In at least one embodiment, index 406 includes location information for nonzero data values in sparse matrix 402. In at least one embodiment, sparse tensor compression 400 is applied to tensors that represent types of mathematical concepts other than two-dimensional matrices.” This model will compress tensors to specified dimensions. During this process the locations of the values are stored and indexed in a separate matrix, as seen in FIG. 4.)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 5-8, 13-16, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yu in view of Yu et al. (Yu et al., “TENSOR MODIFICATION BASED ON PROCESSING RESOURCES”, US 2023/0244942 A1, filed on Feb. 24, 2022, hereinafter “Xie”). (The primary inventor of this publication is the same inventor as the previously introduced art; therefore, this document will be referred to by the second listed inventor.)

Regarding claim 5, Yu fails to explicitly disclose the elements of this claim. However, Xie discloses, “analyzing the sparsity of the operator;” (Detailed Description, pp. 6, [0091]; “In at least one embodiment, at step 606 of example process 600, it is determined whether one or more sparsity parameters received at step 604 are satisfied. In at least one embodiment, at step 606, for example if "2:4" sparsity is desired, it may be determined whether "2:4" sparsity is already satisfied without performing any permutations such as those described herein. In at least one embodiment, at step 606, if it is determined that one or more sparsity parameters received at step 604 are satisfied ("YES" branch), example process 600 continues at step 616. In at least one embodiment, at step 606, if it is determined that one or more sparsity parameters received at step 604 are not satisfied ("NO" branch), example process 600 continues at step 608.” This model is able to change the sparsity requirements of the model. It will handle tensors of mixed densities.) “selecting the sparse tile from a plurality of pre-constructed sparse tiles based on the sparsity; and” (Detailed Description, pp. 6, [0092]; “In at least one embodiment, at step 608 of example process 600, a first permutation strategy is selected. In at least one embodiment, at step 608, a first permutation strategy such as those described herein (e.g., a random swap, a greedy channel swap, a greedy block swap, an exhaustive guided greedy search, or a combination of these and/or other such technique) is selected. In at least one embodiment, at step 608, a first permutation strategy such as those described herein is selected based, at least in part, on sparsity parameters received at step 604. In at least one embodiment, after step 608, example process 600 continues at step 610.” This passage discloses that different algorithms are used to alter the tensor; the algorithm is selected based on the sparsity requirements.) “generating the sparse kernel based on the selected sparse tile.” (Detailed Description, pp. 6, [0096]; “In at least one embodiment, at step 616 of example process 600, a permuted neural network is processed with acceleration 616 (e.g., using a graphics acceleration module such as graphics acceleration module 112, described herein at least in connection with FIG. 1). In at least one embodiment after step 616, example process 600 terminates. In at least one embodiment, not shown in FIG. 6, after step 616, example process 600 continues at step 602 to receive a next neural network specification.” Once the algorithm is selected to compress the tensor, the tensor is iteratively checked to ensure it meets the sparsity requirement parameters. After meeting the requirements, the tensor is then operated on.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Yu and Xie. Yu teaches a machine learning model that uses specified sparsity dimensions to process input tensors. Xie teaches a machine learning model that is able to alter the sparsity dimensions of an input tensor for further processing. One of ordinary skill would have had motivation to combine a system that has a set sparsity constraint for input data and compresses that data to a specified denser dimension with a system that is able to alter the sparsity constraints and compress/decompress an input tensor according to those mutable constraints: “In at least one embodiment, said transformed weight tensor and input tensor, or some tensor based on said transformed weight tensor and input tensor (e.g., output tensor), are reshaped (e.g., returned) to an original irregular shape to connect with a following layer in a neural network. In at least one embodiment, said kernel designed to transform a weight tensor and input tensor to accelerate sparse tensors can improve computation time by 1.73x over time spent processing irregular weight and input tensors in a first convolutional layer of a neural network based on ResNet.” (Xie, Detailed Description, pp. 4, [0062]).
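Xie's step 606 (check whether the received sparsity parameters are already satisfied) and step 608 (otherwise select a permutation strategy, e.g., a random swap) suggest a control flow like the following sketch. The helper names and the naive random-swap loop are assumptions for illustration; Xie names the strategies but publishes no such code.

```python
import numpy as np

def satisfies_m_n(x, m=2, n=4):
    """True when at most m of every n contiguous values in each row are nonzero."""
    groups = x.reshape(x.shape[0], -1, n)        # row length must be a multiple of n
    return bool(np.all((groups != 0).sum(axis=2) <= m))

def random_swap_search(x, m=2, n=4, tries=1000, seed=0):
    """Randomly swap columns until the M:N constraint holds (or tries run out)."""
    rng = np.random.default_rng(seed)
    perm = np.arange(x.shape[1])
    for _ in range(tries):
        if satisfies_m_n(x[:, perm], m, n):
            return perm                          # a satisfying column permutation
        i, j = rng.integers(0, x.shape[1], size=2)
        perm[[i, j]] = perm[[j, i]]              # one random column swap
    return None                                  # no satisfying permutation found

# Three nonzeros crowd the first group of four, so 2:4 is not yet satisfied.
w = np.array([[1.0, 2.0, 3.0, 0.0, 0.0, 0.0, 0.0, 4.0]])
if satisfies_m_n(w):          # step 606: parameters already satisfied -> step 616
    print("2:4 already satisfied; proceed to accelerated processing")
else:                         # step 608: fall back to a permutation strategy
    print("column permutation:", random_swap_search(w))
```

The greedy channel-swap and greedy block-swap strategies Xie also names would replace the random swap with a swap chosen to maximally reduce constraint violations at each step.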
Regarding claim 6, Yu fails to explicitly disclose the elements of this claim. However, Xie discloses, “wherein the analyzing, the selecting, and the generating occur prior to runtime.” (Detailed Description, pp. 6, [0099]; “In at least one embodiment, steps of example process 600 are performed in a different order than is illustrated in FIG. 6. In at least one embodiment, steps of example process 600 are performed in parallel. In at least one embodiment, steps of example process 600 are performed by a plurality of threads executing on one or more processors such as those described herein.” This reference discloses that some of the operations of FIG. 6 can occur at different times using multithreading. This process would occur during the operation of the model and during the normal runtime of a program.)

Regarding claim 7, Yu fails to explicitly disclose the elements of this claim. However, Xie discloses, “wherein the sparse kernel includes a data tile describing the shape of data in the input tensor and a computation tile describing the shape of the dense format.” (Detailed Description, pp. 4, [0074]; “In at least one embodiment, sparsity constraints are constraints such as those described herein where a minimum number of a contiguous number of elements of neural network graph data are zero (e.g., a maximum number of a contiguous number of elements of neural network graph data are non-zero). In at least one embodiment, for example, a sparsity constraint for "2:4" sparsity is a sparsity constraint that, at most two elements of each four contiguous elements of neural network graph data are non-zero. In at least one embodiment, other types of sparsity constraints may be indicated such as, for example, "4:8" sparsity (e.g., where at most four of each eight contiguous elements are non-zero), "4:16" sparsity (e.g., where at most four of each sixteen contiguous elements are non-zero), "M:N" sparsity (e.g., where at most "M" of each "N" contiguous elements are non-zero), or other such sparsity constraints including, but not limited to, those described herein.” The sparsity requirements are set for the model. This would mean that the operators would also be set to operate on the specified tensor sizes.)

Regarding claim 8, Yu fails to explicitly disclose the elements of this claim. However, Xie discloses, “where the sparse tile identifies the dimension of the plurality of dimensions as being permutation invariant.” (Detailed Description, pp. 4, [0074]; “In at least one embodiment, other types of sparsity constraints may be indicated such as, for example, "4:8" sparsity (e.g., where at most four of each eight contiguous elements are non-zero), "4:16" sparsity (e.g., where at most four of each sixteen contiguous elements are non-zero), "M:N" sparsity (e.g., where at most "M" of each "N" contiguous elements are non-zero), or other such sparsity constraints including, but not limited to, those described herein. In at least one embodiment, sparsity constraints such as those described herein may include one or more additional constraints such as, for example, data layout constraints (e.g., row or column order), data type constraints (e.g., integer, Boolean, floating point, double precision, etc.), data storage constraints (e.g., a maximum number of bits used to store an element), data structure constraints (e.g., matrix, graph, list, etc.), and/or other such constraints.” The system is able to evaluate an input tensor and determine the tensor's dimensions and the dimensions set by the sparsity constraints. This is used to select the permutation algorithm, as seen in FIG. 6.)
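The data-tile/computation-tile pairing recited in claims 7 and 15, together with the permutation-invariant dimension of claims 8 and 16, could be represented by a descriptor along the lines sketched below. Every field name here is a hypothetical reading of the claim language, not a structure disclosed by Yu or Xie.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SparseTile:
    """Hypothetical descriptor pairing the claimed data and computation tiles.

    data_shape is the tile shape in the stored sparse format; compute_shape
    is the shorter dense-format shape after permutation; invariant_dim marks
    the dimension along which reordering does not change the operator's result.
    """
    data_shape: tuple     # tile shape in the sparse format, e.g. 4-wide groups
    compute_shape: tuple  # dense shape the kernel actually computes on
    invariant_dim: int    # permutation-invariant dimension (claims 8 and 16)

# A 2:4 tile: 4-wide sparse groups compute as 2-wide dense groups, and the
# rows (dimension 0) can be permuted without changing the tile's computation.
tile_2_4 = SparseTile(data_shape=(4, 4), compute_shape=(4, 2), invariant_dim=0)
print(tile_2_4)
```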
Regarding claim 13, Yu fails to explicitly disclose the elements of this claim. However, Xie discloses, “analyzing the sparsity of the operator;” (Detailed Description, pp. 6, [0091]; “In at least one embodiment, at step 606 of example process 600, it is determined whether one or more sparsity parameters received at step 604 are satisfied. In at least one embodiment, at step 606, for example if "2:4" sparsity is desired, it may be determined whether "2:4" sparsity is already satisfied without performing any permutations such as those described herein. In at least one embodiment, at step 606, if it is determined that one or more sparsity parameters received at step 604 are satisfied ("YES" branch), example process 600 continues at step 616. In at least one embodiment, at step 606, if it is determined that one or more sparsity parameters received at step 604 are not satisfied ("NO" branch), example process 600 continues at step 608.” This model is able to change the sparsity requirements of the model. It will handle tensors of mixed densities.) “selecting the sparse tile from a plurality of pre-constructed sparse tiles based on the sparsity; and” (Detailed Description, pp. 6, [0092]; “In at least one embodiment, at step 608 of example process 600, a first permutation strategy is selected. In at least one embodiment, at step 608, a first permutation strategy such as those described herein (e.g., a random swap, a greedy channel swap, a greedy block swap, an exhaustive guided greedy search, or a combination of these and/or other such technique) is selected. In at least one embodiment, at step 608, a first permutation strategy such as those described herein is selected based, at least in part, on sparsity parameters received at step 604. In at least one embodiment, after step 608, example process 600 continues at step 610.” This passage discloses that different algorithms are used to alter the tensor; the algorithm is selected based on the sparsity requirements.) “generating the sparse kernel based on the selected sparse tile.” (Detailed Description, pp. 6, [0096]; “In at least one embodiment, at step 616 of example process 600, a permuted neural network is processed with acceleration 616 (e.g., using a graphics acceleration module such as graphics acceleration module 112, described herein at least in connection with FIG. 1). In at least one embodiment after step 616, example process 600 terminates. In at least one embodiment, not shown in FIG. 6, after step 616, example process 600 continues at step 602 to receive a next neural network specification.” Once the algorithm is selected to compress the tensor, the tensor is iteratively checked to ensure it meets the sparsity requirement parameters. After meeting the requirements, the tensor is then operated on.)

Regarding claim 14, Yu fails to explicitly disclose the elements of this claim. However, Xie discloses, “wherein the analyzing, the selecting, and the generating occur prior to runtime.” (Detailed Description, pp. 6, [0099]; “In at least one embodiment, steps of example process 600 are performed in a different order than is illustrated in FIG. 6. In at least one embodiment, steps of example process 600 are performed in parallel. In at least one embodiment, steps of example process 600 are performed by a plurality of threads executing on one or more processors such as those described herein.” This reference discloses that some of the operations of FIG. 6 can occur at different times using multithreading. This process would occur during the operation of the model and during the normal runtime of a program.)

Regarding claim 15, Yu fails to explicitly disclose the elements of this claim. However, Xie discloses, “wherein the sparse kernel includes a data tile describing the shape of data in the input tensor and a computation tile describing the shape of the dense format.” (Detailed Description, pp. 4, [0074]; “In at least one embodiment, sparsity constraints are constraints such as those described herein where a minimum number of a contiguous number of elements of neural network graph data are zero (e.g., a maximum number of a contiguous number of elements of neural network graph data are non-zero). In at least one embodiment, for example, a sparsity constraint for "2:4" sparsity is a sparsity constraint that, at most two elements of each four contiguous elements of neural network graph data are non-zero. In at least one embodiment, other types of sparsity constraints may be indicated such as, for example, "4:8" sparsity (e.g., where at most four of each eight contiguous elements are non-zero), "4:16" sparsity (e.g., where at most four of each sixteen contiguous elements are non-zero), "M:N" sparsity (e.g., where at most "M" of each "N" contiguous elements are non-zero), or other such sparsity constraints including, but not limited to, those described herein.” The sparsity requirements are set for the model. This would mean that the operators would also be set to operate on the specified tensor sizes.)

Regarding claim 16, Yu fails to explicitly disclose the elements of this claim. However, Xie discloses, “where the sparse tile identifies the dimension of the plurality of dimensions as being permutation invariant.” (Detailed Description, pp. 4, [0074]; “In at least one embodiment, other types of sparsity constraints may be indicated such as, for example, "4:8" sparsity (e.g., where at most four of each eight contiguous elements are non-zero), "4:16" sparsity (e.g., where at most four of each sixteen contiguous elements are non-zero), "M:N" sparsity (e.g., where at most "M" of each "N" contiguous elements are non-zero), or other such sparsity constraints including, but not limited to, those described herein. In at least one embodiment, sparsity constraints such as those described herein may include one or more additional constraints such as, for example, data layout constraints (e.g., row or column order), data type constraints (e.g., integer, Boolean, floating point, double precision, etc.), data storage constraints (e.g., a maximum number of bits used to store an element), data structure constraints (e.g., matrix, graph, list, etc.), and/or other such constraints.” The system is able to evaluate an input tensor and determine the tensor's dimensions and the dimensions set by the sparsity constraints. This is used to select the permutation algorithm, as seen in FIG. 6.)

Regarding claim 19, Yu fails to explicitly disclose the elements of this claim. However, Xie discloses, “analyzing the sparsity of the operator;” (Detailed Description, pp. 6, [0091]; “In at least one embodiment, at step 606 of example process 600, it is determined whether one or more sparsity parameters received at step 604 are satisfied. In at least one embodiment, at step 606, for example if "2:4" sparsity is desired, it may be determined whether "2:4" sparsity is already satisfied without performing any permutations such as those described herein. In at least one embodiment, at step 606, if it is determined that one or more sparsity parameters received at step 604 are satisfied ("YES" branch), example process 600 continues at step 616. In at least one embodiment, at step 606, if it is determined that one or more sparsity parameters received at step 604 are not satisfied ("NO" branch), example process 600 continues at step 608.” This model is able to change the sparsity requirements of the model. It will handle tensors of mixed densities.) “selecting the sparse tile from a plurality of pre-constructed sparse tiles based on the sparsity; and” (Detailed Description, pp. 6, [0092]; “In at least one embodiment, at step 608 of example process 600, a first permutation strategy is selected. In at least one embodiment, at step 608, a first permutation strategy such as those described herein (e.g., a random swap, a greedy channel swap, a greedy block swap, an exhaustive guided greedy search, or a combination of these and/or other such technique) is selected. In at least one embodiment, at step 608, a first permutation strategy such as those described herein is selected based, at least in part, on sparsity parameters received at step 604. In at least one embodiment, after step 608, example process 600 continues at step 610.” This passage discloses that different algorithms are used to alter the tensor; the algorithm is selected based on the sparsity requirements.) “generating the sparse kernel based on the selected sparse tile” (Detailed Description, pp. 6, [0096]; “In at least one embodiment, at step 616 of example process 600, a permuted neural network is processed with acceleration 616 (e.g., using a graphics acceleration module such as graphics acceleration module 112, described herein at least in connection with FIG. 1). In at least one embodiment after step 616, example process 600 terminates. In at least one embodiment, not shown in FIG. 6, after step 616, example process 600 continues at step 602 to receive a next neural network specification.” Once the algorithm is selected to compress the tensor, the tensor is iteratively checked to ensure it meets the sparsity requirement parameters. After meeting the requirements, the tensor is then operated on.)

Regarding claim 20, Yu fails to explicitly disclose the elements of this claim. However, Xie discloses, “wherein the analyzing, the selecting, and the generating occur prior to runtime.” (Detailed Description, pp. 6, [0099]; “In at least one embodiment, steps of example process 600 are performed in a different order than is illustrated in FIG. 6. In at least one embodiment, steps of example process 600 are performed in parallel. In at least one embodiment, steps of example process 600 are performed by a plurality of threads executing on one or more processors such as those described herein.” This reference discloses that some of the operations of FIG. 6 can occur at different times using multithreading. This process would occur during the operation of the model and during the normal runtime of a program.)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL MICHAEL GALVIN-SIEBENALER whose telephone number is (571) 272-1257. The examiner can normally be reached Monday - Friday, 8 AM to 5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Viker Lamardo, can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PAUL M GALVIN-SIEBENALER/
Examiner, Art Unit 2147

/VIKER A LAMARDO/
Supervisory Patent Examiner, Art Unit 2147

Prosecution Timeline

May 30, 2023
Application Filed
Mar 03, 2026
Non-Final Rejection — §101, §102, §103 (current)


Prosecution Projections

1-2
Expected OA Rounds
25%
Grant Probability
0%
With Interview (-25.0%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 4 resolved cases by this examiner. Grant probability derived from career allow rate.
