Prosecution Insights
Last updated: April 19, 2026
Application No. 17/866,194

SYSTEMS AND METHODS FOR NEURAL NETWORK TRAINING WITH WEIGHT SPARSITY

Final Rejection — §101, §103
Filed: Jul 15, 2022
Examiner: KIM, HARRISON CHAN YOUNG
Art Unit: 2145
Tech Center: 2100 — Computer Architecture & Software
Assignee: Alibaba (China) Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 50% (Moderate)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 50% (3 granted / 6 resolved; -5.0% vs TC avg)
Interview Lift: +33.3% (strong; resolved cases with interview vs. without)
Avg Prosecution: 3y 3m (typical timeline)
Currently Pending: 33
Total Applications: 39 (career history, across all art units)
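
For readers who want the arithmetic behind these headline figures, the short illustrative Python snippet below reproduces them; the with-interview rate is read from this page, and the lift is treated as a percentage-point difference (an assumption, since the per-case interview breakdown is not shown here).

career_allow_rate = 3 / 6        # 3 granted of 6 resolved cases = 50%
rate_with_interview = 0.833      # as displayed on this page ("83% With Interview")
interview_lift = rate_with_interview - career_allow_rate
print(f"{career_allow_rate:.0%} career allow rate, {interview_lift:+.1%} interview lift")
# -> 50% career allow rate, +33.3% interview lift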

Statute-Specific Performance

§101: 37.9% (-2.1% vs TC avg)
§103: 50.5% (+10.5% vs TC avg)
§102: 4.9% (-35.1% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)
Deltas are relative to a Tech Center average estimate • Based on career data from 6 resolved cases
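
A quick consistency check on those deltas: each "vs TC avg" figure equals the examiner's rate minus a single Tech Center average estimate of about 40%, presumably the reference value the original chart plotted (this is an inference from the numbers shown, not a figure stated on the page).

examiner_rates = {"101": 37.9, "103": 50.5, "102": 4.9, "112": 5.8}  # % per statute
tc_avg_estimate = 40.0  # implied by the deltas shown above
for statute, rate in examiner_rates.items():
    print(f"§{statute}: {rate:.1f}% ({rate - tc_avg_estimate:+.1f}% vs TC avg)")
# §101: 37.9% (-2.1% vs TC avg) ... §112: 5.8% (-34.2% vs TC avg)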

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is made final. Claims 1-20 are pending. Claims 1, 7 and 14 are independent claims.

Response to Arguments

Applicant’s amendments to claims 14-20 have overcome the 101 rejections related to the interpretation of those claims encompassing signals per se.

Applicant's arguments filed 9/8/2025 regarding the 35 U.S.C. 101 rejections of the previous office action have been fully considered but they are not persuasive. Applicant argues that the steps of claim 1 are not fundamental economic practices, mental processes, or methods of organizing human activity. Examiner explained in the previous office action that the steps of claim 1 were mainly mathematical calculations, as opposed to mental processes or any other abstract idea grouping. The only step in claim 1 that the examiner described as a mental process was finding the transpose of a matrix. This can also be classified as a mathematical calculation.

Applicant argues that the steps of claim 1 are directed to a specific technological improvement because they recite particular modules and thus are not directed towards the abstract idea grouping of mathematical calculations. Applicant further describes possible benefits derived from the invention. The examiner argues that in claim 1, the abstract ideas are not integrated into a practical application by any additional elements, because the modules intended for sparse matrix multiplication are recited broadly. There is no technological improvement stated or made obvious by additional elements in claim 1. Applicant argues various elements of claim 1, such as “use of a sparse matrix multiplication (spMM) module configured for transpose-invariant sparse weight matrices…” yields more efficient training and increased accuracy but omits any description on how the configuration of the module is performed or accomplishes these goals as claimed.

Applicant's arguments filed 9/8/2025 regarding the 35 U.S.C. 103 rejections of the previous office action have been fully considered but they are not persuasive. Applicant argues that Latorre does not disclose or suggest “using transpose-invariant sparsity in forward and backward propagation of neural networks”. The examiner argues that in the previous office action, Elsen was used to teach “using a sparse weight matrix” and “using a transpose of the sparse weight matrix received from a weight transpose module”. Latorre was used to teach a sparse matrix that is “transpose invariant”. Latorre at least suggests the idea of using the sparse matrix calculations in ¶15: As ANNs are often used for data-intensive tasks such as computer vision and speech and image recognition, the complexity and amount of data ANNs deal with is great. As ANNs are generally represented as tensors, data involved often takes on the form of a matrix; and ¶16, Introduced herein are techniques for efficiently performing operations on matrix-based data. Using the introduced techniques, operations such as compressing, decompressing and transposing matrix-based data can be realized using a smaller amount of data and processing power than the conventional methods.

Applicant argues that the combination of Elsen and Latorre is “not motivated by an unmet problem” because Elsen “already addresses memory efficiency without requiring transpose invariance”.

The examiner argues that Latorre offers superior memory savings in sparse matrix processing over standard sparse matrix processing implementations, partly by introducing structural constraints, as described in Latorre ¶18, The introduced techniques change how metadata of a compressed sparse matrix has been considered… is based on a recognition that a sparse matrix of certain size, e.g., 4×4 matrix, can have only a limited number of patterns, i.e., locations of non-zeros elements. The introduced techniques use the pattern number as a compressed sparse matrix's metadata. As the data needed to represent each pattern is much smaller than the data needed to represent indices of non-zero elements along the compression dimension, the introduced techniques use a much smaller amount of data to store and access a matrix than the conventional methods.

Applicant argues that the claimed invention uses transpose invariant sparsity which allows for “maintaining FLOPS while enabling much larger weight matrices and improving accuracy”, but these qualities are not reflected in the limitations recited in claim 1. The same rationale used to reject claim 1 still applies to claims 7 and 14, and the dependent claims 2-6, 8-13 and 15-20 also remain rejected.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: a weight data transpose module configured to transpose sparse weight data…, a weight indices transpose module configured to transpose sparse weight indices…, the one or more sparse matrix-matrix multiplication (spMM) modules configured to, compute activations for a current layer…, the one or more sampled dense-dense matrix multiplication (SDDMM) modules configured to compute weight gradients for the current layer…, and a weight update module configured to compute new sparse weights in claim 7, and in its dependents 8-13.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1:

Step 1: This part of the eligibility analysis evaluates whether the claim falls within any statutory category. See MPEP 2106.03. Claim 1 is directed to a process (Step 1: YES).

Step 2A prong 1: Does the claim recite a judicial exception? Claim 1 recites: A method of training a neural network (NN) model comprising: computing activations, of the neural network (NN) model, in a forward pass… using a sparse weight matrix that is transpose invariant (computing activations by using sparse matrix-matrix multiplication is repeating mathematical calculations); computing activation gradients, of the neural network (NN) model, in a backward pass… using a transpose of the sparse weight matrix (computing activation gradients is repeating mathematical calculations, and finding the transpose of a matrix is a mental process)… and computing weight gradients, of the neural network (NN) model, in a backward pass… using the activations received from the forward pass (computing weight gradients using activations is repeating mathematical calculations)... These steps can be performed mentally or are mathematical calculations (Step 2A prong 1: YES).

Step 2A prong 2: Does the claim recite additional elements? Do those additional elements, considered individually and in combination, integrate the judicial exception into a practical application? Claim 1 recites: by a sparse matrix-matrix multiplication (spMM) module… by the sparse matrix-matrix multiplication (spMM) module… received from a weight transpose module… by a sampled dense-dense matrix multiplication (SDDMM) module… of the sparse matrix-matrix multiplication (spMM) module. Receiving a transposed matrix from a module is extra-solution activity of data gathering that does not add a meaningful limitation to the neural network training method. Performing matrix calculations with a “sparse matrix-matrix multiplication module” or “sampled dense-dense matrix multiplication module” is recited at a high level of generality, i.e., as a generic computer performing generic computer functions. (Step 2A prong 2: NO).

Step 2B: These elements are recited at such a high level of generality that they fail to integrate the abstract idea into a practical application, since they only amount to data gathering without significantly more (MPEP 2106.05(g)) or provide nothing more than mere instructions to implement an abstract idea on a generic computer (MPEP 2106.05(f)). These limitations, taken either alone or in combination, fail to provide an inventive concept (Step 2B: NO). Thus, the claim is not patent eligible.

Regarding claims 2-6, they recite limitations which further narrow the abstract idea by specifying more details of the mental and mathematical process that occurs (Claim 2, repeating the method of claim 1 on different layers is still a mental process; Claim 3, specifying that the sparse weight data and indices are transpose invariant further describes the data gathering step without adding a meaningful limitation to the training method; Claim 4, reusing the sparse weight matrix to compute activations is still repeated mathematical calculations; Claim 5, using a sparse weight matrix in the training process to perform non-zero computations involves determining where zero value matrix elements are, which is a mental process, and in places with non-zero values, performing mathematical calculations; Claim 6, transposing a matrix in a compressed format is a mental process or mathematical calculation).

Regarding claim 7, it is an apparatus implementing a method similar to claim 2 and is rejected on the same grounds – see above.

Regarding claims 8-11, they recite similar limitations to claims 3-6 respectively and are rejected on the same grounds – see above.

Regarding claims 12 and 13, they recite limitations which further narrow the abstract idea by specifying more details of the mental and mathematical process that occurs (Claim 12, storing data is a well-understood, routine and conventional computer function – see MPEP 2106.05(d); Claim 13, the reduction of memory utilization is a result-oriented solution without details on how the system is to achieve the reduction, and is equivalent to the words “apply it” – see MPEP 2106.05(f)).

Regarding claim 14, it is an apparatus implementing a method similar to claim 1 and is rejected on the same grounds – see above.

Regarding claims 15-19, they recite similar limitations to claims 2-6 respectively and are rejected on the same grounds – see above.

Regarding claim 20, it adds a limitation that further narrows the abstract idea – reusing a portion of a different module to perform matrix multiplications is a repeated mathematical calculation.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-11 and 14-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Elsen et al. (US 20230041163 A1), herein Elsen, in view of Latorre et al. (US 20200272425 A1), herein Latorre.

Regarding claim 1, Elsen teaches: A method of training a neural network (NN) model comprising: computing activations, of the neural network (NN) model, in a forward pass by a sparse matrix-matrix multiplication (spMM) module using a sparse weight matrix (¶33, to compute W·X=Y, where W is the sparse weight matrix for the neural network layer, X is the input activation matrix) computing activation gradients, of the neural network (NN) model, in a backward pass by the sparse matrix-matrix multiplication (spMM) module using a transpose of the sparse weight matrix received from a weight transpose module; (¶39, to compute WT·δY=δX, where W is a sparse weight matrix for a neural network layer, δY is the gradient of the output activation matrix for the neural network layer, and δX is the gradient of the input activation matrix for the neural network layer) and computing weight gradients, of the neural network (NN) model, in a backward pass by a sampled dense-dense matrix multiplication (SDDMM) module using the activations received from the forward pass of the sparse matrix-matrix multiplication (spMM) module (¶50, The parallel processing device 150 can execute the sampled dense-dense matrix multiplication to generate the gradient of the weight matrix (as the sparse output matrix 182) and use it to update the values of the weight matrix). Elsen fails to teach a sparse matrix that is transpose invariant. The term “transpose invariant sparse matrix” is interpreted to mean a matrix that has the same per-row and per-column sparsity ratio, as per the specification and drawings. However, in the same field of endeavor, Latorre teaches a sparse matrix that is transpose invariant (¶21, As a two element constraint is imposed on two dimensions, the matrix 100 has two non-zero elements per row and also per column – see also Fig. 1A). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a transpose invariant sparse matrix as disclosed by Latorre in the method disclosed by Elsen to decrease memory usage (Latorre, ¶18, The introduced techniques change how metadata of a compressed sparse matrix has been considered. Unlike the conventional methods, which have considered indices of nonzero elements along the compression dimension as metadata of a compressed sparse matrix, the introduced techniques consider patterns of its non-zero elements in logical space (in uncompressed form) as a sparse matrix's metadata. This is based on a recognition that a sparse matrix of certain size, e.g., 4×4 matrix, can have only a limited number of patterns, i.e., locations of non-zeros elements. The introduced techniques use the pattern number as a compressed sparse matrix's metadata. As the data needed to represent each pattern is much smaller than the data needed to represent indices of non-zero elements along the compression dimension, the introduced techniques use a much smaller amount of data to store and access a matrix than the conventional methods). 
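
As an illustrative aside (not drawn from either reference's actual code): the computations the rejection maps to Elsen can be sketched in a few lines of NumPy, with dense arrays and a fixed non-zero mask standing in for dedicated spMM/SDDMM hardware. All names, shapes, and the sparsity pattern below are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
n, m, batch = 8, 8, 4

# Sparse weight matrix W with a fixed non-zero mask; a real spMM module would
# consume only the non-zero values and their indices.
mask = rng.random((n, m)) < 0.25
W = rng.standard_normal((n, m)) * mask

X = rng.standard_normal((m, batch))   # input activations
dY = rng.standard_normal((n, batch))  # gradient of the output activations

Y = W @ X        # forward pass: spMM computes W·X = Y (cf. Elsen ¶33)
dX = W.T @ dY    # backward pass: spMM uses the transposed weights, Wᵀ·δY = δX (cf. Elsen ¶39)

# SDDMM: dense-dense product sampled only at W's non-zero positions,
# yielding the sparse weight gradient used for the update (cf. Elsen ¶50).
dW = (dY @ X.T) * mask
W_new = W - 0.01 * dW

The transpose-invariance dispute concerns the structure of the mask: with the same number of non-zeros per row and per column, the transposed matrix obeys the same structural constraint as the original, which appears to be what claim 4 relies on when it reuses the forward-pass spMM module in the backward pass.
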
Regarding claim 2, Elsen teaches: The method according to Claim 1, further comprising: computing the activations for a current layer, by the sparse matrix-matrix multiplication (spMM) module, based on activations of a previous layer, sparse weight data of the sparse weight matrix of the current layer and sparse weight indices of the sparse weight matrix of the current layer in response to input datasets; (¶33, to compute W·X=Y, where W is the sparse weight matrix for the neural network layer, X is the input activation matrix) transposing the sparse weight indices of the current layer by the weight transpose module; transposing the sparse weight data of the current layer by the weight transpose; (¶39, In particular, the training system can provide, to the parallel processing device 100, i) the transpose of the current sparse weight matrix as the sparse matrix) computing activation gradients for the previous layer, by the sparse matrix-matrix multiplication (spMM) module, based on the transposed sparse weight indices of the current layer, the transposed sparse weight data of the current layer, and activation gradients of the current layer; (¶39, to compute WT·δY=δX, where W is a sparse weight matrix for a neural network layer, δY is the gradient of the output activation matrix for the neural network layer, and δX is the gradient of the input activation matrix for the neural network layer) computing weight gradients of the current layer, by the sampled dense-dense matrix multiplication (SDDMM) module, based on activations of the previous layer, the sparse weight indices of the current layer and the activation gradients of the current layer; (¶50, The parallel processing device 150 can execute the sampled dense-dense matrix multiplication to generate the gradient of the weight matrix (as the sparse output matrix 182) and use it to update the values of the weight matrix) and computing sparse weight data of the current layer for a next iteration based on the weight gradients and the current sparse weight data (¶50, to generate the gradient of the input activation matrix (as the sparse output matrix 182) and use it to continue the backpropagation). Regarding claim 3, Elsen fails to teach: The method according to Claim 2, wherein: the sparse weight data is transpose invariant; and the sparse weight indices is transpose invariant. However, Latorre teaches: wherein: the sparse weight data is transpose invariant; and the sparse weight indices is transpose invariant (¶21, As a two element constraint is imposed on two dimensions, the matrix 100 has two non-zero elements per row and also per column – see also Fig. 1A). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a transpose invariant sparse matrix as disclosed by Latorre in the method disclosed by Elsen to decrease memory usage (Latorre, ¶18, the introduced techniques use a much smaller amount of data to store and access a matrix than the conventional methods). 
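
Another illustrative aside: the office action reads "transpose invariant" as a matrix with the same per-row and per-column sparsity ratio, citing Latorre's example of two non-zero elements per row and per column (¶21, Fig. 1A). The sketch below, using a hypothetical 4×4 pattern, checks that constraint and shows why such a pattern also supports Latorre's pattern-number metadata (¶18): a 2-of-4 row has only C(4,2) = 6 possible layouts, so a small pattern ID can replace explicit column indices. The specific mask is illustrative, not taken from either reference.

import numpy as np
from itertools import combinations

# Hypothetical 4×4 mask with exactly two non-zeros per row and per column.
mask = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
], dtype=bool)

def same_row_col_sparsity(m, k=2):
    # The office action's reading of "transpose invariant": per-row and
    # per-column non-zero counts match, so the transpose obeys the same constraint.
    return (m.sum(axis=1) == k).all() and (m.sum(axis=0) == k).all()

assert same_row_col_sparsity(mask)
assert same_row_col_sparsity(mask.T)  # the structure survives transposition

# Pattern-number metadata in the spirit of Latorre ¶18: each row's non-zero
# layout is one of only six 2-of-4 patterns, so a 3-bit ID suffices per row.
patterns = list(combinations(range(4), 2))  # 6 possible patterns
row_pattern_ids = [patterns.index(tuple(int(i) for i in np.flatnonzero(row))) for row in mask]
print(row_pattern_ids)  # prints [0, 3, 5, 2]
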
Regarding claim 4, Elsen teaches: The method according to Claim 2, wherein the sparse matrix-matrix multiplication (spMM) module in the forward pass is reused utilizing the transpose invariant sparse weight matrix in the backward pass to compute the activation gradients (forward pass – ¶33, use the parallel processing device 100 to compute W·X=Y, where W is the sparse weight matrix for the neural network layer, X is the input activation matrix – and – backwards pass – ¶39, use the parallel processing device 100 to compute WT·δY=δX, where W is a sparse weight matrix for a neural network layer, δY is the gradient of the output activation matrix for the neural network layer, and δX is the gradient of the input activation matrix for the neural network layer). Regarding claim 5, Elsen teaches: The method according to Claim 2, wherein the neural network (NN) model is trained with non-zero computations using the transpose invariant sparse weight matrix (¶60, However, since many of the values of row r of the sparse matrix are null, the computing system can increase efficiency by only retrieving those values in the ith column of the second matrix that will be multiplied by a non-zero element of row r of the sparse matrix). Regarding claim 6, Elsen fails to teach: The method according to Claim 2, wherein the transposed sparse weight data is transposed from the sparse weight data in a compressed format. However, Latorre teaches wherein the transposed sparse weight data is transposed from the sparse weight data in a compressed format (¶3, One aspect provides a method of transposing a compressed sparse matrix). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to transpose sparse weight data in a compressed format to reduce computational cost (Latorre, ¶20, The introduced techniques thus are not only simpler and faster than the conventional methods, but also are much more efficient because they do not need to store the decompressed data before transposing). 
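
A final illustrative aside for claim 6: Latorre's method transposes data kept in its pattern-compressed form (¶3, ¶20); the toy sketch below makes the general point with a plain coordinate (COO) layout rather than Latorre's format. The transpose is obtained by reinterpreting the stored indices, and the dense matrix is never materialized. All values and coordinates are hypothetical.

import numpy as np

# Compressed (COO-style) storage: non-zero values plus their (row, col) coordinates.
vals = np.array([0.5, -1.2, 3.0, 0.7])
rows = np.array([0, 1, 2, 3])
cols = np.array([2, 0, 3, 1])

# Transpose while still compressed: swap the roles of the row and column indices.
t_vals, t_rows, t_cols = vals, cols, rows

# Sanity check against the dense equivalent (built only to verify the result).
dense = np.zeros((4, 4)); dense[rows, cols] = vals
dense_t = np.zeros((4, 4)); dense_t[t_rows, t_cols] = t_vals
assert np.array_equal(dense_t, dense.T)
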
Regarding claim 7, Elsen teaches: A system for neural network (NN) model training comprising: a multiplication module including one or more sparse matrix-matrix multiplication (spMM) modules and one or more sampled dense-dense matrix multiplication (SDDMM) modules; a weight data transpose module configured to transpose sparse weight data of a… sparse weight matrix for a current layer; a weight indices transpose module configured to transpose sparse weight indices of the sparse weight matrix for the current layer; (¶39, In particular, the training system can provide, to the parallel processing device 100, i) the transpose of the current sparse weight matrix as the sparse matrix 102) the one or more sparse matrix-matrix multiplication (spMM) modules configured to, compute activations for a current layer based on the activations for a previous layer, the sparse weight data for the current layer, and the sparse weight indices for the current layer in forward propagation of a current cycle of batch datasets; (¶33, to compute W·X=Y, where W is the sparse weight matrix for the neural network layer, X is the input activation matrix) and compute activation gradients for the previous layer based on the transposed sparse weight data of the current layer, the transposed sparse weight indices of the current layer and activation gradients for the current layer in back propagation; (¶39, to compute WT·δY=δX, where W is a sparse weight matrix for a neural network layer, δY is the gradient of the output activation matrix for the neural network layer, and δX is the gradient of the input activation matrix for the neural network layer) the one or more sampled dense-dense matrix multiplication (SDDMM) modules configured to compute weight gradients for the current layer based on the activation gradients for the current layer and the activations for the previous layer in the back propagation; (¶50, The parallel processing device 150 can execute the sampled dense-dense matrix multiplication to generate the gradient of the weight matrix (as the sparse output matrix 182) and use it to update the values of the weight matrix) and a weight update module configured to compute new sparse weights based on the current weight gradients (¶50, generate the gradient of the weight matrix… use it to update the values of the weight matrix). Elsen fails to teach a sparse matrix that is transpose invariant. However, in the same field of endeavor, Latorre teaches a sparse matrix that is transpose invariant (¶21, As a two element constraint is imposed on two dimensions, the matrix 100 has two non-zero elements per row and also per column. – see also Fig. 1A). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a transpose invariant sparse matrix as disclosed by Latorre in the system disclosed by Elsen to decrease memory usage (Latorre, ¶18, The introduced techniques change how metadata of a compressed sparse matrix has been considered. Unlike the conventional methods, which have considered indices of nonzero elements along the compression dimension as metadata of a compressed sparse matrix, the introduced techniques consider patterns of its non-zero elements in logical space (in uncompressed form) as a sparse matrix's metadata. This is based on a recognition that a sparse matrix of certain size, e.g., 4×4 matrix, can have only a limited number of patterns, i.e., locations of non-zeros elements. The introduced techniques use the pattern number as a compressed sparse matrix's metadata. As the data needed to represent each pattern is much smaller than the data needed to represent indices of non-zero elements along the compression dimension, the introduced techniques use a much smaller amount of data to store and access a matrix than the conventional methods).

Regarding claims 8-11, they recite limitations similar to those of claims 3-6 respectively and are rejected on the same grounds.

Regarding claim 14, it is an apparatus implementing the method of claim 1 and is rejected on the same grounds – see above.

Regarding claims 15-19, they recite limitations similar to those of claims 2-6 respectively and are rejected on the same grounds.

Regarding claim 20, Elsen teaches: The one or more computing device readable media having instructions stored thereon that when executed by the one or more processing units perform the method according to Claim 15, wherein the sampled dense-dense matrix multiplication (SDDMM) module utilizes portions of the sparse matrix-matrix multiplication (spMM) module to compute sampled dense-dense matrix multiplication functions (¶70, The computing system is configured to execute a sampled dense-dense matrix multiplication using the two dense matrices 260 and 270 and the sparse input matrix 252 to generate a sparse output matrix 280 of size M×N).

Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Elsen in view of Latorre as applied to claim 7 above, and further in view of Pudipeddi et al. (US 20210019151 A1), herein Pudipeddi.

Regarding claim 12, Elsen in view of Latorre fails to explicitly teach: The system of Claim 7, further comprising a memory configured to: store the activations for the current layer for use as the activation for the previous layer for a next batch dataset; store the activation gradients for the current layer for use as the activation gradients for the previous layer for the next batch dataset; store the new sparse weight data for use as the sparse weight data for the current layer for the batch dataset; and store the spare weight data for the current layer and the sparse weight indices for the current layer. However, in the same field of endeavor, Pudipeddi teaches: further comprising a memory configured to: store the activations for the current layer for use as the activation for the previous layer for a next batch dataset; store the activation gradients for the current layer for use as the activation gradients for the previous layer for the next batch dataset; store the new sparse weight data for use as the sparse weight data for the current layer for the batch dataset; and store the spare weight data for the current layer and the sparse weight indices for the current layer (¶44, Parameter server 102 may be configured to store an AI model 106 in memory 104. AI model 106 may include weights 108 and during execution of AI model 106, activations 112 and gradients 110 may be stored in memory 104 – during execution of an AI model, there will typically be a forward and backward pass involved for multiple datasets).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use intermediate data storage as disclosed by Pudipeddi in the system disclosed by Elsen in view of Latorre to avoid the computational cost of recomputing model parameters (Pudipeddi, ¶36, activations may be recomputed in the backward pass as a tradeoff between the computational cost and efficient memory usage).

Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Elsen in view of Latorre and Pudipeddi as applied to claim 12 above, and further in view of Sumbul et al. (US 20180181861 A1), herein Sumbul.

Regarding claim 13, Elsen in view of Latorre and Pudipeddi fails to explicitly teach: The system of Claim 12, wherein utilization of the memory is reduced proportional to the non-zero value weight ratio of the sparse weight matrix as compared to training the neural network (NN) using a dense weight matrix. However, in the same field of endeavor, Sumbul teaches wherein utilization of the memory is reduced proportional to the non-zero value weight ratio of the sparse weight matrix as compared to training the neural network (NN) using a dense weight matrix (¶63, The sparser the network (e.g., the less is the sparsity percentage), the higher is the number free entries in the memory 304). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to decrease memory utilization proportionally to the increase in sparsity ratio for the weight matrix as disclosed by Sumbul in the system disclosed by Elsen in view of Latorre and Pudipeddi to save memory (Sumbul, ¶63, to save storage area).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HARRISON CHAN YOUNG KIM whose telephone number is (571)272-0713. The examiner can normally be reached Monday - Thursday 10:00 am - 6:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Cesar Paula can be reached at (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.

To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HARRISON C KIM/
Examiner, Art Unit 2145

/CESAR B PAULA/
Supervisory Patent Examiner, Art Unit 2145

Prosecution Timeline

Jul 15, 2022
Application Filed
Jun 17, 2025
Non-Final Rejection — §101, §103
Sep 08, 2025
Response Filed
Dec 01, 2025
Final Rejection — §101, §103 (current)

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 50%
With Interview: 83% (+33.3%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 6 resolved cases by this examiner. Grant probability derived from career allow rate.
