Prosecution Insights
Last updated: April 19, 2026
Application No. 18/056,331

SYSTEM AND METHOD FOR STRUCTURING A TENSOR NETWORK

Non-Final OA: §102, §103
Filed: Nov 17, 2022
Examiner: CHEN, KUANG FU
Art Unit: 2143
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intel Corporation
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 81%, above average (203 granted / 252 resolved; +25.6% vs TC avg)
Interview Lift: +67.0%, strong (resolved cases with interview vs. without)
Typical Timeline: 2y 11m avg prosecution; 37 applications currently pending
Career History: 289 total applications across all art units
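The headline figures are internally consistent; a quick sanity check (the numbers are copied from this report, and reading "+25.6%" as a percentage-point difference is an assumption):

```python
# Sanity check of the examiner statistics quoted above; values are taken
# from the report, and the percentage-point reading of "+25.6%" is assumed.
granted = 203
resolved = 252

allow_rate = granted / resolved        # 0.8056, reported as 81%
assert round(allow_rate * 100) == 81

# Implied Tech Center average under the percentage-point reading:
tc_avg = allow_rate - 0.256            # roughly 55%
print(f"career allow rate: {allow_rate:.1%}; implied TC average: {tc_avg:.1%}")
```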

Statute-Specific Performance

§101: 18.4% (-21.6% vs TC avg)
§103: 47.4% (+7.4% vs TC avg)
§102: 11.5% (-28.5% vs TC avg)
§112: 14.0% (-26.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 252 resolved cases.

Office Action

Rejections: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the claims elected 1/7/2026. Claims 1-9 and 18-25 are presented for examination.

Election/Restrictions

Claims 10-17 are withdrawn as they are directed to a nonelected invention, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 1/7/2026.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 5-6, 8-9, 18, 22-23 and 25 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by CHENG et al. (hereinafter CHENG), US 2021/0241094 A1.

Regarding independent claim 1, CHENG discloses a computer-implemented method for determining a structure for a tensor network ([0006] "present disclosure provide computer-implemented method for selecting ranks to decompose weight tensors... of a pretrained deep neural network"; [0033]-[0035] and FIG. 1 disclose that the method determines the "Tensor Ring" structure, specifically the "tensor ranks" which define the dimensions of the bonds between the nodes in the tensor network), the method comprising:

calculating a cost function for a tensor network structure ([0006] the method involves "performing inference on a target dataset using the pretrained DNN with the decomposed weight tensors to obtain a reward metric"; this reward metric serves as the inverse cost function for the decomposed weight tensors; [0033]-[0034] wherein tensor ring decomposition (for a tensor network structure) is explicitly described as having cores (nodes) and ranks (bonds) as shown in FIG. 1(d)), the tensor network structure comprising a plurality of nodes and one or more bonds between at least some of the plurality of nodes ([0033]-[0034] wherein tensor ring decomposition (the tensor network structure) is explicitly described as having cores (nodes) and ranks (bonds) as shown in FIG. 1(d));

modifying the tensor network structure to generate a modified tensor network structure ([0006] the method involves an agent "to determine an action value related to a rank for the layer" and "decomposing its weight tensor according to its rank determined from its action value"; [0034]-[0035] wherein modification of the "rank" changes the bond size between tensor nodes of the tensor ring);

re-calculating the cost function for the modified tensor network structure ([0006], [0033], [0055] the process "iterating, until a stop condition has been reached"; in each iteration, the system performs inference on the newly decomposed tensor ring (the modified tensor network structure) to "obtain a reward metric", construed as re-calculating the cost function);

determining, based at least on the re-calculated cost function, to output the modified tensor network structure ([0006]-[0007] the method includes "responsive to a stop condition having been reached, outputting ranks...corresponding to a best reward metric"; [0033], [0055]-[0056] the determination to output the rank of the corresponding modified tensor ring decomposition weight tensor is based on finding the "best reward metric"); and

outputting the modified tensor network structure ([0007] the method outputs the "ranks" and, per FIG. 3 step 325 and FIG. 4 step 410, the "modified pretrained DNN" itself; by outputting the specific ranks that define the tensor ring, the method outputs the updated tensor ring weight tensor structure).

Regarding dependent claim 5, CHENG discloses the method of claim 1, wherein modifying the tensor network structure comprises changing a bond size between a pair of nodes in the tensor network structure ([0026], [0042]-[0044], [0052]-[0053] by "selecting ranks" for the Tensor Ring decomposition, the "rank" R determines the dimension of the indices connecting the tensor nodes (cores), which is the "bond size"; these ranks are modified (e.g., trying ranks 10, 11, 9) using Reinforcement Learning to optimize the weight tensor structure of the Tensor Ring, FIG. 1(d)).

Regarding dependent claim 6, CHENG discloses the method of claim 1, wherein the cost function rewards tensor network structures that allow bonds between tensor nodes to be contracted in parallel ([0036], [0042]-[0044] disclose optimizing for the "Tensor Ring" (TR) format, wherein the TR format "does not require strict ordering of multilinear products between cores... due to the trace operation"; this property allows parallel contraction, and since the construed cost function rewards the selection of efficient TR structures, it inherently rewards structures that allow parallel contraction).

Regarding dependent claim 8, CHENG discloses the method of claim 1, wherein the cost function rewards tensor network structures with greater computational efficiency in contraction of the tensor network ([0005]-[0006] the cost function (reward metric) is based on "inference accuracy and model compression"), wherein computational efficiency is represented by at least one of convergence time and number of computing operations to contract the tensor network ([0032], [0069] reducing the number of parameters (compression) reduces the "computation workload" (number of computing operations) and accelerates "inference speed" (convergence time)).

Regarding dependent claim 9, CHENG discloses the method of claim 1, further comprising: determining, based on the re-calculated cost function, that a cost function convergence criterion has been achieved ([0074] "1. …iterating, until a stop condition has been reached"; [0055]-[0056] stop conditions include when "the difference between reward metrics of consecutive iterations is less than a first threshold value" (convergence criterion)); and outputting the modified tensor network structure in response to the cost function convergence criterion being achieved ([0074] upon reaching this condition, "1. …outputting ranks for each layer of the pretrained DNN that had its weight tensor decomposed corresponding to a best reward metric").

Regarding claims 18, 22-23, and 25, these are one or more non-transitory computer-readable media claims that are substantially the same as computer-implemented method claims 1, 5-6, and 9, respectively. Thus, claims 18, 22-23, and 25 are rejected for the same reasons as claims 1, 5-6, and 9.
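The claim 1 mapping above describes a generic structure-search loop: calculate a cost, modify the structure (here, change a bond rank), re-calculate, and output the best structure once a stop condition is met. A minimal sketch of such a loop; the function names, the random rank perturbation, and the toy cost function are illustrative assumptions, not CHENG's actual reinforcement-learning agent:

```python
import random

def search_structure(initial_ranks, cost_fn, max_iters=100, tol=1e-6):
    """Iteratively perturb bond ranks and keep the lowest-cost structure.

    Mirrors the loop the rejection maps onto CHENG: calculate a cost,
    modify the structure, re-calculate, and output the best structure
    once a stop condition (convergence or iteration cap) is reached.
    """
    best_ranks = list(initial_ranks)
    best_cost = cost_fn(best_ranks)
    prev_cost = best_cost
    for _ in range(max_iters):
        # Modify the tensor network structure: change one bond size (rank).
        candidate = list(best_ranks)
        i = random.randrange(len(candidate))
        candidate[i] = max(1, candidate[i] + random.choice([-1, 1]))
        cost = cost_fn(candidate)  # re-calculate the cost function
        if cost < best_cost:
            best_ranks, best_cost = candidate, cost
        # Stop condition: consecutive costs differ by less than a threshold.
        if abs(prev_cost - cost) < tol:
            break
        prev_cost = cost
    return best_ranks, best_cost

# Toy cost favoring ranks near 4 (a stand-in for accuracy vs. compression).
toy_cost = lambda ranks: sum((r - 4) ** 2 for r in ranks)
ranks, cost = search_structure([8, 2, 10], toy_cost)
```

The best-so-far bookkeeping matches the "best reward metric" output criterion, and the consecutive-cost threshold matches the quoted convergence stop condition.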
In addition, CHENG discloses one or more non-transitory computer-readable media storing instructions executable to perform operations…the operations comprising ([0072]).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-3 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over CHENG, as applied in the rejection of claims 1 and 18 above, in view of Meirom et al. (hereinafter Meirom), "Optimizing Tensor Network Contraction Using Reinforcement Learning" (June 28, 2022).

Regarding dependent claim 2, CHENG teaches all the elements of claim 1. CHENG does not expressly teach further comprising: inputting one or more values representing a physical material into the modified tensor network structure; and receiving at least one output from the modified tensor network structure, the at least one output representing a property of the physical material.
However, Meirom teaches inputting and outputting values representing a physical material (page 1, "Going beyond simulations of quantum computers, tensor network contractions play a key role in other areas of science including many-body physics…statistical mechanics", and Figure 1 shows "Input: Tensor Network" and an output, wherein "many-body physics…statistical mechanics" model physical materials).

Because CHENG and Meirom both address utilizing reinforcement learning to optimize tensor networks, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of inputting and outputting values representing a physical material, as suggested by Meirom, into CHENG's method, with a reasonable expectation of success, such that CHENG's "rank selection" optimization includes the "many-body physics" tensor networks described by Meirom, to teach further comprising: inputting one or more values representing a physical material into the modified tensor network structure; and receiving at least one output from the modified tensor network structure, the at least one output representing a property of the physical material. This modification would have been motivated by the desire to achieve efficient simulations of physical materials (Meirom Abstract).

Regarding dependent claim 3, CHENG teaches all the elements of claim 1. CHENG does not expressly teach wherein modifying the tensor network structure comprises at least one of: adding a node to the tensor network structure; and removing a node from the tensor network structure. However, Meirom teaches modifying a tensor network structure by adding a node or removing a node (page 4, left column, describes the contraction process as an iterative graph modification where "a new tensor network... is defined by removing the two contracted tensors u, v [nodes] and adding the resulting tensor [node]").
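The contraction step quoted from Meirom is a pure graph edit: delete the two contracted nodes, insert the resulting node, and re-attach the surviving bonds. A small sketch; the set-based node/edge representation and the names are hypothetical conveniences, not from either reference:

```python
def contract_nodes(nodes, edges, u, v):
    """Contract tensors u and v: remove both nodes, add the resulting
    tensor as a new node, and re-attach their remaining bonds to it."""
    new = f"({u}*{v})"
    nodes = (nodes - {u, v}) | {new}
    edges = {
        frozenset(new if x in (u, v) else x for x in e)
        for e in edges
        if e != frozenset({u, v})  # the contracted bond disappears
    }
    return nodes, edges

# Chain A-B-C: contracting A and B leaves a two-node network (A*B)-C.
nodes, edges = contract_nodes(
    {"A", "B", "C"},
    {frozenset({"A", "B"}), frozenset({"B", "C"})},
    "A", "B",
)
```

Iterating this edit until one node remains is exactly the "iterative graph modification" view of contraction that the rejection relies on.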
Because CHENG and Meirom both address utilizing reinforcement learning to optimize tensor networks, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of modifying a tensor network structure by adding a node or removing a node, as suggested by Meirom, into CHENG's method, with a reasonable expectation of success, such that CHENG's reinforcement learning approach to optimizing the bond dimensions/ranks within a topology can be enhanced to handle dynamic structural changes to the graph topology via the node addition/removal described by Meirom, to teach wherein modifying the tensor network structure comprises at least one of: adding a node to the tensor network structure; and removing a node from the tensor network structure. This modification would have been motivated by the desire to achieve efficient simulations of physical materials (Meirom Abstract).

Regarding dependent claims 19-20, these are one or more non-transitory computer-readable media claims that are substantially the same as computer-implemented method claims 2-3, respectively. Thus, claims 19-20 are rejected for the same reasons as claims 2-3.

Claims 4 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over CHENG, as applied in the rejection of claims 1 and 18 above, in view of Li et al. (hereinafter Li), "Evolutionary Topology Search for Tensor Network Decomposition" (2020).

Regarding dependent claim 4, CHENG teaches all the elements of claim 1. CHENG does not expressly teach wherein modifying the tensor network structure comprises at least one of: adding a bond between a pair of nodes in the tensor network structure; and removing a bond between a pair of nodes in the tensor network structure.
However, Li teaches modifying a tensor network structure (Section 2.2 and Section 3.2 describe an adjacency matrix representing a tensor network (TN) structure, wherein the adjacency matrix encodes the TN's topology into a fixed-length binary string that is modified by flipping bits in the string) comprising at least one of: adding a bond between a pair of nodes in the tensor network structure; and removing a bond between a pair of nodes in the tensor network structure (Section 3.2, Figure 3, and page 6, left column, describe modifying the adjacency matrix by "flipping each bit independently", which adds or removes a connection between pairs of nodes).

Because CHENG and Li both address modifying the structure of tensor networks, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings wherein modifying a tensor network structure comprises at least one of: adding a bond between a pair of nodes in the tensor network structure; and removing a bond between a pair of nodes in the tensor network structure, as suggested by Li, into CHENG's method, with a reasonable expectation of success, such that CHENG's reinforcement learning approach to optimizing the bond dimensions/ranks within a topology can be enhanced with Li's topology search adding/removing bonds between pairs of nodes, to teach wherein modifying the tensor network structure comprises at least one of: adding a bond between a pair of nodes in the tensor network structure; and removing a bond between a pair of nodes in the tensor network structure.
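Li's encoding reduces structural edits to bit flips on a flattened upper-triangular adjacency matrix. A sketch of that mutation step; the function names and the per-bit flip probability are illustrative assumptions, not values from the paper:

```python
import random

def mutate_topology(adj_bits, flip_prob=0.1, rng=random):
    """Flip each bit independently: 0->1 adds a bond between the
    corresponding pair of nodes, 1->0 removes one."""
    return [b ^ 1 if rng.random() < flip_prob else b for b in adj_bits]

def bonds_from_bits(adj_bits, n):
    """Decode the flat bit string into the (i, j) node pairs that share a bond."""
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return [p for p, b in zip(pairs, adj_bits) if b]

# For 4 nodes the string has 6 bits; this one encodes a chain 0-1-2-3.
chain = [1, 0, 0, 1, 0, 1]
print(bonds_from_bits(chain, 4))  # [(0, 1), (1, 2), (2, 3)]
```

A single flip is exactly the claimed "adding a bond" or "removing a bond" between one pair of nodes, which is why the rejection maps Li onto claim 4.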
This modification would have been motivated by the desire to effectively discover the ground-truth topology, or even better structures, with a small number of generations, and to significantly boost the representational power of TN decomposition compared with well-known tensor-train (TT) or tensor-ring (TR) models (Li Abstract).

Regarding dependent claim 21, this is a non-transitory computer-readable media claim that is substantially the same as computer-implemented method claim 4. Thus, claim 21 is rejected for the same reason as claim 4.

Claims 7 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over CHENG, as applied in the rejection of claims 1 and 18 above, in view of Fawzi et al. (hereinafter Fawzi), "Discovering faster matrix multiplication algorithms with reinforcement learning" (October 5, 2022).

Regarding dependent claim 7, CHENG teaches all the elements of claim 1. CHENG does not expressly teach wherein the cost function rewards tensor network structures that reduce interactions with a cache during contraction of the tensor network. However, Fawzi teaches a cost function that rewards tensor network structures during contraction (page 51, Section "Rapid tailored algorithm discovery", describes modifying the reward of AlphaTensor to include "the negative of the runtime…on the target hardware") reducing interactions with a cache (page 52, left column, describes that the discovered algorithms allow operations to be "efficiently fused"; operation fusion is a standard technique specifically designed to increase cache locality and reduce cache interactions (memory bandwidth pressure)).
Because CHENG and Fawzi both address reducing the computation workload associated with tensor network operations, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of a cost function that rewards tensor network structures during contraction reducing interactions with a cache, as suggested by Fawzi, into CHENG's method, with a reasonable expectation of success, such that CHENG's reward function based on model size is refined with Fawzi's hardware-specific benchmarking reward (runtime/fusion) to explicitly target cache efficiency, which is the primary bottleneck for the GPUs CHENG utilizes, to teach wherein the cost function rewards tensor network structures that reduce interactions with a cache during contraction of the tensor network. This modification would have been motivated by the desire to accelerate the process of algorithmic discovery for the tensor network on a range of problems, and to optimize for different criteria (Fawzi Abstract).

Regarding dependent claim 24, this is a non-transitory computer-readable media claim that is substantially the same as computer-implemented method claim 7. Thus, claim 24 is rejected for the same reason as claim 7.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Ye et al., "Quantum Architecture Search via Continual Reinforcement Learning" (2021) (Abstract: Quantum computing has promised significant improvement in solving difficult computational tasks over classical computers. Designing quantum circuits for practical use, however, is not a trivial objective and requires expert-level knowledge. To aid this endeavor, this paper proposes a machine learning-based method to construct quantum circuit architectures.
Previous works have demonstrated that classical deep reinforcement learning (DRL) algorithms can successfully construct quantum circuit architectures without encoded physics knowledge. However, these DRL-based works are not generalizable to settings with changing device noises, thus requiring considerable amounts of training resources to keep the RL models up-to-date. With this in mind, we incorporated continual learning to enhance the performance of our algorithm. In this paper, we present the Probabilistic Policy Reuse with deep Q-learning (PPR-DQL) framework to tackle this circuit design challenge. By conducting numerical simulations over various noise patterns, we demonstrate that the RL agent with PPR was able to find the quantum gate sequence to generate the two-qubit Bell state faster than an agent trained from scratch. The proposed framework is general and can be applied to other quantum gate synthesis or control problems – including the automatic calibration of quantum devices).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KUANG FU CHEN, whose telephone number is (571) 272-1393. The examiner can normally be reached M-F, 9:00 am-5:30 pm ET.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KC CHEN/
Primary Patent Examiner, Art Unit 2143

/JENNIFER N WELCH/
Supervisory Patent Examiner, Art Unit 2143

Prosecution Timeline

Nov 17, 2022: Application Filed
Dec 30, 2022: Response after Non-Final Action
Feb 06, 2026: Non-Final Rejection, §102 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579425: PARAMETERIZED ACTIVATION FUNCTIONS TO ADJUST MODEL LINEARITY (granted Mar 17, 2026; 2y 5m to grant)
Patent 12566994: SYSTEMS AND METHODS TO CONFIGURE DEFAULTS BASED ON A MODEL (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561593: METHOD FOR DETERMINING PRESENCE OF A SIGNATURE CONSISTENT WITH A PAIR OF MAJORANA ZERO MODES AND A QUANTUM COMPUTER (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561561: Mapping User Vectors Between Embeddings For A Machine Learning Model for Authorizing Access to Resource (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561497: AUTOMATED OPERATING MODE DETECTION FOR A MULTI-MODAL SYSTEM WITH MULTIVARIATE TIME-SERIES DATA (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 81% (99% with interview; +67.0% lift)
Median Time to Grant: 2y 11m
PTA Risk: Low

Based on 252 resolved cases by this examiner. Grant probability derived from career allow rate.
