Prosecution Insights
Last updated: April 19, 2026

Application No.: 18/012,528
Title: Distributed Quantum Computing Simulation Method and Apparatus
Status: Non-Final Office Action (§101, §103)
Filed: Dec 22, 2022
Examiner: MCINTOSH, ANDREW T
Art Unit: 2144
Tech Center: 2100 — Computer Architecture & Software
Assignee: Suzhou Inspur Intelligent Technology Co., Ltd.
OA Round: 1 (Non-Final)

Predictions
Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 0m
Grant Probability with Interview: 95%

Examiner Intelligence

Career Allow Rate: 77% (above average; 393 granted / 511 resolved; +21.9% vs TC avg)
Interview Lift: +18.0% allowance-rate lift on resolved cases with an interview (strong)
Typical Timeline: 3y 0m average prosecution; 27 applications currently pending
Career History: 538 total applications across all art units

Statute-Specific Performance

§101: 14.1% (-25.9% vs TC avg)
§103: 56.7% (+16.7% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 7.5% (-32.5% vs TC avg)

Tech Center averages are estimates; based on career data from 511 resolved cases.

Office Action (§101, §103)
DETAILED ACTION

This action is responsive to communications filed on April 25, 2023. This action is made Non-Final. Claims 1-4 and 6-21 are pending in the case. Claims 1 and 8 are independent claims. Claims 1, 2, 6-12, and 16-21 are rejected.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.

Information Disclosure Statement

The information disclosure statement(s) (IDS) submitted on 03/15/2023 is/are in compliance with the provisions of 37 C.F.R. 1.97. Accordingly, the IDS(s) is/are being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 8-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.

Claims 8-12: Claim 8 recites an “apparatus, comprising a main process node and a plurality of sub-process nodes ... to perform quantum computing simulation”. The recited “apparatus” can be embodied in the form of software, and is not necessarily a hardware component (e.g., a processor). Accordingly, the recited “apparatus” is computer software per se and not a “process,” a “machine,” a “manufacture,” or a “composition of matter,” as defined in 35 U.S.C. 101. Claims 9-12 depend from claim 8 and merely further define the “apparatus”. Thus, claims 8-12 fail to recite statutory subject matter.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 2, 6-8, 16, 17, and 19-21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Huang, Cupjin, et al., "Classical simulation of quantum supremacy circuits," arXiv preprint arXiv:2005.06787 (2020) (“Huang”), in view of Farshbaf, Mehdi, and Mohammad-Reza Feizi-Derakhshi, "Multi-objective optimization of graph partitioning using genetic algorithms," 2009 Third International Conference on Advanced Engineering Computing and Applications in Sciences, IEEE, 2009 (“Farshbaf”), and further in view of Pan, Feng, et al., "Contracting arbitrary tensor networks: general approximate algorithm and applications in graphical models and quantum circuit simulations," Physical Review Letters 125.6 (2020): 060503 (“Pan”).

Claim 1: Huang teaches or suggests a distributed quantum computing simulation method, comprising:

converting a quantum circuit to be simulated into a tensor network that is represented by an ... graph, and segmenting the undirected graph into a plurality of sub-graphs ... that is based on operation resources of a distributed system (see Fig. 1, 3; §3 - We view a quantum circuit as a tensor network. A tensor network is an attributed multihypergraph H = (V;E), where each vertex v is associated to a multi-dimensional array Tv called a tensor, and each hyperedge is associated to a shared index among all the tensors it connects. two vertices in the hypergraph are chosen. The corresponding tensors are contracted according to their shared indices, and the two-vertex subgraph is replaced by a single vertex representing the newly formed tensor. This process is repeated until only one vertex is left. where each sub-tensor network corresponds to the full tensor network with the fixed hyperedges removed. Contracting these sub-tensor networks is perfectly parallelizable; §4 Hypergraph partitioning - divides the vertices of a hypergraph into several components so that the size of each component is neither too small nor too big, while minimizing the number of interconnecting hyperedges; §A.1.2 - corresponding subgraph is then replaced by a single vertex preserving all the outgoing connections, with an updated tensor associated to it.);

respectively performing, on sub-process nodes, tensor contraction and merging on the plurality of sub-graphs for connected tensors until only one tensor is left, so as to finally obtain zero-order tensors of the plurality of sub-graphs at the same time (see Fig. 1, 3; §1.1 - parallelizable problem into nearly-identical subtasks that run in parallel; §3 - We view a quantum circuit as a tensor network. A tensor network is an attributed multihypergraph H = (V;E), where each vertex v is associated to a multi-dimensional array Tv called a tensor, and each hyperedge is associated to a shared index among all the tensors it connects. two vertices in the hypergraph are chosen. The corresponding tensors are contracted according to their shared indices, and the two-vertex subgraph is replaced by a single vertex representing the newly formed tensor. This process is repeated until only one vertex is left. small subset of indices. For each assignment of these indices, the computational subtask is itself a tensor network contraction. where each sub-tensor network corresponds to the full tensor network with the fixed hyperedges removed. Contracting these sub-tensor networks is perfectly parallelizable; §4 Hypergraph partitioning - divides the vertices of a hypergraph into several components so that the size of each component is neither too small nor too big, while minimizing the number of interconnecting hyperedges; §A.1.2 - corresponding subgraph is then replaced by a single vertex preserving all the outgoing connections, with an updated tensor associated to it; §A.4.1 - split the large tensor network contraction task into many smaller tensor network contraction subtasks. This step is called preprocessing. assign different subtasks to different computation nodes.);

acquiring and superposing the zero-order tensors of the plurality of sub-graphs from the sub-process nodes at the same time, so as to determine a zero-order tensor of the ... graph, and using the zero-order tensor of the ... graph ... so as to perform quantum computing simulation (see Fig. 1, 3; §Abstract - a tensor network-based classical simulation algorithm; §1.1 - parallelizable problem into nearly-identical subtasks that run in parallel; §1.2 - tensor network-based algorithm dynamically decomposes the computational task into many smaller subtasks that can be executed in parallel on a cluster; §3 - We view a quantum circuit as a tensor network. A tensor network is an attributed multihypergraph H = (V;E), where each vertex v is associated to a multi-dimensional array Tv called a tensor, and each hyperedge is associated to a shared index among all the tensors it connects. two vertices in the hypergraph are chosen. The corresponding tensors are contracted according to their shared indices, and the two-vertex subgraph is replaced by a single vertex representing the newly formed tensor. This process is repeated until only one vertex is left. small subset of indices. For each assignment of these indices, the computational subtask is itself a tensor network contraction. where each sub-tensor network corresponds to the full tensor network with the fixed hyperedges removed. Contracting these sub-tensor networks is perfectly parallelizable; §4 Hypergraph partitioning - divides the vertices of a hypergraph into several components so that the size of each component is neither too small nor too big, while minimizing the number of interconnecting hyperedges; §A.1.2 - corresponding subgraph is then replaced by a single vertex preserving all the outgoing connections, with an updated tensor associated to it; §A.4.1 - split the large tensor network contraction task into many smaller tensor network contraction subtasks. This step is called preprocessing. assign different subtasks to different computation nodes.).

Huang does not explicitly disclose undirected graph; by using a genetic algorithm; as a probability amplitude of a positive operator value measurement element.

Pan teaches or suggests undirected graph; as a probability amplitude of a positive operator value measurement element (see Fig. 1-4; p. 2, col. 1 - contraction, using approaches analogous to the density matrix renormalization group (DMRG) [11], until the final result, a scalar Z, is obtained. We show applications of our method in graphical models, where Z represents the normalization factor of the joint distribution of a large number of random variables (i.e., the partition function in physics), and applications in quantum circuit simulations, where Z represents a single amplitude of the quantum circuit; p. 3, col. 2 - probability distribution over discrete variables is a tensor, thus every graphical model can be converted to a tensor network by introducing copy tensors on each node of the graph and matrices (or tensors) on each edge (or multibody factor) of the (factor) graph. The computation of the partition function Z naturally translates to contraction of the tensor network defined exactly on the same graph; p. 4, col. 1 - computing single amplitude estimates of a superconducting quantum circuit [45], which can be treated as a graphical model with complex couplings; p. 4, col. 2 - approximate single-amplitude simulation of quantum circuits with any kind of connectivities ... after converting the initial state, the measurement qubit string, and the gates into tensors.).

Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Huang, to include undirected graph; as a probability amplitude of a positive operator value measurement element for the purpose of efficiently simulating quantum circuits based on contraction applied to undirected graphs and determining amplitudes as quantum output, improving quantum computer modeling, as taught by Pan (2-4).

Farshbaf teaches or suggests by using a genetic algorithm (see Abstract - prevent the GA from getting stuck in the local optima and make the search more efficient and increase the probability of finding more optimal solutions; §I – Graph partitioning is used directly in a wide range of problems, such as design allocation problems [1] and job scheduling in multi processor architecture [2]. Graph partitioning involves dividing a set of objects into a specified number of partitions according to the minimization of some optimization criterion additive over the partitions ... improvements are injecting best solutions of last runs into the first generation of the next runs and also storing the non-dominated set of the previous generations to combine with the last generation's non-dominated set increase the opportunity for better solutions; §II, A - Pareto optimal solution sets are often preferred to single solutions because they can be practical when considering real-life problems.).
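Farshbaf's multi-objective GA is considerably more elaborate (Pareto-optimal sets, injection of prior-run solutions), but the core idea it relies on, evolving partition assignments against a cut-size and balance objective, can be illustrated with a minimal sketch. All function names and parameters below are illustrative choices, not taken from the reference:

```python
import random

def cut_size(edges, assign):
    # Number of edges whose endpoints fall in different partitions
    # (the "interconnecting" edges the partitioner tries to minimize).
    return sum(1 for u, v in edges if assign[u] != assign[v])

def fitness(edges, assign, k):
    # Lower is better: cut edges plus a penalty for imbalanced
    # partition sizes ("neither too small nor too big").
    counts = [assign.count(p) for p in range(k)]
    return cut_size(edges, assign) + (max(counts) - min(counts))

def ga_partition(n, edges, k=2, pop_size=30, generations=100, seed=0):
    rng = random.Random(seed)
    # Chromosome: one partition label per vertex.
    population = [[rng.randrange(k) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda a: fitness(edges, a, k))
        survivors = population[:pop_size // 2]        # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)                 # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                    # point mutation
                child[rng.randrange(n)] = rng.randrange(k)
            children.append(child)
        population = survivors + children
    return min(population, key=lambda a: fitness(edges, a, k))
```

On a toy graph of two triangles joined by one bridge edge, this sketch typically recovers the bridge as the sole cut edge.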
Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Huang, to include by using a genetic algorithm for the purpose of efficiently implementing graph partitioning using genetic algorithms, increasing partition solution optimization and practicality, as taught by Farshbaf (I and II).

Claim(s) 8: Claim(s) 8 correspond to Claim 1, and thus, Huang, Pan, and Farshbaf teach or suggest the limitations of claim(s) 8 as well.

Claim 2: Pan further teaches or suggests wherein the step of converting the quantum circuit to be simulated into the tensor network that is represented by the undirected graph, comprises: converting, by using a trace operation, an input state, an operation gate and a measurement of a quantum bit in the quantum circuit into tensors, and determining the tensors to be vertexes in the undirected graph; and determining connection relationships among the input state, the operation gate and the measurement of the quantum bit in the quantum circuit to be connected edges between corresponding vertices in the undirected graph (see Fig. 1-4; p. 2, col. 1 - contraction, using approaches analogous to the density matrix renormalization group (DMRG) [11], until the final result, a scalar Z, is obtained. We show applications of our method in graphical models, where Z represents the normalization factor of the joint distribution of a large number of random variables (i.e., the partition function in physics), and applications in quantum circuit simulations, where Z represents a single amplitude of the quantum circuit; p. 3, col. 2 - probability distribution over discrete variables is a tensor, thus every graphical model can be converted to a tensor network by introducing copy tensors on each node of the graph and matrices (or tensors) on each edge (or multibody factor) of the (factor) graph. The computation of the partition function Z naturally translates to contraction of the tensor network defined exactly on the same graph; p. 4, col. 1 - computing single amplitude estimates of a superconducting quantum circuit [45], which can be treated as a graphical model with complex couplings; p. 4, col. 2 - approximate single-amplitude simulation of quantum circuits with any kind of connectivities ... after converting the initial state, the measurement qubit string, and the gates into tensors.).

Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Huang, to include wherein the step of converting the quantum circuit to be simulated into the tensor network that is represented by the undirected graph, comprises: converting, by using a trace operation, an input state, an operation gate and a measurement of a quantum bit in the quantum circuit into tensors, and determining the tensors to be vertexes in the undirected graph; and determining connection relationships among the input state, the operation gate and the measurement of the quantum bit in the quantum circuit to be connected edges between corresponding vertices in the undirected graph for the purpose of efficiently simulating quantum circuits based on contraction applied to undirected graphs and determining amplitudes as quantum output, improving quantum computer modeling, as taught by Pan (2-4).
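The conversion claim 2 recites (input states, gates, and measurements become tensor vertices; shared qubit wires become edges) can be made concrete with a toy example. The restriction to single-qubit gates and all names here are my own simplifications for illustration, not code from Pan:

```python
import numpy as np

def circuit_to_graph(n_qubits, gates, bitstring):
    """Build an undirected tensor-network graph from a toy circuit.

    Vertices: |0> input tensors, gate matrices, and <b| measurement tensors.
    Edges: pairs of vertices that share a wire index on the same qubit.
    `gates` is a list of (2x2 matrix, qubit) single-qubit gates, in order.
    """
    vertices, edges, last = [], [], {}
    for q in range(n_qubits):                     # input-state tensors
        vertices.append((np.array([1.0, 0.0]), q))
        last[q] = len(vertices) - 1
    for mat, q in gates:                          # gate tensors
        vertices.append((mat, q))
        edges.append((last[q], len(vertices) - 1))
        last[q] = len(vertices) - 1
    for q, bit in enumerate(bitstring):           # measurement tensors <0| or <1|
        vertices.append((np.eye(2)[bit], q))
        edges.append((last[q], len(vertices) - 1))
    return vertices, edges

def amplitude(vertices, n_qubits):
    # Contract each qubit's wire, <b_q| G_k ... G_1 |0>, and multiply
    # the per-wire scalars into the measured bitstring's amplitude.
    amp = 1.0
    for q in range(n_qubits):
        chain = [t for t, wire in vertices if wire == q]
        vec = chain[0]
        for t in chain[1:-1]:
            vec = t @ vec
        amp *= chain[-1] @ vec
    return amp
```

For a single Hadamard gate measured in state |0>, the contraction yields the expected amplitude 1/sqrt(2).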
Claim 6: Huang further teaches or suggests wherein the step of respectively performing, on the sub-process nodes, tensor contraction and merging on the plurality of sub-graphs for the connected tensors until only one tensor is left, so as to finally obtain the zero-order tensors of the plurality of sub-graphs at the same time, comprises: respectively performing, by the sub-process nodes, tensor contraction and merging for different nodes in the plurality of sub-graphs in sequence by using the same tensor contraction and merging sequence, consuming the same computing resources within a unit computing time, and enabling the sub-process nodes with the same computing capability to obtain the zero-order tensors of the plurality of sub-graphs at the same time (see Fig. 1, 3; §Abstract - a tensor network-based classical simulation algorithm; §1.1 - parallelizable problem into nearly-identical subtasks that run in parallel; §1.2 - tensor network-based algorithm dynamically decomposes the computational task into many smaller subtasks that can be executed in parallel on a cluster; §3 - We view a quantum circuit as a tensor network. A tensor network is an attributed multihypergraph H = (V;E), where each vertex v is associated to a multi-dimensional array Tv called a tensor, and each hyperedge is associated to a shared index among all the tensors it connects. two vertices in the hypergraph are chosen. The corresponding tensors are contracted according to their shared indices, and the two-vertex subgraph is replaced by a single vertex representing the newly formed tensor. This process is repeated until only one vertex is left. small subset of indices. For each assignment of these indices, the computational subtask is itself a tensor network contraction. where each sub-tensor network corresponds to the full tensor network with the fixed hyperedges removed. Contracting these sub-tensor networks is perfectly parallelizable; §4 Hypergraph partitioning - divides the vertices of a hypergraph into several components so that the size of each component is neither too small nor too big, while minimizing the number of interconnecting hyperedges; §A.1.2 - corresponding subgraph is then replaced by a single vertex preserving all the outgoing connections, with an updated tensor associated to it; §A.4.1 - split the large tensor network contraction task into many smaller tensor network contraction subtasks. This step is called preprocessing. assign different subtasks to different computation nodes.).

Claim 7: Huang further teaches or suggests wherein the step of converting the quantum circuit to be simulated into the tensor network that is represented by the ... graph, and segmenting the ... graph into the plurality of subgraphs ... that is based on the operation resources of the distributed system; and the step of acquiring and superposing the zero-order tensors of the plurality of sub-graphs from the sub-process nodes at the same time, so as to determine the zero-order tensor of the ... graph, and using the zero-order tensor of the ... graph ... so as to perform quantum computing simulation, are all performed on a main process node of the distributed system (see Fig. 1, 3; §Abstract - a tensor network-based classical simulation algorithm; §1.1 - parallelizable problem into nearly-identical subtasks that run in parallel; §1.2 - tensor network-based algorithm dynamically decomposes the computational task into many smaller subtasks that can be executed in parallel on a cluster; §3 - We view a quantum circuit as a tensor network. A tensor network is an attributed multihypergraph H = (V;E), where each vertex v is associated to a multi-dimensional array Tv called a tensor, and each hyperedge is associated to a shared index among all the tensors it connects. two vertices in the hypergraph are chosen.
The corresponding tensors are contracted according to their shared indices, and the two-vertex subgraph is replaced by a single vertex representing the newly formed tensor. This process is repeated until only one vertex is left. small subset of indices. For each assignment of these indices, the computational subtask is itself a tensor network contraction. where each sub-tensor network corresponds to the full tensor network with the fixed hyperedges removed. Contracting these sub-tensor networks is perfectly parallelizable; §4 Hypergraph partitioning - divides the vertices of a hypergraph into several components so that the size of each component is neither too small nor too big, while minimizing the number of interconnecting hyperedges; §A.1.2 - corresponding subgraph is then replaced by a single vertex preserving all the outgoing connections, with an updated tensor associated to it; §A.4.1 - split the large tensor network contraction task into many smaller tensor network contraction subtasks. This step is called preprocessing. assign different subtasks to different computation nodes.).

Pan further teaches or suggests undirected graph; as the probability amplitude of the positive operator value measurement element (see Fig. 1-4; p. 2, col. 1 - contraction, using approaches analogous to the density matrix renormalization group (DMRG) [11], until the final result, a scalar Z, is obtained. We show applications of our method in graphical models, where Z represents the normalization factor of the joint distribution of a large number of random variables (i.e., the partition function in physics), and applications in quantum circuit simulations, where Z represents a single amplitude of the quantum circuit; p. 3, col. 2 - probability distribution over discrete variables is a tensor, thus every graphical model can be converted to a tensor network by introducing copy tensors on each node of the graph and matrices (or tensors) on each edge (or multibody factor) of the (factor) graph. The computation of the partition function Z naturally translates to contraction of the tensor network defined exactly on the same graph; p. 4, col. 1 - computing single amplitude estimates of a superconducting quantum circuit [45], which can be treated as a graphical model with complex couplings; p. 4, col. 2 - approximate single-amplitude simulation of quantum circuits with any kind of connectivities ... after converting the initial state, the measurement qubit string, and the gates into tensors.).

Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Huang, to include undirected graph; as the probability amplitude of the positive operator value measurement element for the purpose of efficiently simulating quantum circuits based on contraction applied to undirected graphs and determining amplitudes as quantum output, improving quantum computer modeling, as taught by Pan (2-4).

Farshbaf further teaches or suggests by using the genetic algorithm (see Abstract - prevent the GA from getting stuck in the local optima and make the search more efficient and increase the probability of finding more optimal solutions; §I – Graph partitioning is used directly in a wide range of problems, such as design allocation problems [1] and job scheduling in multi processor architecture [2]. Graph partitioning involves dividing a set of objects into a specified number of partitions according to the minimization of some optimization criterion additive over the partitions ...
improvements are injecting best solutions of last runs into the first generation of the next runs and also storing the non-dominated set of the previous generations to combine with the last generation's non-dominated set increase the opportunity for better solutions; §II, A - Pareto optimal solution sets are often preferred to single solutions because they can be practical when considering real-life problems.).

Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Huang, to include by using the genetic algorithm for the purpose of efficiently implementing graph partitioning using genetic algorithms, increasing partition solution optimization and practicality, as taught by Farshbaf (I and II).

Claim 16: As indicated above, Huang teaches or suggests the step of respectively performing, on sub-process nodes, tensor contraction and merging on the plurality of sub-graphs for connected tensors.

Pan further teaches or suggests contracting two inner edges of two connected tensors and merging two vertices of the two connected tensors into one, wherein each of the two connected tensors have an inner edge and an open edge (see Fig. 1-4; p. 2, col. 1 - contraction, using approaches analogous to the density matrix renormalization group (DMRG) [11], until the final result, a scalar Z, is obtained. We show applications of our method in graphical models, where Z represents the normalization factor of the joint distribution of a large number of random variables (i.e., the partition function in physics), and applications in quantum circuit simulations, where Z represents a single amplitude of the quantum circuit; p. 2, col. 2 - contract operation is processed by merging two tensors to a single tensor; operations swap, contraction, and merge are repeated until the overall tensor network is finally contracted; p. 3, col. 2 - probability distribution over discrete variables is a tensor, thus every graphical model can be converted to a tensor network by introducing copy tensors on each node of the graph and matrices (or tensors) on each edge (or multibody factor) of the (factor) graph. The computation of the partition function Z naturally translates to contraction of the tensor network defined exactly on the same graph; p. 4, col. 1 - computing single amplitude estimates of a superconducting quantum circuit [45], which can be treated as a graphical model with complex couplings; p. 4, col. 2 - approximate single-amplitude simulation of quantum circuits with any kind of connectivities ... after converting the initial state, the measurement qubit string, and the gates into tensors.).

Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Huang, to include contracting two inner edges of two connected tensors and merging two vertices of the two connected tensors into one, wherein each of the two connected tensors have an inner edge and an open edge for the purpose of efficiently simulating quantum circuits based on contraction applied to undirected graphs and determining amplitudes as quantum output, improving quantum computer modeling, as taught by Pan (2-4).

Claim 17: Huang further teaches or suggests respectively computing the contraction and merging of different sub-graphs on different cores of distributed processor threads, to realize distributed contraction and merging of the tensor network (see Fig. 1, 3; §Abstract - a tensor network-based classical simulation algorithm; §1.1 - parallelizable problem into nearly-identical subtasks that run in parallel; §1.2 - tensor network-based algorithm dynamically decomposes the computational task into many smaller subtasks that can be executed in parallel on a cluster; §3 - We view a quantum circuit as a tensor network.
A tensor network is an attributed multihypergraph H = (V;E), where each vertex v is associated to a multi-dimensional array Tv called a tensor, and each hyperedge is associated to a shared index among all the tensors it connects. two vertices in the hypergraph are chosen. The corresponding tensors are contracted according to their shared indices, and the two-vertex subgraph is replaced by a single vertex representing the newly formed tensor. This process is repeated until only one vertex is left. small subset of indices. For each assignment of these indices, the computational subtask is itself a tensor network contraction. where each sub-tensor network corresponds to the full tensor network with the fixed hyperedges removed. Contracting these sub-tensor networks is perfectly parallelizable; §4 Hypergraph partitioning - divides the vertices of a hypergraph into several components so that the size of each component is neither too small nor too big, while minimizing the number of interconnecting hyperedges; §A.1.2 - corresponding subgraph is then replaced by a single vertex preserving all the outgoing connections, with an updated tensor associated to it; §A.4.1 - split the large tensor network contraction task into many smaller tensor network contraction subtasks. This step is called preprocessing. assign different subtasks to different computation nodes.).

Claim 19: Huang further teaches or suggests wherein the structure of each sub-graph obtained by the contraction and merging of each process is completely consistent (see Fig. 1, 3; §1.1 - parallelizable problem into nearly-identical subtasks that run in parallel; §3 - We view a quantum circuit as a tensor network. A tensor network is an attributed multihypergraph H = (V;E), where each vertex v is associated to a multi-dimensional array Tv called a tensor, and each hyperedge is associated to a shared index among all the tensors it connects. two vertices in the hypergraph are chosen. The corresponding tensors are contracted according to their shared indices, and the two-vertex subgraph is replaced by a single vertex representing the newly formed tensor. This process is repeated until only one vertex is left. small subset of indices. For each assignment of these indices, the computational subtask is itself a tensor network contraction. where each sub-tensor network corresponds to the full tensor network with the fixed hyperedges removed. Contracting these sub-tensor networks is perfectly parallelizable; §4 Hypergraph partitioning - divides the vertices of a hypergraph into several components so that the size of each component is neither too small nor too big, while minimizing the number of interconnecting hyperedges; §A.1.2 - corresponding subgraph is then replaced by a single vertex preserving all the outgoing connections, with an updated tensor associated to it; §A.4.1 - split the large tensor network contraction task into many smaller tensor network contraction subtasks. This step is called preprocessing. assign different subtasks to different computation nodes.).

Pan further teaches or suggests and used computing time of each sub-graph is consistent (see Fig. 1-4; p. 4, col. 1 - computational time on each instance is of a few seconds; p. 4, col. 2 - approximate single-amplitude simulation of quantum circuits with any kind of connectivities ... after converting the initial state, the measurement qubit string, and the gates into tensors.).

Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Huang, to include and used computing time of each sub-graph is consistent for the purpose of efficiently simulating quantum circuits based on contraction applied to undirected graphs and determining amplitudes as quantum output, improving quantum computer modeling, as taught by Pan (2-4).
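The contract-and-merge loop quoted throughout these citations (contract two connected tensors over their shared "inner" indices, replace the pair with a single merged vertex, repeat until one vertex is left) reduces to a few lines of numpy. The index-naming scheme and the assumption of a connected network are illustrative simplifications:

```python
import numpy as np

def contract_pair(t1, ix1, t2, ix2):
    """Contract two tensors along all indices they share (the inner edges),
    keeping the remaining open indices; the result is the merged vertex."""
    shared = [i for i in ix1 if i in ix2]
    axes1 = [ix1.index(i) for i in shared]
    axes2 = [ix2.index(i) for i in shared]
    out = np.tensordot(t1, t2, axes=(axes1, axes2))
    out_ix = [i for i in ix1 if i not in shared] + [i for i in ix2 if i not in shared]
    return out, out_ix

def contract_network(tensors):
    """Repeatedly merge connected tensors until a single vertex remains.

    `tensors` is a list of (array, index-name list) pairs describing a
    connected network. When every index is shared by exactly two tensors,
    the final vertex is a zero-order tensor (a scalar).
    """
    tensors = list(tensors)
    while len(tensors) > 1:
        for a in range(len(tensors)):           # find any connected pair
            for b in range(a + 1, len(tensors)):
                if set(tensors[a][1]) & set(tensors[b][1]):
                    merged = contract_pair(*tensors[a], *tensors[b])
                    tensors = [t for k, t in enumerate(tensors) if k not in (a, b)]
                    tensors.append(merged)      # merged vertex replaces the pair
                    break
            else:
                continue
            break
    return tensors[0]
```

As a sanity check, two matrices sharing both indices contract to the scalar trace of their product, i.e. a zero-order tensor.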
Claim 20: Huang further teaches or suggests wherein the tensor network is formed by connecting different tensors via topology (see Fig. 1, 3; §1.1 - parallelizable problem into nearly-identical subtasks that run in parallel; §3 - We view a quantum circuit as a tensor network. A tensor network is an attributed multihypergraph H = (V;E), where each vertex v is associated to a multi-dimensional array Tv called a tensor, and each hyperedge is associated to a shared index among all the tensors it connects. Two vertices in the hypergraph are chosen. The corresponding tensors are contracted according to their shared indices, and the two-vertex subgraph is replaced by a single vertex representing the newly formed tensor. This process is repeated until only one vertex is left. Small subset of indices: for each assignment of these indices, the computational subtask is itself a tensor network contraction, where each sub-tensor network corresponds to the full tensor network with the fixed hyperedges removed. Contracting these sub-tensor networks is perfectly parallelizable; §4 Hypergraph partitioning - divides the vertices of a hypergraph into several components so that the size of each component is neither too small nor too big, while minimizing the number of interconnecting hyperedges; §A.1.2 - corresponding subgraph is then replaced by a single vertex preserving all the outgoing connections, with an updated tensor associated to it; §A.4.1 - split the large tensor network contraction task into many smaller tensor network contraction subtasks. This step is called preprocessing. Assign different subtasks to different computation nodes.).

Claim 21: Huang further teaches or suggests wherein the method is capable of performing density matrix-based single-amplitude strategy quantum computing simulation on a distributed computing system (see Fig. 1, 3; §1.1 - parallelizable problem into nearly-identical subtasks that run in parallel; §3 - We view a quantum circuit as a tensor network. Aggregated result of the amplitudes can be expressed as an open tensor network. A tensor network is an attributed multihypergraph H = (V;E), where each vertex v is associated to a multi-dimensional array Tv called a tensor, and each hyperedge is associated to a shared index among all the tensors it connects. Two vertices in the hypergraph are chosen. The corresponding tensors are contracted according to their shared indices, and the two-vertex subgraph is replaced by a single vertex representing the newly formed tensor. This process is repeated until only one vertex is left. Small subset of indices: for each assignment of these indices, the computational subtask is itself a tensor network contraction, where each sub-tensor network corresponds to the full tensor network with the fixed hyperedges removed. Contracting these sub-tensor networks is perfectly parallelizable; §4 Hypergraph partitioning - divides the vertices of a hypergraph into several components so that the size of each component is neither too small nor too big, while minimizing the number of interconnecting hyperedges; §5 - calculating 64 amplitudes can be made as efficient as calculating a single amplitude; §A.1.2 - corresponding subgraph is then replaced by a single vertex preserving all the outgoing connections, with an updated tensor associated to it; §A.4.1 - split the large tensor network contraction task into many smaller tensor network contraction subtasks. This step is called preprocessing. Assign different subtasks to different computation nodes.).

Claim(s) 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Huang, in view of Farshbaf, in view of Pan, and further in view of Shimizu et al., US Publication 2014/0047421 ("Shimizu").
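As an aside, the pairwise contraction loop quoted from Huang (pick two connected tensors, contract over their shared index, replace the pair with the newly formed tensor, repeat until one tensor remains) can be sketched in a few lines of NumPy. The three-tensor chain, the shapes, and the einsum subscripts below are illustrative assumptions, not taken from the reference:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed chain network: A[i,j] -- B[j,k] -- C[k,l]; the shared indices
# j and k play the role of the hyperedges connecting the tensors.
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 2))

def contract_chain(tensors):
    """Repeatedly contract the first two tensors over their shared index,
    replacing the pair with the newly formed tensor, until one remains."""
    tensors = list(tensors)
    while len(tensors) > 1:
        merged = np.einsum("ij,jk->ik", tensors[0], tensors[1])
        tensors = [merged] + tensors[2:]
    return tensors[0]

result = contract_chain([A, B, C])
# For a chain, pairwise contraction reproduces the full matrix product.
assert np.allclose(result, A @ B @ C)
```

For this simple chain the contraction is just repeated matrix multiplication; the hypergraph formulation in Huang generalizes the same replace-two-vertices-with-one step to tensors sharing arbitrary sets of indices.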
Claim 18: Shimizu further teaches or suggests, in response to the number of generated sub-graphs being not greater than the number of computing cores of distributed processor threads, performing contraction and merging of different sub-graphs by using different cores; and, in response to the number of sub-graphs being greater than the number of computing cores of distributed processor threads, performing contraction and merging of different sub-graphs in a serial manner (see para. 0017 - graphical model including blocks as nodes and dependence as a link by processing performed by a computer including a plurality of processors, the program product solving a graph; merging the upstream segments and merging the downstream segments so as to reduce overlapping such that a number of the upstream segments and the downstream segments is reduced to at or below a predetermined number of parallel executions; compiling each of the merged segments and converting it into executable code; and individually assigning the executable code for the segments to the plurality of processors and causing the plurality of processors to execute their respective executable code in parallel; para. 0034 - each of the segments is merged so as to reduce the number of segments to the number of parallel executions and as the number of blocks shared among different segments is reduced; para. 0041 - merging the segments sharing many blocks reduces the overlaps between the segments and leads to higher speed of the resulting simulation. Here, the number of parallel executions may typically be the number of usable cores or processors; para. 0054 - graph form in which blocks having functions are nodes and dependence between the blocks is a link.).
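The Claim 18 dispatch condition can be sketched as follows: run one sub-graph per core in parallel when the sub-graph count does not exceed the core count, otherwise fall back to a serial pass. The `contract_and_merge` stand-in (a simple sum) and the use of a thread pool are assumptions for illustration only; the claim does not define the contraction internals:

```python
from concurrent.futures import ThreadPoolExecutor

def contract_and_merge(subgraph):
    # Hypothetical stand-in for contracting/merging one sub-graph;
    # here it just sums the sub-graph's values.
    return sum(subgraph)

def dispatch(subgraphs, num_cores):
    if len(subgraphs) <= num_cores:
        # Sub-graph count does not exceed core count:
        # contract each sub-graph on its own worker in parallel.
        with ThreadPoolExecutor(max_workers=num_cores) as pool:
            return list(pool.map(contract_and_merge, subgraphs))
    # More sub-graphs than cores: contract them in a serial manner.
    return [contract_and_merge(g) for g in subgraphs]

subgraphs = [[1, 2], [3, 4], [5, 6]]
print(dispatch(subgraphs, num_cores=4))  # parallel branch -> [3, 7, 11]
print(dispatch(subgraphs, num_cores=2))  # serial branch   -> [3, 7, 11]
```

Both branches produce the same results; the claim's condition only selects the execution strategy, which is also the role the cited Shimizu passages (paras. 0017, 0041) assign to the "number of parallel executions."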
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method taught in Huang to perform contraction and merging of different sub-graphs using different cores in response to the number of generated sub-graphs being not greater than the number of computing cores of distributed processor threads, and to perform contraction and merging of different sub-graphs in a serial manner in response to the number of sub-graphs being greater than the number of computing cores, for the purpose of efficiently preprocessing content for parallel processing based on the nature of the content and the number of usable cores or processors, as taught by Shimizu (paras. 0017 and 0041).

Allowable Subject Matter

Claims 3, 4, and 9-15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Andrew T. McIntosh, whose telephone number is (571) 270-7790. The examiner can normally be reached M-Th 8:00am-5:30pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tamara Kyle, can be reached at 571-272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW T MCINTOSH/
Primary Examiner, Art Unit 2144

Prosecution Timeline

Dec 22, 2022
Application Filed
Feb 27, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602534
Method and System to Display Content from a PDF Document on a Small Screen
2y 5m to grant Granted Apr 14, 2026
Patent 12596757
NATIVE INTEGRATION OF ARBITRARY DATA SOURCES
2y 5m to grant Granted Apr 07, 2026
Patent 12572617
SYSTEM AND METHOD FOR THE GENERATION AND EDITING OF TEXT CONTENT IN WEBSITE BUILDING SYSTEMS
2y 5m to grant Granted Mar 10, 2026
Patent 12561191
TRAINING METHOD AND APPARATUS FOR FAULT RECOGNITION MODEL, FAULT RECOGNITION METHOD AND APPARATUS, AND ELECTRONIC DEVICE
2y 5m to grant Granted Feb 24, 2026
Patent 12547874
DEPLOYING PARALLELIZABLE DEEP LEARNING MODELS BY ADAPTING TO THE COMPUTING DEVICES
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
95%
With Interview (+18.0%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 511 resolved cases by this examiner. Grant probability derived from career allow rate.
