Prosecution Insights
Last updated: April 19, 2026
Application No. 18/320,531

MACHINE LEARNING FOR SOLVING QUANTUM ANNEALING HARDWARE MINOR EMBEDDING PROBLEMS

Office Action: Non-Final, §103
Filed: May 19, 2023
Examiner: WU, NICHOLAS S
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: DELL PRODUCTS, L.P.
OA Round: 1 (Non-Final)

Grant Probability: 47% (Moderate)
Expected OA Rounds: 1-2
Expected Time to Grant: 3y 9m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 47% (18 granted / 38 resolved; -7.6% vs TC avg)
Interview Lift: +43.1% for resolved cases with interview
Typical Timeline: 3y 9m average prosecution; 44 currently pending
Career History: 82 total applications across all art units

Statute-Specific Performance

§101: 26.7% (-13.3% vs TC avg)
§103: 52.6% (+12.6% vs TC avg)
§102: 3.1% (-36.9% vs TC avg)
§112: 17.4% (-22.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 38 resolved cases.
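The examiner statistics above are simple ratios over resolved cases. A minimal sketch of the arithmetic, assuming the counts shown in the panel (18 granted of 38 resolved); the Tech Center average here is back-derived from the displayed -7.6% delta, not an independent figure:

```python
# Sketch of how the career allow-rate figures above are derived.
# The counts (18 granted / 38 resolved) come from the panel above;
# the Tech Center average is inferred from the "-7.6% vs TC avg" delta.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

rate = allow_rate(18, 38)          # ~47.4%, displayed rounded as 47%
tc_average = rate + 7.6            # implied by the displayed delta
delta_vs_tc = rate - tc_average    # -7.6 percentage points

print(f"Allow rate: {rate:.1f}%  (delta vs TC avg: {delta_vs_tc:+.1f} pts)")
```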

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:

“a prediction engine component that determines…” in claim 1;
“machine learning model is trained via a model training component…” in claim 1;
“an embedding component that enables…” in claim 2;
“a solution space reduction component that prunes…” in claim 8; and
“a physical graph construction component that determines…” in claim 8.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Dudash, et al., US Pre-Grant Publication US20230325461A1 (“Dudash”) in view of Yan, et al., Non-Patent Literature “Learning for Graph Matching and Related Combinatorial Optimization Problems” (“Yan”).

Regarding claim 1, Dudash discloses:

A system, comprising: a memory that stores executable components; and a processor that executes the executable components stored in the memory, wherein the executable components comprise: (Dudash, ⁋111, “In some embodiments, a system for configuring a quantum annealer to solve a QUBO problem may comprise a computer [A system, comprising: a memory that stores executable components; and a processor that executes the executable components stored in the memory, wherein the executable components comprise:].”). 
a prediction engine component that determines,…associated with a quantum hardware device and based on a logical graph corresponding to logical qubits and logical connections between the logical qubits, a mapping from the logical graph to a physical graph, (Dudash, ⁋87, “Candidate solution generator 504 [a prediction engine component that determines,] may be configured to generate, based on an undirected graph representing a QUBO problem and an undirected graph representing the physical qubit architecture of quantum annealer 514 [,…associated with a quantum hardware device], an initial minor embedding solution [a mapping from the logical graph to a physical graph,].”, and Dudash, ⁋75, “In some embodiments, the undirected graph generated to represent the QUBO problem may be a mathematical representation of a configuration of logical qubits used to solve the QUBO problem. Specifically, the vertices of the undirected graphs may represent the logical qubits while the edges may represent coupling between logical qubits [and based on a logical graph corresponding to logical qubits and logical connections between the logical qubits,].”).

the physical graph corresponding to physical qubits of the quantum hardware device and physical connections between the physical qubits, (Dudash, ⁋78, “Undirected graphs may also be used to represent the physical qubit architecture of the quantum annealer. The vertices of an undirected graph representing a physical qubit architecture may represent physical qubits and the edges may represent coupling between physical qubits [the physical graph corresponding to physical qubits of the quantum hardware device and physical connections between the physical qubits,].”).

…using minor embedding data, and wherein the minor embedding data is representative of logical to physical qubit mappings previously used by the quantum hardware device. (Dudash, ⁋91, “each thread block 510 may be configured to generate a new minor embedding solution based on the initial minor embedding solution […using minor embedding data, and wherein the minor embedding data is representative of logical to physical qubit mappings previously used by the quantum hardware device.] that was generated by candidate solution generator 504.”).

While Dudash teaches a system that finds minor embeddings between a logical graph and a physical graph, Dudash does not explicitly teach:

…via a machine learning model…
…wherein the machine learning model is trained via a model training component…

Yan teaches:

…via a machine learning model… (Yan, pg. 1 col. 2, “In contrast, learning based methods are expected to be free from the experts by training (with generic approximation) often in an end-to-end fashion, which allows to model real world problems in a flexible way. For instance, many combinatorial problems are based on graph structure [Bengio et al., 2018], which can be readily modeled by existing graph embedding or network embedding techniques, which embed the graph information into continuous node representation […via a machine learning model…].”).

…wherein the machine learning model is trained via a model training component… (Yan, pg. 1 col. 2, “In contrast, learning based methods are expected to be free from the experts by training (with generic approximation) often in an end-to-end fashion, which allows to model real world problems in a flexible way. For instance, many combinatorial problems are based on graph structure [Bengio et al., 2018], which can be readily modeled by existing graph embedding or network embedding techniques, which embed the graph information into continuous node representation […wherein the machine learning model is trained via a model training component…].”).

Dudash and Yan are both in the same field of endeavor (i.e. combinatorial optimization problem). 
It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Dudash and Yan to teach the above limitation(s). The motivation for doing so is that a machine learning model can cost effectively solve combinatorial problems (cf. Yan, pg. 1 col. 2, “learned models (or in other forms e.g. meta policy, meta rewards) are known can be transferred to relevant tasks in different ways, hence a trained combinatorial solver or a meta solver may be adapted to new tasks…In general, a specific solver could be more cost-effective than a general-purpose method, and ML can provide a generic way of building such solvers with training data rather than human knowledge.”).

Regarding claim 2, Dudash in view of Yan teaches the system of claim 1. Dudash further teaches wherein the executable components further comprise:

an embedding component that enables execution of the physical graph via a quantum annealer associated with the quantum hardware device, resulting in the physical qubits of the quantum hardware device being configured according to the physical graph. (Dudash, ⁋78, “Undirected graphs may also be used to represent the physical qubit architecture of the quantum annealer. The vertices of an undirected graph representing a physical qubit architecture may represent physical qubits and the edges may represent coupling between physical qubits [resulting in the physical qubits of the quantum hardware device being configured according to the physical graph.]. As shown in step 108 of method 100, the second step of the minor embedding process may involve embedding the undirected graph representing the QUBO problem as a “minor” of an undirected graph representing the physical qubit architecture of the quantum annealer [an embedding component that enables execution of the physical graph via a quantum annealer associated with the quantum hardware device,].”). 
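The claim 1 and claim 2 analysis above reduces minor embedding to graph terms: the QUBO problem is an undirected logical graph (vertices are logical qubits, edges are couplings), the hardware is an undirected physical graph, and a mapping assigns each logical qubit to a connected chain of physical qubits such that coupled logical qubits get adjacent chains. A minimal sketch of checking whether a candidate mapping is a valid minor embedding, using plain adjacency sets; the toy graphs and the mapping are illustrative assumptions, not taken from the cited references:

```python
# Illustrative validity check for a candidate logical-to-physical mapping:
# each logical qubit maps to a connected "chain" of physical qubits, chains
# are pairwise disjoint, and every logical edge is realized by at least one
# physical edge between the two corresponding chains.

def is_valid_embedding(logical_edges, physical_edges, chains):
    phys_adj = {}
    for u, v in physical_edges:
        phys_adj.setdefault(u, set()).add(v)
        phys_adj.setdefault(v, set()).add(u)

    # Chains must be nonempty and pairwise disjoint.
    used = [q for chain in chains.values() for q in chain]
    if len(used) != len(set(used)) or any(not c for c in chains.values()):
        return False

    # Each chain must induce a connected subgraph of the physical graph (BFS).
    for chain in chains.values():
        chain = set(chain)
        start = next(iter(chain))
        seen, frontier = {start}, [start]
        while frontier:
            q = frontier.pop()
            for nbr in phys_adj.get(q, set()) & chain:
                if nbr not in seen:
                    seen.add(nbr)
                    frontier.append(nbr)
        if seen != chain:
            return False

    # Every logical edge needs a physical edge between the two chains.
    for a, b in logical_edges:
        if not any(p in phys_adj.get(q, set())
                   for q in chains[a] for p in chains[b]):
            return False
    return True

# Toy example (assumed for illustration): embed a triangle K3 into a
# 4-node cycle with one diagonal; logical qubit "b" needs a 2-qubit chain.
logical = [("a", "b"), ("b", "c"), ("a", "c")]
physical = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
chains = {"a": [0], "b": [1, 2], "c": [3]}
print(is_valid_embedding(logical, physical, chains))  # True
```

The chain for “b” is what the quoted references call embedding the problem graph as a “minor” of the hardware graph: two physical qubits stand in for one logical qubit whose required connectivity exceeds the hardware degree.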
Regarding claim 3, Dudash in view of Yan teaches the system of claim 2. Dudash further teaches wherein the system is communicatively coupled to the quantum hardware device via a communication network, and wherein the embedding component transfers the physical graph to the quantum annealer via the communication network. (Dudash, ⁋111, “In some embodiments, a system for configuring a quantum annealer [and wherein the embedding component transfers the physical graph to the quantum annealer] to solve a QUBO problem may comprise a computer [wherein the system is communicatively coupled to the quantum hardware device].”, and Dudash, ⁋117, “Computer 800 may be connected to a network, which can be any suitable type of interconnected communication system. The network can implement any suitable communications [via a communication network,]”).

Regarding claim 4, Dudash in view of Yan teaches the system of claim 2. Dudash further teaches wherein the quantum hardware device comprises the system, and wherein the embedding component executes the physical graph on the quantum annealer. (Dudash, ⁋86, “Specifically, FIG. 5 shows a system 500 comprising a quantum annealer 514 and a subsystem 502 configured to find minor embedding solutions…Once this optimized embedding is generated, subsystem 502 may transmit information related to the optimal embedding to quantum annealer 514. Quantum annealer 514 may then be configured according to the optimized embedding [and wherein the embedding component executes the physical graph on the quantum]”, and Dudash, ⁋87, “subsystem 502 may include a computer system [wherein the quantum hardware device comprises the system,].”).

Regarding claim 6, Dudash in view of Yan teaches the system of claim 1. Yan teaches the machine learning model as seen in claim 1. Dudash further teaches comprises parameters associated with a hardware topology of the quantum hardware device. 
(Dudash, ⁋78, “Undirected graphs may also be used to represent the physical qubit architecture of the quantum annealer. The vertices of an undirected graph representing a physical qubit architecture may represent physical qubits and the edges may represent coupling between physical qubits [comprises parameters associated with a hardware topology of the quantum hardware device.].”).

Regarding claim 7, Dudash in view of Yan teaches the system of claim 1. Dudash further teaches wherein the prediction engine component determines probability values, representative of likelihoods of respective ones of the physical qubits and the physical connections being in the physical graph. (Dudash, ⁋36, “computing a probability of the candidate local graph replacing the best local current graph based on the evaluation rating of the candidate local graph and based on the evaluation rating of the best local current graph; and determining whether the replacement criteria are met based on the computed probability [wherein the prediction engine component determines probability values, representative of likelihoods of respective ones of the physical qubits and the physical connections being in the physical graph.].”).

Regarding claim 8, Dudash in view of Yan teaches the system of claim 7. Dudash further teaches:

a solution space reduction component that prunes a solution space represented by the physical qubits based on the probability values, resulting in a pruned solution space comprising candidate physical qubits of the physical qubits, the candidate physical qubits comprising less than all of the physical qubits; (Dudash, ⁋36, “computing a probability of the candidate local graph replacing the best local current graph [based on the probability values,] based on the evaluation rating of the candidate local graph and based on the evaluation rating of the best local current graph; and determining whether the replacement criteria are met based on the computed probability.”, and Dudash, ⁋30, “In some embodiments, modifying the best local current graph copy to form the candidate local graph comprises: selecting a set of existing placements of logical qubits forming a subgraph [a solution space reduction component that prunes a solution space represented by the physical qubits] in the physical qubit architecture of the quantum annealer; and mapping the set of existing placements of the logical qubits to a new set of placements representing one or more vacant physical qubits of the quantum annealer; using a subgraph of an original set of physical qubits is interpreted as pruning (i.e. resulting in a pruned solution space comprising candidate physical qubits of the physical qubits, the candidate physical qubits comprising less than all of the physical qubits;).”).

and a physical graph construction component that determines the mapping from the logical graph to the physical graph using the pruned solution space. (Dudash, abstract, “replacing the best local current graph with the candidate local graph. An updated best local current graph may be identified in a local results array as the best global graph. The quantum annealer may be configured based on the best local graph [and a physical graph construction component that determines the mapping from the logical graph to the physical graph using the pruned solution space.].”).

Regarding claim 9, Dudash in view of Yan teaches the system of claim 1. Yan teaches the model training component as seen in claim 1. Dudash further teaches wherein the prediction engine component provides,…the mapping from the logical graph to the physical graph, and wherein…supplements the minor embedding data with the mapping. (Dudash, ⁋87, “Candidate solution generator 504 [wherein the prediction engine component provides,] may be configured to generate, based on an undirected graph representing a QUBO problem and an undirected graph representing the physical qubit architecture of quantum annealer 514, an initial minor embedding solution [the mapping from the logical graph to the physical graph, and wherein…supplements the minor embedding data with the mapping.].”).

Regarding claim 10, Dudash in view of Yan teaches the system of claim 1. Yan further teaches wherein the machine learning model comprises a graph neural network. (Yan, pg. 3 col. 2, “This form is more general than graph isomorphism or subgraph isomorphism, with wide connection to other problems. As shown in Table 1, learning GM often involves i) learning to extract more tailored node features (by e.g. CNN); ii) geometric features of graph (by e.g. GNN) [wherein the machine learning model comprises a graph neural network.]”).

Dudash and Yan are both in the same field of endeavor (i.e. combinatorial optimization problem). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Dudash and Yan to teach the above limitation(s). The motivation for doing so is that GNNs improve graph node and graph embeddings for analysis (cf. Yan, pg. 6 col. 
2, “To solve this issue, graph neural network (GNN) has become a important way to help embed node structure information into graph node representation, especially for graphs of moderate size.”).

Regarding claims 11-14, the claims are similar to claims 1-4 and are rejected under the same rationales.

Regarding claim 16, Dudash in view of Yan teaches the method of claim 11. Dudash further teaches further comprising: supplementing, by the system, the training data with the mapping data. (Dudash, ⁋91, “each thread block 510 may be configured to generate a new minor embedding solution based on the initial minor embedding solution that was generated by candidate solution generator 504 [further comprising: supplementing, by the system, the training data with the mapping data.].”).

Regarding claim 17, Dudash discloses:

A non-transitory machine-readable medium comprising computer executable instructions that, when executed by a processor, facilitate performance of operations, the operations comprising: (Dudash, ⁋111, “In some embodiments, a system for configuring a quantum annealer to solve a QUBO problem may comprise a computer [A non-transitory machine-readable medium comprising computer executable instructions that, when executed by a processor, facilitate performance of operations, the operations comprising:].”).

receiving problem graph data representative of a problem graph comprising logical qubits and logical connections between the logical qubits; (Dudash, ⁋75, “In some embodiments, the undirected graph generated to represent the QUBO problem [receiving problem graph data representative of a problem graph] may be a mathematical representation of a configuration of logical qubits used to solve the QUBO problem. Specifically, the vertices of the undirected graphs may represent the logical qubits while the edges may represent coupling between logical qubits [comprising logical qubits and logical connections between the logical qubits;].”).

and determining,…for a quantum computing device, mapping data representative of a mapping from the problem graph to a physical qubit graph, (Dudash, ⁋87, “Candidate solution generator 504 may be configured to generate, based on an undirected graph representing a QUBO problem and an undirected graph representing the physical qubit architecture of quantum annealer 514, an initial minor embedding solution [and determining,…for a quantum computing device, mapping data representative of a mapping from the problem graph to a physical qubit graph,].”).

…using minor embedding data representative of logical to physical qubit mappings previously used by the quantum computing device, (Dudash, ⁋91, “each thread block 510 may be configured to generate a new minor embedding solution based on the initial minor embedding solution […using minor embedding data representative of logical to physical qubit mappings previously used by the quantum computing device,] that was generated by candidate solution generator 504.”).

and wherein the physical qubit graph is representative of physical qubits of the quantum computing device and physical connections between the physical qubits. (Dudash, ⁋78, “Undirected graphs may also be used to represent the physical qubit architecture of the quantum annealer. The vertices of an undirected graph representing a physical qubit architecture may represent physical qubits and the edges may represent coupling between physical qubits [and wherein the physical qubit graph is representative of physical qubits of the quantum computing device and physical connections between the physical qubits.].”).

While Dudash teaches a system that finds minor embeddings between a logical graph and a physical graph, Dudash does not explicitly teach:

…via a neural network generated…
…wherein the neural network is trained…

Yan teaches:

…via a neural network generated… (Yan, abstract, “For graph matching, we show that many learning techniques e.g. convolutional neural networks, graph neural networks, reinforcement learning can be effectively incorporated in the paradigm for extracting the node features, graph structure features, and even the matching engine […via a neural network generated…].”).

…wherein the neural network is trained… (Yan, abstract, “For graph matching, we show that many learning techniques e.g. convolutional neural networks, graph neural networks, reinforcement learning can be effectively incorporated in the paradigm for extracting the node features, graph structure features, and even the matching engine […wherein the neural network is trained…].”).

Dudash and Yan are both in the same field of endeavor (i.e. combinatorial optimization problem). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Dudash and Yan to teach the above limitation(s). The motivation for doing so is that a neural network model can cost effectively solve combinatorial problems (cf. Yan, pg. 1 col. 2, “learned models (or in other forms e.g. meta policy, meta rewards) are known can be transferred to relevant tasks in different ways, hence a trained combinatorial solver or a meta solver may be adapted to new tasks…In general, a specific solver could be more cost-effective than a general-purpose method, and ML can provide a generic way of building such solvers with training data rather than human knowledge.”).

Regarding claims 18-20, the claims are similar to claims 2-4 and are rejected under the same rationales.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Dudash, et al., US Pre-Grant Publication US20230325461A1 (“Dudash”) in view of Yan, et al., Non-Patent Literature “Learning for Graph Matching and Related Combinatorial Optimization Problems” (“Yan”) and further in view of Gong, et al., US Pre-Grant Publication US20230206108A1 (“Gong”).

Regarding claim 5, Dudash in view of Yan teaches the system of claim 1. 
Yan further teaches wherein the machine learning model is a first machine learning model, (Yan, pg. 1 col. 2, “In contrast, learning based methods are expected to be free from the experts by training (with generic approximation) often in an end-to-end fashion, which allows to model real world problems in a flexible way. For instance, many combinatorial problems are based on graph structure [Bengio et al., 2018], which can be readily modeled by existing graph embedding or network embedding techniques, which embed the graph information into continuous node representation [wherein the machine learning model is a first machine learning model,].”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Yan with the teachings of Dudash for the same reasons disclosed in claim 1.

Dudash further teaches wherein the minor embedding data is first minor embedding data, wherein the quantum hardware device is a first quantum hardware device, (Dudash, ⁋87, “Candidate solution generator 504 may be configured to generate, based on an undirected graph representing a QUBO problem and an undirected graph representing the physical qubit architecture of quantum annealer 514 [wherein the quantum hardware device is a first quantum hardware device,], an initial minor embedding solution [wherein the minor embedding data is first minor embedding data,].”).

While Dudash in view of Yan teaches a system that finds minor embeddings between a logical graph and a physical graph using machine learning, Dudash does not explicitly teach and wherein the model training component further trains a second machine learning model associated with a second quantum hardware device using second minor embedding data associated with the second quantum hardware device, the first quantum hardware device being different from the second quantum hardware device.

Gong teaches and wherein the model training component further trains a second machine learning model associated with a second quantum hardware device using second minor embedding data associated with the second quantum hardware device, the first quantum hardware device being different from the second quantum hardware device. (Gong, ⁋82, “The method 800 includes a step 840 to perform the second machine learning process to train one or more second instances of the quantum computer model [and wherein the model training component further trains a second machine learning model associated with a second quantum hardware device]. For example, the one or more second instances of the quantum computer model may include the model 10B or 10C (or the models 10B and 10C collectively) of FIGS. 2-3 discussed above. Individual features of the selected subset of the feature groups are used as inputs for the one or more second instances of the quantum computer model [using second minor embedding data associated with the second quantum hardware device,]. For example, the individual features of the feature groups 60-61 may be used as inputs for the models 10B and/or 10C; multiple instances of a quantum computer are interpreted as different quantum devices (i.e. the first quantum hardware device being different from the second quantum hardware device.).”).

Dudash, in view of Yan, and Gong are in the same field of endeavor (i.e. quantum computing). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Dudash, in view of Yan, and Gong to teach the above limitation(s). The motivation for doing so is that separating machine learning models for different quantum devices improves efficiency (cf. Gong, ⁋90, “For example, existing quantum computer systems, although powerful, may still be limited in the number of qubits that can be received as the input. 
This is problematic in a real world machine learning context, where the number of trainable features far exceed the number of qubits of the existing quantum computer system.”). Allowable Subject Matter Claim 15, is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for indication of allowable subject matter: Regarding claim 15, Below are the closest cited references, each of which disclose various aspects of the claimed invention: Boothby, et al., US20220391744A1 discloses a hybrid system that finds the minor embedding between a source qubit graph and a target quantum hardware graph using node weighted distances. However, even though Boothby teaches a minor embedding algorithm, Boothby does not explicitly teach a minor embedding machine learning process and thus does not teach the specific loss function as claimed in claim 15. Zbinden, et al., “Embedding Algorithms for Quantum Annealers with Chimera and Pegasus Connection Topologies” discloses two systems based on the D-Wave MinorMiner miner embedding algorithm with both systems taking a problem graph of a QUBO problem and produces a minor embedding of the input graph on a host graph that models the topology of a quantum device. However, even though Zbinden teaches a minor embedding algorithm, Zbinden does not explicitly teach a minor embedding machine learning process and thus does not teach the specific loss function as claimed in claim 15. Sugie, et al., “Minor-embedding heuristics for large-scale annealing processors with sparse hardware graphs of up to 102,400 nodes” discloses a system that develops a new minor embedding algorithm that is scalable in nature and effectively finds minor embeddings for logical graphs that have a large number of qubits or nodes. 
However, even though Sugie teaches a minor embedding algorithm, Sugie does not explicitly teach a minor embedding machine learning process and thus does not teach the specific loss function as claimed in claim 15. While the above prior arts disclose the aforementioned concepts, however, none of the prior arts, individually or in reasonable combination, discloses all the limitations in the manner recited in claim 15. Specifically, the claim requires a machine learning model that has a loss function that correlates physical qubit graph and logical qubit graph and considers the total number of physical qubits. While the references cited above mention aspects of minor embedding for quantum annealers, they do not recite the specific loss function in claim 15. Therefore, claim 15 is allowable over the prior art. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS S WU whose telephone number is (571)270-0939. The examiner can normally be reached Monday - Friday 8:00 am - 4:00 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold can be reached at 571-431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /N.S.W./Examiner, Art Unit 2148 /MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148
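The allowable subject matter turns on a loss function that correlates the physical qubit graph with the logical qubit graph while accounting for the total number of physical qubits. A minimal sketch of what such a loss could look like is below. This is an illustration only, not the loss function actually claimed in claim 15: the function name, the penalty terms, and the weights `alpha` and `beta` are all hypothetical, and a full minor-embedding validity check (each chain inducing a connected subgraph of the hardware graph) is omitted for brevity.

```python
# Hypothetical minor-embedding loss. An embedding maps each logical qubit
# to a "chain" (set) of physical qubits. The loss penalizes (a) logical
# edges not realized by any physical edge between the two chains, and
# (b) the total number of physical qubits consumed.

def embedding_loss(logical_edges, physical_edges, embedding,
                   alpha=1.0, beta=0.1):
    """logical_edges / physical_edges: iterables of (u, v) pairs.
    embedding: dict mapping each logical node to a set of physical qubits."""
    phys = {tuple(sorted(e)) for e in physical_edges}

    # Count logical edges with no supporting physical edge between chains.
    broken = 0
    for u, v in logical_edges:
        chain_u, chain_v = embedding[u], embedding[v]
        realized = any(tuple(sorted((p, q))) in phys
                       for p in chain_u for q in chain_v)
        broken += not realized

    # Total physical qubits used across all chains.
    total_physical = sum(len(chain) for chain in embedding.values())
    return alpha * broken + beta * total_physical

# Example: embed the triangle K3 into a 4-qubit cycle. Logical node 2 is
# mapped to the chain {2, 3} so every logical edge has a physical edge.
K3 = [(0, 1), (1, 2), (0, 2)]
cycle4 = [(0, 1), (1, 2), (2, 3), (0, 3)]
emb = {0: {0}, 1: {1}, 2: {2, 3}}
loss = embedding_loss(K3, cycle4, emb)  # no broken edges; 4 qubits used
```

In a learned setting, a differentiable relaxation of terms like `broken` and `total_physical` would serve as the training objective; the hard-count version above only conveys the structure the Office Action describes.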

Prosecution Timeline

May 19, 2023
Application Filed
Mar 05, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications with similar technology granted by this examiner

Patent 12488244
APPARATUS AND METHOD FOR DATA GENERATION FOR USER ENGAGEMENT
2y 5m to grant Granted Dec 02, 2025
Patent 12423576
METHOD AND APPARATUS FOR UPDATING PARAMETER OF MULTI-TASK MODEL, AND STORAGE MEDIUM
2y 5m to grant Granted Sep 23, 2025
Patent 12361280
METHOD AND DEVICE FOR TRAINING A MACHINE LEARNING ROUTINE FOR CONTROLLING A TECHNICAL SYSTEM
2y 5m to grant Granted Jul 15, 2025
Patent 12354017
ALIGNING KNOWLEDGE GRAPHS USING SUBGRAPH TYPING
2y 5m to grant Granted Jul 08, 2025
Patent 12333425
HYBRID GRAPH NEURAL NETWORK
2y 5m to grant Granted Jun 17, 2025
Based on this examiner's 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
47%
Grant Probability
90%
With Interview (+43.1%)
3y 9m
Median Time to Grant
Low
PTA Risk
Based on 38 resolved cases by this examiner. Grant probability derived from career allow rate.
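The projection figures above are mutually consistent under a simple assumption: that the interview lift is added, in percentage points, on top of the career allow rate (the tool's actual model is not disclosed, so this reconstruction is a guess). A quick check:

```python
# Hypothetical reconstruction of the projection arithmetic. Assumes the
# +43.1-point interview lift is additive on the baseline grant probability,
# capped at 100%. Function name and model are illustrative, not the tool's.

def with_interview(base_pct, lift_pts):
    """Baseline grant probability plus interview lift, in percentage
    points, capped at 100%."""
    return min(base_pct + lift_pts, 100.0)

baseline = round(18 / 38 * 100)   # career allow rate: 18 of 38 resolved -> 47
adjusted = with_interview(baseline, 43.1)
print(round(adjusted))  # 90, matching the "With Interview" projection
```

The additive-points assumption reproduces both headline numbers (47% and 90%), but other models (e.g., a multiplicative odds lift) could fit as well.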
