Prosecution Insights
Last updated: April 19, 2026
Application No. 18/214,101

NODE FUSION METHOD FOR COMPUTATIONAL GRAPH AND DEVICE

Status: Non-Final OA (§103)
Filed: Jun 26, 2023
Examiner: ABRISHAMKAR, KAVEH
Art Unit: 2494
Tech Center: 2400 — Computer Networks
Assignee: Huawei Technologies Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 78%, above average (797 granted / 1020 resolved; +20.1% vs TC avg)
Interview Lift: +16.9%, strong (based on resolved cases with interview)
Typical Timeline: 3y 3m avg prosecution; 27 applications currently pending
Total Applications: 1047 across all art units

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§103: 39.7% (-0.3% vs TC avg)
§102: 22.4% (-17.6% vs TC avg)
§112: 9.6% (-30.4% vs TC avg)
TC averages are estimates; based on career data from 1020 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

1. This action is in response to the communication filed on June 26, 2023. Claims 1-19 were originally received for consideration. No preliminary amendments to the claims have been received.

2. Claims 1-19 are currently pending consideration.

Information Disclosure Statement

3. Initialed and dated copies of Applicant's IDS (form 1449), received on 10/27/2023 and 12/26/2024, are attached to this Office Action.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

4. Claims 1-19 are rejected under 35 U.S.C. 103 as being unpatentable over Ding et al. ("IOS: Inter-Operator Scheduler for CNN Acceleration") in view of Fang et al. ("Optimizing DNN Computation Graph using Graph Substitutions").
Regarding claim 1, Ding discloses: A node fusion method for a computational graph (page 4, Figure 3) (page 7, section 5), comprising: converting a first neural network into a first computational graph (page 4, Figure 3(1)); extracting one or more parallelizable branch groups from the first computational graph, wherein the parallelizable branch group indicates that a plurality of sub-branches belonging to a parallelizable branch group support parallel execution, the parallelizable branch group comprises a first parallelizable branch group (Figure 3(1), Figure 3(2), page 4: one parallelizable branch group is extracted from the exemplary first computational graph; the first parallelizable branch group comprises two sub-branches including a first sub-branch and a second sub-branch, wherein the first sub-branch starts at the input node (Figure 3(1)) and goes through the nodes Conv[a] and Conv[b] and ends at node Conv[d]; the second sub-branch starts at the input node and goes through the node Conv[b] and ends at the node Matmul[e]), and the first parallelizable branch group meets at least one of the following conditions: input for all sub-branches in the first parallelizable branch group comes from a same node and output of at least two sub-branches in the first parallelizable branch group is directed to different nodes, output of all sub-branches in the first parallelizable branch group is directed to a same node and input for at least two sub-branches in the first parallelizable branch group comes from different nodes, none of 1st nodes of sub-branches in the first parallelizable branch group has a parent node, or none of last nodes of sub-branches in the first parallelizable branch group has a child node (Figure 3, page 4: the two sub-branches meet the condition of "none of the 1st nodes of sub-branches in the first parallelizable branch group has a parent node"); and fusing a plurality of nodes in each of the one or more parallelizable branch groups to obtain a second computational graph based on the first computational graph, wherein a sub-branch to which each of the plurality of nodes belongs is different from a sub-branch to which any other one of the plurality of nodes belongs (Figure 3(2), page 4: Merged Conv[a&b]).

Claim 2 is rejected as applied above in rejecting claim 1. Furthermore, Ding discloses: The method according to claim 1, wherein when there are a plurality of parallelizable branch groups, the parallelizable branch groups further comprise a second parallelizable branch group, and the second parallelizable branch group meets the following conditions: input for all sub-branches in the second parallelizable branch group comes from a same node, output of at least two sub-branches in the second parallelizable branch group is directed to a same node, and each sub-branch in the second parallelizable branch group comprises at least two nodes (page 4, Figure 3(1): node represents a multiplication operation).

Claim 3 is rejected as applied above in rejecting claim 1.
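The claim-1 flow that the examiner maps onto Ding's Figure 3 (group the sub-branches that share an input node, then fuse at most one node per sub-branch) can be sketched as follows. The graph encoding, node names, and string-based "fusion" are invented for illustration only; this is neither the applicant's nor Ding's actual implementation.

```python
from collections import defaultdict

def group_parallel_branches(edges, source):
    """Each direct successor of the shared input node heads one sub-branch
    (one of the alternative grouping conditions recited in claim 1)."""
    children = defaultdict(list)
    for src, dst in edges:
        children[src].append(dst)
    return [[head] + children[head] for head in children[source]]

def fuse_across_branches(group):
    """Fuse one node drawn from each distinct sub-branch, so every fused
    node comes from a different sub-branch, as claim 1 requires."""
    picked = [branch[0] for branch in group]
    return "fused(" + "+".join(picked) + ")"

# Toy graph loosely modeled on Ding Fig. 3(1): one input feeds two branches.
edges = [("input", "conv_a"), ("input", "conv_b"),
         ("conv_a", "conv_d"), ("conv_b", "matmul_e")]
group = group_parallel_branches(edges, "input")
print(fuse_across_branches(group))  # fused(conv_a+conv_b)
```

The fused placeholder string stands in for the "Merged Conv[a&b]" node of Ding's Figure 3(2).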
Furthermore, Ding discloses: The method according to claim 1, wherein the fusing a plurality of nodes in each of the one or more parallelizable branch groups to obtain a second computational graph based on the first computational graph comprises: removing a node that does not support fusion from a target sub-branch in each parallelizable branch group, to obtain a third parallelizable branch group, wherein the target sub-branch is any sub-branch in each parallelizable branch group, the node that does not support fusion comprises a node indicating a specific operation, and the specific operation comprises at least one of the following operations: a matrix multiplication operation and a convolution operation; and fusing a plurality of nodes in the third parallelizable branch group to obtain a fusion node, wherein the second computational graph comprises the fusion node and an unfused node in the first computational graph, and a sub-branch to which each of the plurality of nodes in the third parallelizable branch group belongs is different from a sub-branch to which any other one of the plurality of nodes belongs (page 4, Figure 3(1): node represents a multiplication operation).

Claim 4 is rejected as applied above in rejecting claim 3. Furthermore, Ding discloses: The method according to claim 3, further comprising: repeatedly performing the operation of fusing a plurality of nodes in the third parallelizable branch group to obtain a fusion node, until a quantity of unfused nodes in the third parallelizable branch group is less than 2 (page 4, Figure 3(1): nodes are fused together until a criterion is met).

Claim 5 is rejected as applied above in rejecting claim 3. Furthermore, Ding discloses: The method according to claim 3, wherein the fusing a plurality of nodes in the third parallelizable branch group to obtain a fusion node comprises: obtaining m node combinations based on n nodes in the third parallelizable branch group, wherein the n nodes respectively belong to n branches that constitute the third parallelizable branch group, each of the m node combinations comprises at least two nodes, m≥1, n≥2, and 2m≤n; assessing, by using a computing power assessment model, computing power required by each of the m node combinations, to obtain m assessment results, wherein each of the m assessment results represents one of the following cases: computing power resources to be consumed by each of the m node combinations, or computing power resources to be saved by each of the m node combinations; and when a first assessment result meets a preset condition, fusing nodes in a first node combination corresponding to the first assessment result, to obtain one or more first fusion nodes, wherein the first assessment result is one of the m assessment results, and the first node combination is one of the m node combinations (page 4, Figure 3(1): nodes are fused together until a criterion is met).

Claim 6 is rejected as applied above in rejecting claim 5.
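The cost-model selection recited in claims 5-6 (enumerate m node combinations across the branches, assess each with a computing-power model, and fuse only the combination whose assessment is optimal) might look like the sketch below. The "saved computing power" model is a deliberately naive placeholder, not the application's actual assessment model.

```python
from itertools import combinations

def assess_saved_power(combo, cost):
    """Saved resources = summed standalone cost minus an assumed fused cost
    (placeholder assumption: fused cost equals the most expensive member)."""
    return sum(cost[n] for n in combo) - max(cost[n] for n in combo)

def pick_and_fuse(nodes, cost):
    """Score every combination of >= 2 nodes and fuse the optimal one,
    mirroring the 'optimal among the m assessment results' condition."""
    combos = [c for r in range(2, len(nodes) + 1)
              for c in combinations(nodes, r)]          # the m combinations
    saving, best = max((assess_saved_power(c, cost), c) for c in combos)
    return best if saving > 0 else None

cost = {"conv_a": 4.0, "conv_b": 4.0, "matmul_e": 9.0}
print(pick_and_fuse(["conv_a", "conv_b", "matmul_e"], cost))
```

Under this toy model, fusing all three nodes saves the most, so the three-node combination is selected.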
Furthermore, Ding discloses: The method according to claim 5, wherein that the first assessment result meets the preset condition comprises at least one of the following cases: when each of the m assessment results represents computing power resources to be consumed by each of the m node combinations, the first assessment result meets a computing power requirement of a module (device) that is in acceleration hardware and that specifically performs a computational task; when each of the m assessment results represents computing power resources to be saved by each of the m node combinations, the first assessment result is optimal among the m assessment results; or when each of the m assessment results represents computing power resources to be saved by each of the m node combinations, the first assessment result is optimal among x assessment results, wherein the x assessment results are at least two of the m assessment results (page 4, Figure 3(1): nodes are fused together until a criterion is met).

Claim 7 is rejected as applied above in rejecting claim 1. Furthermore, Ding discloses: The method according to claim 1, wherein the extracting one or more parallelizable branch groups from the first computational graph comprises: searching the first computational graph for a plurality of first branches that have a common first parent node, and obtaining a parallelizable branch group based on the plurality of first branches, wherein the first parent node is any parent node in the first computational graph; or searching the first computational graph for a plurality of second branches that have a common first child node, and obtaining a parallelizable branch group based on the plurality of second branches, wherein the first child node is any child node in the first computational graph (page 4, Figure 3(1): nodes are fused together until a criterion is met).

Claim 8 is rejected as applied above in rejecting claim 7. Furthermore, Ding discloses: The method according to claim 7, wherein the obtaining a parallelizable branch group based on the plurality of first branches comprises: searching each first branch downward by using the first parent node as a start point, until a common second parent node or a common second child node is found during downward searching, to obtain a parallelizable branch group corresponding to the plurality of first branches, wherein the parallelizable branch group comprises first sub-branches respectively corresponding to the plurality of first branches, and a node comprised in each first sub-branch is a node obtained during downward searching of each first sub-branch (page 4, Figure 3(1): nodes are fused together until a criterion is met).

Claim 9 is rejected as applied above in rejecting claim 7. Furthermore, Ding discloses: The method according to claim 7, wherein the obtaining a parallelizable branch group based on the plurality of second branches comprises: searching each second branch upward by using the first child node as a start point, until a common third parent node or a common third child node is found during upward searching, to obtain a parallelizable branch group corresponding to the plurality of second branches, wherein the parallelizable branch group comprises second sub-branches respectively corresponding to the plurality of second branches, and a node comprised in each second sub-branch is a node obtained during upward searching of each second sub-branch (page 4, Figure 3(1): nodes are fused together until a criterion is met).

Claim 10 is rejected as applied above in rejecting claim 1.
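The downward search of claims 7-8 amounts to walking each branch from the shared parent until the branches re-converge, and keeping only the parallel portion as sub-branches. A minimal sketch, assuming the per-branch paths are already enumerated as node lists (data and names are hypothetical):

```python
def sub_branches(paths):
    """Trim each downward path at the nodes shared by every path (the
    re-convergence point where the search stops), keeping only the
    parallel portion of each branch."""
    common = set(paths[0]).intersection(*paths[1:])
    return [[n for n in p if n not in common] for p in paths]

paths = [["conv_a", "conv_d", "add_f"],    # branch 1 below the shared parent
         ["conv_b", "matmul_e", "add_f"]]  # branch 2 below the shared parent
print(sub_branches(paths))  # [['conv_a', 'conv_d'], ['conv_b', 'matmul_e']]
```

The upward search of claim 9 is symmetric: walk from a shared child toward the inputs instead.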
Furthermore, Ding discloses: The method according to claim 1, wherein the extracting one or more parallelizable branch groups from the first computational graph further comprises: searching the first computational graph for a plurality of third branches, and obtaining a parallelizable branch group based on the plurality of third branches, wherein a first node in each of the plurality of third branches has no parent node; or searching the first computational graph for a plurality of fourth branches, and obtaining a parallelizable branch group based on the plurality of fourth branches, wherein a last node in each of the fourth branches has no child node (page 4, Figure 3(1): nodes are fused together until a criterion is met).

Claim 11 is rejected as applied above in rejecting claim 10. Furthermore, Ding discloses: The method according to claim 10, wherein the obtaining a parallelizable branch group based on the plurality of third branches comprises: searching each third branch downward by using the first node in each third branch as a start point, until a same parent node or a same child node is found during downward searching, to obtain a parallelizable branch group corresponding to the plurality of third branches, wherein the parallelizable branch group comprises third sub-branches respectively corresponding to the plurality of third branches, and a node comprised in each third sub-branch is a node obtained during downward searching of each third sub-branch (page 4, Figure 3(1): nodes are fused together until a criterion is met).

Claim 12 is rejected as applied above in rejecting claim 10. Furthermore, Ding discloses: The method according to claim 10, wherein the obtaining a parallelizable branch group based on the plurality of fourth branches comprises: searching each fourth branch upward by using the last node in each fourth branch as a start point, until a same parent node or a same child node is found during upward searching, to obtain a parallelizable branch group corresponding to the plurality of fourth branches, wherein the parallelizable branch group comprises fourth sub-branches respectively corresponding to the plurality of fourth branches, and a node comprised in each fourth sub-branch is a node obtained during upward searching of each fourth sub-branch (page 4, Figure 3(1): nodes are fused together until a criterion is met).

Claim 13 is rejected as applied above in rejecting claim 1. Furthermore, Ding discloses: The method according to claim 1, wherein the extracting one or more parallelizable branch groups from the first computational graph further comprises: when a target node is not a node that does not support fusion, simplifying a local structure around the target node to obtain a fifth branch, wherein the target node is a node that is in the first computational graph and that does not belong to any parallelizable branch group; and when there are a plurality of fifth branches, obtaining a parallelizable branch group based on the plurality of fifth branches (page 4, Figure 3(1): nodes are fused together until a criterion is met).

Claim 14 is rejected as applied above in rejecting claim 1. Furthermore, Ding discloses: The method according to claim 1, further comprising: compiling a fusion node in the second computational graph to obtain an operator kernel corresponding to the fusion node (page 4, Figure 3(1): nodes are fused together until a criterion is met).

Claim 15 is rejected as applied above in rejecting claim 14.
Furthermore, Ding discloses: The method according to claim 14, wherein the fusion node is obtained by fusing p nodes, and the compiling the fusion node in the second computational graph to obtain an operator kernel corresponding to the fusion node comprises: separately scheduling the p nodes to obtain p sub-intermediate-representations (IRs) respectively corresponding to the p nodes; fusing the p sub-IRs to obtain a total IR; and compiling the total IR to obtain the operator kernel corresponding to the fusion node (page 4, Figure 3(1): nodes are fused together until a criterion is met).

Claim 16 is rejected as applied above in rejecting claim 1. Furthermore, Ding discloses: The method according to claim 1, wherein the deep learning framework is one of: MindSpore, TensorFlow, TensorNetwork, PyTorch, MXNet, Caffe, or Theano (Section 2: TensorFlow).

Regarding claim 17, Ding discloses: A node fusion system for a computational graph, comprising: at least one processor (Section 6.3: processor); and at least one processor memory coupled to the at least one processor to store program instructions (Section 6.3: each thread block on a SM is partitioned and executes in a single instruction multiple thread fashion), which when executed by the at least one processor, cause the at least one processor to: convert a first neural network into a first computational graph (page 4, Figure 3(1)); extract one or more parallelizable branch groups from the first computational graph, wherein a parallelizable branch group indicates that a plurality of sub-branches belonging to a parallelizable branch group support parallel execution, the parallelizable branch group comprises a first parallelizable branch group (Figure 3(1), Figure 3(2), page 4: one parallelizable branch group is extracted from the exemplary first computational graph; the first parallelizable branch group comprises two sub-branches including a first sub-branch and a second sub-branch, wherein the first sub-branch starts at the input node (Figure 3(1)) and goes through the nodes Conv[a] and Conv[b] and ends at node Conv[d]; the second sub-branch starts at the input node and goes through the node Conv[b] and ends at the node Matmul[e]), and the first parallelizable branch group meets at least one of the following conditions: input for all sub-branches in the first parallelizable branch group comes from a same node and output of at least two sub-branches in the first parallelizable branch group is directed to different nodes, output of all sub-branches in the first parallelizable branch group is directed to a same node and input for at least two sub-branches in the first parallelizable branch group comes from different nodes, none of 1st nodes of sub-branches in the first parallelizable branch group has a parent node, or none of last nodes of sub-branches in the first parallelizable branch group has a child node (Figure 3, page 4: the two sub-branches meet the condition of "none of the 1st nodes of sub-branches in the first parallelizable branch group has a parent node"); and fuse a plurality of nodes in each of the one or more parallelizable branch groups to obtain a second computational graph based on the first computational graph, wherein a sub-branch to which each of the plurality of nodes belongs is different from a sub-branch to which any other one of the plurality of nodes belongs (Figure 3(2), page 4: Merged Conv[a&b]).

Claim 18 is rejected as applied above in rejecting claim 17.
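The claim-15 compilation flow (schedule each of the p fused nodes into its own sub-IR, fuse the sub-IRs into one total IR, then compile that into a single operator kernel) can be illustrated with string stand-ins. None of these helpers correspond to a real compiler API; they only trace the data flow of the claimed steps.

```python
def schedule(node):
    """Produce a per-node sub-IR (stand-in string, not a real IR)."""
    return f"ir::{node}"

def fuse_irs(sub_irs):
    """Fuse the p sub-IRs into one total IR for the fusion node."""
    return ";".join(sub_irs)

def compile_kernel(total_ir):
    """Stand-in for codegen: total IR -> one operator kernel."""
    return f"kernel[{total_ir}]"

nodes = ["conv_a", "conv_b"]             # p = 2 nodes fused into one node
sub_irs = [schedule(n) for n in nodes]   # p sub-IRs
kernel = compile_kernel(fuse_irs(sub_irs))
print(kernel)  # kernel[ir::conv_a;ir::conv_b]
```

The point of the flow is that the fused node is compiled once, yielding a single kernel in place of p separate ones.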
Furthermore, Ding discloses: The system according to claim 17, wherein when there are a plurality of parallelizable branch groups, the parallelizable branch groups further comprise a second parallelizable branch group, and the second parallelizable branch group meets the following conditions: input for all sub-branches in the second parallelizable branch group comes from a same node, output of at least two sub-branches in the second parallelizable branch group is directed to a same node, and each sub-branch in the second parallelizable branch group comprises at least two nodes (page 4, Figure 3(1): node represents a multiplication operation).

Regarding claim 19, Ding discloses: A non-transitory computer-readable storage medium, storing one or more instructions that, when executed by at least one processor, cause the at least one processor to: convert a first neural network into a first computational graph (page 4, Figure 3(1)); extract one or more parallelizable branch groups from the first computational graph, wherein a parallelizable branch group indicates that a plurality of sub-branches belonging to a parallelizable branch group support parallel execution, the parallelizable branch group comprises a first parallelizable branch group (Figure 3(1), Figure 3(2), page 4: one parallelizable branch group is extracted from the exemplary first computational graph; the first parallelizable branch group comprises two sub-branches including a first sub-branch and a second sub-branch, wherein the first sub-branch starts at the input node (Figure 3(1)) and goes through the nodes Conv[a] and Conv[b] and ends at node Conv[d]; the second sub-branch starts at the input node and goes through the node Conv[b] and ends at the node Matmul[e]), and the first parallelizable branch group meets at least one of the following conditions: input for all sub-branches in the first parallelizable branch group comes from a same node and output of at least two sub-branches in the first parallelizable branch group is directed to different nodes, output of all sub-branches in the first parallelizable branch group is directed to a same node and input for at least two sub-branches in the first parallelizable branch group comes from different nodes, none of 1st nodes of sub-branches in the first parallelizable branch group has a parent node, or none of last nodes of sub-branches in the first parallelizable branch group has a child node (Figure 3(2), page 4: Merged Conv[a&b]); and fuse a plurality of nodes in each of the one or more parallelizable branch groups to obtain a second computational graph based on the first computational graph, wherein a sub-branch to which each of the plurality of nodes belongs is different from a sub-branch to which any other one of the plurality of nodes belongs (Figure 3(2), page 4: Merged Conv[a&b]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAVEH ABRISHAMKAR, whose telephone number is (571) 272-3786. The examiner can normally be reached M-F, 9-5:30.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jung Kim, can be reached at 571-272-3804. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KAVEH ABRISHAMKAR/
01/28/2026
Primary Examiner, Art Unit 2494

Prosecution Timeline

Jun 26, 2023: Application Filed
Jan 28, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598086: TOKENIZED INDUSTRIAL AUTOMATION SOFTWARE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598216: SMALL-FOOTPRINT ENDPOINT DATA LOSS PREVENTION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12585761: SYSTEM AND METHOD FOR COMBINING CYBER-SECURITY THREAT DETECTIONS AND ADMINISTRATOR FEEDBACK (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585771: LEARNED CONTROL FLOW MONITORING AND ENFORCEMENT OF UNOBSERVED TRANSITIONS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579280: SYSTEMS AND METHODS FOR VULNERABILITY SCANNING OF DEPENDENCIES IN CONTAINERS (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 95% (+16.9%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 1020 resolved cases by this examiner. Grant probability derived from career allow rate.
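The projections above are internally consistent under one assumption: the interview lift is added in percentage points to the career allow rate. A quick check:

```python
# Assumption: "with interview" = career allow rate + interview lift (points).
granted, resolved = 797, 1020
base = granted / resolved             # career allow rate, about 0.781
with_interview = base + 0.169         # add the +16.9-point interview lift
print(round(base * 100), round(with_interview * 100))  # 78 95
```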
