Prosecution Insights
Last updated: April 18, 2026
Application No. 18/295,306

DISTRIBUTED TRAINING OF GRAPH NEURAL NETWORKS (GNN) BASED KNOWLEDGE GRAPH EMBEDDING MODELS

Non-Final OA: §101, §102
Filed
Apr 04, 2023
Examiner
HASTY, NICHOLAS
Art Unit
2141
Tech Center
2100 — Computer Architecture & Software
Assignee
International Business Machines Corporation
OA Round
1 (Non-Final)
51%
Grant Probability
Moderate
1-2
OA Rounds
4y 8m
To Grant
83%
With Interview

Examiner Intelligence

Grants 51% of resolved cases
51%
Career Allow Rate
178 granted / 348 resolved
-3.9% vs TC avg
Interview Lift
+32.3%
Strong lift: allow rate with an interview vs. without, among resolved cases with an interview
Typical timeline
4y 8m
Avg Prosecution
31 currently pending
Career history
379
Total Applications
across all art units

Statute-Specific Performance

§101
10.7%
-29.3% vs TC avg
§103
68.5%
+28.5% vs TC avg
§102
14.2%
-25.8% vs TC avg
§112
1.4%
-38.6% vs TC avg
Tech Center averages are estimates • Based on career data from 348 resolved cases

Office Action

§101 §102
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to communications: Application filed on 4/4/2023. Claims 1-20 are pending. Claims 1, 8, and 15 are independent.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-7, 8-14, and 15-20 are directed towards a method, a system, and a computer program product, respectively.

Step 2A, Prong 1: Claim 1 recites: partitioning the knowledge graph into a plurality of partitions (deciding how to partition a knowledge graph appears to be practically implementable in the human mind and is understood to be a recitation of a mental process); expanding at least one partition of the plurality of partitions (determining how to expand a partition appears to be practically implementable in the human mind and is understood to be a recitation of a mental process); forming, for each training process, an edge mini batch (identifying a mini batch appears to be practically implementable in the human mind and is understood to be a recitation of a mental process); and for each edge mini batch, generating a computational graph (generating a computational graph appears to be practically implementable in the human mind and is understood to be a recitation of a mental process).

Step 2A, Prong 2: Claims 1, 8, and 15: receiving a knowledge graph of a data set (this limitation appears to be directed to receiving information, which is understood to be insignificant extra-solution activity); launching a training process for each partition of the plurality of partitions, wherein, during a training epoch, a respective training process samples positive and negative samples from a respective partition (this limitation recites using a neural network as a tool to perform an abstract idea, which is not indicative of integration into a practical application).

Step 2B: Claims 1, 8, and 15: receiving a knowledge graph of a data set (this limitation appears to be directed to receiving information, which is understood to be insignificant extra-solution activity such as data gathering; MPEP 2106.05(g)); launching a training process for each partition of the plurality of partitions, wherein, during a training epoch, a respective training process samples positive and negative samples from a respective partition (this limitation recites using a neural network as a tool to perform an abstract idea, which, taken alone or in combination, fails to amount to significantly more than the judicial exception; MPEP 2106.05(f)).

Dependent Claims: Claims 2-7, 9-14, and 16-20 only recite further abstract ideas (mental processes) and thus are ineligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Nouri et al. (Scalable Graph Embedding Learning On A Single GPU).

In regards to claim 1, Nouri et al. discloses a computer-implemented method comprising: receiving a knowledge graph of a data set (Nouri et al., pg. 4, section II.A, para. 3: when it trains a knowledge graph, it stores the node embedding parameters in CPU memory); partitioning the knowledge graph into a plurality of partitions (Nouri et al., pg. 5, section III, para. 1: uniform partitioning is used to split up node embedding parameters into n disjoint partitions that are calculated based on the available GPU memory and then stored on a disk); expanding at least one partition of the plurality of partitions (Nouri et al., pg. 4, section II.A, para. 2: uses random walks to expand the neighborhood of a vertex); launching a training process for each partition of the plurality of partitions, wherein, during a training epoch, a respective training process samples positive and negative samples from a respective partition (Nouri et al., pg. 5, section III, para. 1: each epoch in our training involves iterating over all edges in a partition; after performing the training on each bucket, the node embedding related to the next bucket will be swapped in GPU memory); forming, for each training process, an edge mini batch (Nouri et al., pg. 4, section II.A, para. 3: it constructs a sub-graph, moves all data in the sub-graph to GPU memory, and performs many mini-batch training steps on the sub-graph); and for each edge mini batch, generating a computational graph (Nouri et al., pg. 6, section III.A.4, para. 1: collaborative training allows the GPU to train node embeddings efficiently, with synchronization required only after training each partition of the graph).

In regards to claim 2, Nouri et al. discloses the computer-implemented method of claim 1, further comprising determining a gradient according to each respective computational graph (Nouri et al., pg. 4, section II.A, para. 3: all workers need to communicate with the parameter server, including synchronously sending the gradients and receiving the average gradient).

In regards to claim 3, Nouri et al. discloses the computer-implemented method of claim 2, further comprising sharing a determined gradient across two or more partitions of the plurality of partitions (Nouri et al., pg. 4, section II.A, para. 3: all workers need to communicate with the parameter server, including synchronously sending the gradients and receiving the average gradient).

In regards to claim 4, Nouri et al. discloses the computer-implemented method of claim 3, further comprising determining an updated embedding model according to an average of the gradients (Nouri et al., pg. 4, section II.A, para. 3: all workers need to communicate with the parameter server, including synchronously sending the gradients and receiving the average gradient).

In regards to claim 5, Nouri et al. discloses the computer-implemented method of claim 1, wherein partitioning the knowledge graph comprises partitioning the knowledge graph into P disjoint subsets (Nouri et al., pg. 5, section III.A.1, para. 1: the node embeddings will be partitioned into a p-disjoint set of vertices).

In regards to claim 6, Nouri et al. discloses the computer-implemented method of claim 1, wherein expanding the at least one partition comprises adding n-hops of neighbors of each vertex in the respective partition, where n is equal to a number of graph convolutional layers in an embedding model (Nouri et al., pg. 5, section III.A.1, para. 1: we have two sets of vertices for one-hop and two-hop connectivity).

In regards to claim 7, Nouri et al. discloses the computer-implemented method of claim 1, wherein a number of the plurality of partitions equals a number of available computing nodes (Nouri et al., pg. 5, section III, para. 1: uniform partitioning is used to split up node embedding parameters into partitions that are calculated based on available GPU memory).

Claims 8-14 recite substantially similar limitations to claims 1-7. Thus claims 8-14 are rejected along the same rationale as claims 1-7. Claims 15-20 recite substantially similar limitations to claims 1-6. Thus claims 15-20 are rejected along the same rationale as claims 1-6.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Zhu et al. (AliGraph: A Comprehensive Graph Neural Network Platform) teaches a novel caching strategy for generating knowledge graphs. Zhao et al. (US 2023/0351153) teaches using gradient descent to update model parameters. Dai et al. (US 2023/0289626) teaches using negative samples in generating a knowledge graph.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS HASTY, whose telephone number is (571) 270-7775. The examiner can normally be reached Monday-Friday, 8:30am-5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matt Ell, can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/N.H/ Examiner, Art Unit 2141
/TAN H TRAN/ Primary Examiner, Art Unit 2141
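For readers tracing the §102 claim mapping, the independent claims describe a partition-expand-train loop. The minimal Python sketch below illustrates those steps (P disjoint subsets per claim 5, n-hop neighbor expansion per claim 6, and a per-partition edge mini-batch). The toy graph and all names are hypothetical; this is not the applicant's or Nouri's actual implementation.

```python
import random
from collections import defaultdict

# Toy edge list standing in for a knowledge graph: (head, tail) pairs.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4)]
vertices = sorted({v for e in edges for v in e})
adj = defaultdict(set)
for h, t in edges:
    adj[h].add(t)
    adj[t].add(h)

def partition_vertices(vertices, p):
    """Split vertices into P disjoint subsets (claim 5)."""
    return [set(vertices[i::p]) for i in range(p)]

def expand_partition(part, n_hops):
    """Add n hops of neighbors around each vertex (claim 6: n equals the
    number of graph convolutional layers), so a worker has every neighbor
    its convolutions will touch without remote fetches mid-epoch."""
    expanded = set(part)
    frontier = set(part)
    for _ in range(n_hops):
        frontier = {nb for v in frontier for nb in adj[v]} - expanded
        expanded |= frontier
    return expanded

def edge_mini_batch(part, batch_size=2):
    """Each training process samples an edge mini-batch from its partition."""
    local = [e for e in edges if e[0] in part and e[1] in part]
    return random.sample(local, min(batch_size, len(local)))

parts = partition_vertices(vertices, p=2)
expanded = [expand_partition(p_, n_hops=2) for p_ in parts]
for i, part in enumerate(expanded):
    print(i, sorted(part), edge_mini_batch(part))
```

In a real distributed setting each expanded partition would go to one worker (claim 7: one partition per available computing node), with gradients averaged across workers after each step.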

Prosecution Timeline

Apr 04, 2023
Application Filed
Apr 01, 2026
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579517
AUTOMATED DESCRIPTION GENERATION FOR JOB POSTING
2y 5m to grant Granted Mar 17, 2026
Patent 12578840
Devices, Methods, and Graphical User Interfaces for Navigating, Displaying, and Editing Media Items with Multiple Display Modes
2y 5m to grant Granted Mar 17, 2026
Patent 12561605
USER INTERFACE MANAGEMENT FRAMEWORK
2y 5m to grant Granted Feb 24, 2026
Patent 12547291
Tree Frog Computer Navigation System for the Hierarchical Visualization of Data
2y 5m to grant Granted Feb 10, 2026
Patent 12536468
MODEL TRAINING METHOD, SHORT MESSAGE AUDITING MODEL TRAINING METHOD, SHORT MESSAGE AUDITING METHOD, ELECTRONIC DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
51%
Grant Probability
83%
With Interview (+32.3%)
4y 8m
Median Time to Grant
Low
PTA Risk
Based on 348 resolved cases by this examiner. Grant probability derived from career allow rate.
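The projection figures above are consistent with a simple derivation from the examiner's career record; the sketch below is a hypothetical reconstruction of that arithmetic (the dashboard's actual model may weight additional factors).

```python
# Examiner career stats as shown above.
granted, resolved = 178, 348

# Base grant probability is the career allow rate.
career_allow_rate = granted / resolved          # ~0.511, shown as 51%

# Historical interview lift for this examiner, added to the base rate.
interview_lift = 0.323
with_interview = career_allow_rate + interview_lift  # ~0.834, shown as 83%

print(f"{career_allow_rate:.1%} base, {with_interview:.1%} with interview")
```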
