Prosecution Insights
Last updated: April 19, 2026
Application No. 17/161,093

HYBRID GRAPH NEURAL NETWORK

Status: Non-Final OA (§103)
Filed: Jan 28, 2021
Examiner: GODO, MORIAM MOSUNMOLA
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 5 (Non-Final)

Predicted outcome:
Grant Probability: 44% (Moderate)
OA Rounds: 5-6
To Grant: 4y 8m
With Interview: 78%

Examiner Intelligence

Career Allow Rate: 44% (grants 44% of resolved cases; 30 granted / 68 resolved; -10.9% vs TC avg)
Interview Lift: +33.4% (strong), comparing resolved cases with vs. without an interview
Avg Prosecution: 4y 8m (typical timeline)
Currently Pending: 47
Total Applications: 115 (across all art units)
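The headline figures above are simple ratios, and a short check shows how they fit together. One assumption is labeled in the comments: "interview lift" is read here as the percentage-point increase in allowance rate for resolved cases that had an examiner interview.

```python
# Sanity-check of the examiner statistics shown above.
# Assumption: "interview lift" is the percentage-point increase in
# allowance rate for resolved cases that had an examiner interview.

granted, resolved = 30, 68
career_allow_rate = 100 * granted / resolved       # ~44.1, displayed as "44%"

tc_avg = career_allow_rate + 10.9                  # "-10.9% vs TC avg" implies a ~55% TC average

interview_lift = 33.4                              # "+33.4% interview lift"
rate_with_interview = career_allow_rate + interview_lift  # ~77.5, consistent with "78% With Interview"

print(round(career_allow_rate), round(tc_avg), round(rate_with_interview))
```

Under that reading, the 78% "With Interview" figure is simply the 44% career rate plus the 33.4-point lift, rounded.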

Statute-Specific Performance

§101: 16.1% (-23.9% vs TC avg)
§103: 56.7% (+16.7% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)

TC-average baselines are estimates. Based on career data from 68 resolved cases.

Office Action

§103
DETAILED ACTION

1. This Office action is in response to the submission for Application No. 17/161,093 filed on 11/17/2025. Claims 1-20 are presented for examination and are currently pending. Applicant's arguments have been carefully and respectfully considered.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

3. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/17/2025 has been entered.

Response to Arguments

4. The Examiner is withdrawing the rejections in the previous Office action because Applicant's amendment necessitated the new grounds of rejection presented in this Office action. The arguments have been considered but are moot in light of the newly added reference: Li in view of Cervantes now teaches the limitations of the independent claims. As a result, the independent claims are not allowable. Furthermore, dependent claims 2-13, 15-18 and 20, which depend directly or indirectly from claims 1, 14 and 19, are not allowable for similar reasons as argued above regarding the independent claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. Claims 1, 4-6, 14, 15, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. ("Neural inductive matrix completion with graph convolutional networks for miRNA-disease association prediction." Bioinformatics 36.8 (2020): 2538-2546, Advance Access Publication Date: 6 January 2020) in view of Cervantes et al. (US20220179882, filed 12/09/2020).

Regarding claim 1, Li teaches a computer implemented method (All experiments are carried on Windows 10 operation system with a Dell Precision T5820 workstation computer of an intel W-2145 8 cores, 3.7GHz CPU and 64G memory, pg. 2543, left col., section 3.1) comprising:

generating a first concept representation ([equation image], Fig. 2, pg. 2543) based on a first portion (disease similarity graph, Fig. 2) of a first knowledge base (Three databases were used in the experiments. The disease semantic similarity data was derived from the MeSH disease descriptor database … The miRNA functional similarity data was derived from the MISIM database … The known human miRNA-disease association data was obtained from the HMDD v2.0 database, pg. 2543, section 3.1, Fig. 2) using a first processing path (disease similarity graph → GCN (graph convolutional network) encoder for diseases → neural projection, Fig. 2) of a first neural network (Specifically, let Gm and Gd be the miRNA functional similarity network and disease semantic similarity network, respectively, pg. 2542, left col., first para.; We present a novel method of neural inductive matrix completion with graph convolutional network (NIMCGCN) for predicting miRNA-disease association, abstract. The Examiner notes the upper and lower GCN encoders in Fig. 2 are the first neural network), the first processing path (disease similarity graph → GCN encoder for diseases → neural projection, Fig. 2; We obtained a hierarchical directed acyclic graph (DAG) directly from MeSH, pg. 2540, left col., section 2.3 Disease–disease similarity) being configured to alternately operate within a Euclidean space and a hyperbolic space (the disease semantic similarity matrix Ad ∈ R^(n×n) is Euclidean space → the GCN encoder for diseases encodes the hierarchical directed acyclic graph (DAG) in hyperbolic space → neural projection operates on the matrix in Euclidean space; GCN transforms the embeddings X(t-1) into X(t) in each layer [equation images], pg. 2542, right col., first para. The Examiner notes that the GCN transforming the embeddings X(t-1) into X(t) in each layer reads on Applicant's Figs. 5 and 6, in which the embedding and graph convolutional layers are separate representation domains), the first processing path comprising a first hyperbolic graph convolution layer (the GCN encoder for diseases in Fig. 2 comprises a first hyperbolic graph convolution layer because it captures the hierarchical characteristics in the DAG, which is similar to Applicant's disclosure of what a hyperbolic graph convolution layer is; the instant specification discloses that "the OntoGNN employs hyperbolic graph convolution layers, to encode parent and child concepts of each concept in the hyperbolic space, capturing the hierarchical characteristics in an ontology" [0026]; Specifically, the hierarchical facets are fed into the one or more hyperbolic graph convolutional layers [0027]; Compared to Euclidean space, hyperbolic space better captures the hierarchical characteristic of ontologies [0082]), the first portion (disease semantic similarity graph, Fig. 2) of the first knowledge base (pg. 2543, section 3.1, Fig. 2, as cited above) comprising a first key concept (the middle node of the disease semantic similarity graph in Fig. 2 represents the first key concept), the generating comprising:

embedding a set of nodes (We obtained a hierarchical directed acyclic graph (DAG) directly from MeSH, where each node represents a disease and each directed edge in the DAG is from a general disease term to a specific disease term, pg. 2540, left col., section 2.3 Disease–disease similarity) of the first portion in the Euclidean space to form a set of node embeddings (The semantic similarity scores between different diseases were calculated based on disease DAG, pg. 2540, left col., section 2.3; Am was considered the similarity adjacent matrix for miRNAs and Ad for diseases, pg. 2544, left col., third para., Fig. 2; [equation image], pg. 2540, right col. The Examiner notes the nodes in the DAG were embedded into the Euclidean space of the disease semantic similarity matrix Ad ∈ R^(n×n), which comprises similarity scores);

mapping the Euclidean space to the hyperbolic space (in Fig. 2, the disease semantic similarity matrix Ad ∈ R^(n×n) is a Euclidean space → the GCN encoder for diseases encodes the hierarchical DAG in hyperbolic space);

projecting the set of node embeddings from the Euclidean space to the hyperbolic space (row matrices in Euclidean space are projected to the GCN encoder for diseases, Fig. 2);

aggregating, to form an aggregated embedding, the set of node embeddings in the hyperbolic space (four row matrices in the GCN encoder are aggregated into a single matrix at the output, Fig. 2); and

projecting the aggregated embedding from the hyperbolic space back to the Euclidean space (the GCN encoder for diseases encodes the hierarchical DAG in hyperbolic space → neural projection operates on the matrix in Euclidean space, Fig. 2);

generating a second concept representation ([equation image], Fig. 2, pg. 2543) based on a second portion (miRNA functional similarity graph, Fig. 2) of the first knowledge base (pg. 2543, section 3.1, Fig. 2, as cited above) using a second processing path (miRNA functional similarity graph → GCN encoder for miRNA → neural projection, Fig. 2) of the first neural network (pg. 2542, left col., first para.; abstract, as cited above. The Examiner notes the upper and lower GCN encoders in Fig. 2 are the first neural network), the second processing path comprising a first heterogeneous graph convolution layer (the GCN encoder for miRNA in Fig. 2 comprises a first heterogeneous graph convolution layer because it captures the non-hierarchical characteristics in the miRNA functional similarity graph, which is an undirected graph), the second portion (miRNA functional similarity graph, Fig. 2) of the first knowledge base comprising the first key concept (the middle node of the disease semantic similarity graph in Fig. 2 represents the first key concept);

generating a first unified concept representation (Matrix Completion, Fig. 2) including concatenating the first concept representation ([equation image], Fig. 2) with the second concept representation ([equation image], Fig. 2); and

generating a prediction score using a predictive matching module (Prediction scores of the test sample and all candidate samples (those miRNA-disease pairs without association evidences, i.e. the '0' entries in interaction matrix) could be obtained by NIMCGCN.
Then, the test sample was ranked with all candidate samples based on their scores, and if the rank was higher than the specific threshold, the test sample was successfully predicted, pg. 2544, right col., third para.), wherein the prediction score (pg. 2544, right col., third para., as cited above) is based at least in part on a connection between the first key concept (the middle node of the disease semantic similarity graph in Fig. 2 represents the first key concept) and a top-level concept (Once the specific disease prediction results of NIMCGCN were obtained, we removed the miRNAs with value equal to 1 in the original miRNA-disease matrix. Then, the predicted results of the remaining new miRNAs were descending sorted by prediction scores. Thus, we obtained the top 50 associated miRNAs, pg. 2545, left col., fourth para.), and the prediction score being indicative of an extent of a match between the first unified concept representation (Matrix Completion, Fig. 2) and a second unified concept representation.

Li does not explicitly teach a second unified concept representation from a second knowledge base.
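The Euclidean-to-hyperbolic processing path the Examiner maps onto claim 1 (embed nodes in Euclidean space, project to hyperbolic space, aggregate there, project back) can be made concrete with a minimal sketch. This is not Li's NIMCGCN or Applicant's OntoGNN: it is a generic Poincaré-ball layer with curvature fixed at c = 1, tangent-space aggregation at the origin, and hypothetical names and shapes, shown only to illustrate the claimed sequence of steps.

```python
import numpy as np

def exp0(v, c=1.0):
    """Project a Euclidean (tangent) vector onto the Poincare ball (hyperbolic space)."""
    n = np.linalg.norm(v, axis=-1, keepdims=True).clip(1e-9)
    return np.tanh(np.sqrt(c) * n) * v / (np.sqrt(c) * n)

def log0(x, c=1.0):
    """Map a point on the Poincare ball back to Euclidean (tangent) space."""
    n = np.linalg.norm(x, axis=-1, keepdims=True).clip(1e-9)
    s = np.clip(np.sqrt(c) * n, 0.0, 1.0 - 1e-9)
    return np.arctanh(s) * x / (np.sqrt(c) * n)

def hyperbolic_conv_layer(X, A, W, c=1.0):
    """One claim-1-style pass: embed -> project to hyperbolic space ->
    aggregate there -> project the aggregate back to Euclidean space."""
    H = exp0(X @ W, c)                     # node embeddings projected into hyperbolic space
    A_norm = A / A.sum(axis=1, keepdims=True)
    H_agg = exp0(A_norm @ log0(H, c), c)   # neighbor aggregation on the ball (via tangent space)
    return log0(H_agg, c)                  # aggregated embedding back in Euclidean space

# toy usage: 4 nodes, 8-dim features, adjacency with self-loops
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
A = np.eye(4) + (rng.random((4, 4)) > 0.5)
W = 0.1 * rng.normal(size=(8, 8))
Y = hyperbolic_conv_layer(X, A, W)         # same shape as X, back in Euclidean space
```

The tangent-space trick (aggregating log-mapped points and re-projecting) is a common simplification in hyperbolic GNN implementations; real systems vary in where aggregation happens and whether curvature is trainable.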
Cervantes teaches the prediction score being indicative of an extent of a match (The embodiments of the machine learning approach to entity matching and relationship prediction advantageously results in trained machine learning models 121 that are effective at generalizing to new data [0050]; In step 813, the output module 211 provides the node embedding as an output (e.g., for entity matching and/or relationship prediction according to the embodiments described herein) [0123]) between the first unified concept representation (the first 5 rows of 703, Fig. 7, as the first concept representation and the next 5 rows of 703, Fig. 7, as the second concept representation are concatenated into the first unified concept representation) and a second unified concept representation (The intermediate representation 703 (with additional explicit features 707) [0085]. The Examiner notes the additional explicit features, Fig. 7, as a second unified concept representation) from a second knowledge base (the relationship module 209 interacts with the graph module 201 to create a knowledge graph of a plurality of location entities of a database [0077]; In one embodiment, the relationship module 209 can optionally add additional explicit features (e.g., encoded node type, additional location data, etc.) to either the intermediate representation (if generated) [0080]. The Examiner notes relationship module 209 is a second knowledge base).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Li to incorporate the teachings of Cervantes for the benefit of predicting relationships between entities or other related data attributes given minimal additional data (Cervantes [0048]).

Regarding claim 4, Li and Cervantes teach the method of claim 1. Cervantes teaches wherein the first knowledge base comprises a relational database (a Knowledge Graph (KG) or digital map data of a geographic database 107 provided by a service provider mapping platform 109, by matching similar entities of multiple data sources 111 (e.g., internal data 101, location graph data 105, geographic database 107, etc.) and discovering new relationships [0038]). The same motivation to combine as for independent claim 1 applies here.

Regarding claim 5, Li and Cervantes teach the method of claim 4. Cervantes teaches further comprising generating a source ontology from the relational database (The relationship types of a data source represent the ontology of the corresponding KG. Each different data source and/or domain under which the data source falls can use different ontologies to describe the relationships between entities of a KG [0039]). The same motivation to combine as for independent claim 1 applies here.

Regarding claim 6, Li and Cervantes teach the method of claim 1. Cervantes teaches wherein the first knowledge base (internal data 101 and geographic database 107, Fig. 4) comprises a source ontology (The organization of the nodes and relationships may be stored in an ontology (e.g., in the internal data 101, location graph data 105, the geographic database 107, or external data source) [0058]). The same motivation to combine as for independent claim 1 applies here.

Regarding claim 14, claim 14 is similar to claim 1 and is rejected in the same manner, with the same reasoning applying.
Further, Cervantes teaches a computer usable program product for cognitive analysis of a project description (FIG. 1: as shown, the system 100 includes a machine learning system 119 for providing a generalizable semantic-aware location representation for machine learning tasks such as entity matching for combining data sources and/or relationship prediction according to the various embodiments described herein … the machine learning system 119 includes or is otherwise associated with one or more machine learning models 121 (e.g., neural networks or other equivalent network) for generating node embeddings. The machine learning models 121 can also be used as part of a computer vision system for detecting new or updated places through image analysis [0136]), the computer program product comprising one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media, the program instructions executable by a processor to cause the processor to perform operations (According to another embodiment, a non-transitory computer-readable storage medium carries one or more sequences of one or more instructions which, when executed by one or more processors, cause, at least in part, an apparatus to generate a first context-aware vector representation of a first location entity in a first data source and a second context-aware vector representation of a second location entity in a second data source [0005]).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Li to incorporate the teachings of Cervantes for the benefit of predicting relationships between entities or other related data attributes given minimal additional data (Cervantes [0048]).

Regarding claim 15, Li and Cervantes teach the computer usable program product of claim 14. Cervantes teaches wherein the stored program instructions are stored in a computer readable storage device in a data processing system (Computer system 1600 also includes a memory 1604 coupled to bus 1610. The memory 1604, such as a random access memory (RAM) or other dynamic storage device, stores information including processor instructions for combining location data sources. Dynamic memory allows information stored therein to be changed by the computer system 1600 [0172]), and wherein the stored program instructions are transferred over a network from a remote data processing system (other components of the system 100 using … protocols. In this context, a protocol includes a set of rules defining how the network nodes within the communication network 113 interact with each other based on information sent over the communication links. The protocols are effective at different layers of operation within each node, from generating and receiving physical signals of various types, to selecting a link for transferring those signals, to the format of information indicated by those signals, to identifying which software application executing on a computer system sends or receives the information. The conceptually different layers of protocols for exchanging information over a network are described in the Open Systems Interconnection (OSI) Reference Model [0145]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Li to incorporate the teachings of Cervantes for the benefit of predicting relationships between entities or other related data attributes given minimal additional data (Cervantes [0048]).

Regarding claim 19, claim 19 is similar to claim 1 and is rejected in the same manner, with the same reasoning applying.
Further, Cervantes teaches a computer system comprising a processor and one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media, the program instructions executable by the processor to cause the processor to perform operations (FIG. 16 illustrates a computer system 1600 upon which an embodiment of the invention may be implemented. Computer system 1600 is programmed (e.g., via computer program code or instructions) to combine location data sources as described herein and includes a communication mechanism such as a bus 1610 for passing information between other internal and external components of the computer system 1600 [0169]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Li to incorporate the teachings of Cervantes for the benefit of predicting relationships between entities or other related data attributes given minimal additional data (Cervantes [0048]).

6. Claims 2, 3, 7-10, 17, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. ("Neural inductive matrix completion with graph convolutional networks for miRNA-disease association prediction." Bioinformatics 36.8 (2020): 2538-2546, Advance Access Publication Date: 6 January 2020) in view of Cervantes et al. (US20220179882, filed 12/09/2020), and further in view of Chami et al. ("Hyperbolic graph convolutional neural networks." Advances in Neural Information Processing Systems 32 (2019)).

Regarding claim 2, Li and Cervantes teach the method of claim 1. Cervantes teaches further comprising identifying a first related concept (neighboring location entities 417a-417e, Fig. 4) from the first knowledge base (internal data 101 and geographic database 107, Fig. 4) as part of the first portion (subgraph 413, Fig. 4) of the first knowledge base, based on the first related concept having a relationship with the first key concept (location entity 403 of interest [0062], Fig. 4. The Examiner notes the location entity of interest is the first key concept). Chami teaches the relationship being a hierarchical relationship (HGCN can successfully model these asymmetric and hierarchical relationships with hyperbolic attention and improves performance over all baselines, pg. 9, first para.).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Li and Cervantes to incorporate the teachings of Chami, because the transformation between different hyperbolic spaces at different layers allows HGCN to find the best geometry of hidden layers to achieve low distortion and high separation of class labels (Chami, pg. 2, fourth para.).

Regarding claim 3, Li, Cervantes and Chami teach the method of claim 2. Cervantes teaches further comprising identifying a second related concept (neighboring location entities 419a-419e [0062], Fig. 4) from the first knowledge base (internal data 101 and geographic database 107, Fig. 4) as part of the second portion (subgraph 415, Fig. 4) of the first knowledge base, based on the second related concept having a non-hierarchical relationship with the first key concept (location entity 407 of interest [0062], Fig. 4. The Examiner notes the location entity of interest is the first key concept, and there is a non-hierarchical relationship because subgraph 415 is an undirected graph). The same motivation to combine as for dependent claim 2 applies here.
Regarding claim 7, Li and Cervantes teach the method of claim 1. Chami teaches further comprising generating a first embedding of the first key concept in hyperbolic space (We derive GCN (graph convolutional neural network) operations in the hyperboloid model of hyperbolic space and map Euclidean input features to embeddings in hyperbolic spaces with different trainable curvature at each layer, abstract). The same motivation to combine as for dependent claim 2 applies here.

Regarding claim 8, Li, Cervantes and Chami teach the method of claim 7. Cervantes teaches wherein the generating of the first concept representation using the first processing path comprises aggregating embeddings of related concepts (By way of example, the vector representation or node embedding is a fixed-length, real-valued vector that encodes one or more attributes of the location … For example, if there are 300 neurons in the hidden layer, then the vector will be 300 elements in length with each element corresponding to the aggregation function value of a respective neuron [0122]) from the first portion of the first knowledge base. Chami teaches using a hyperbolic attention-based aggregation algorithm (We introduce a hyperbolic attention-based aggregation scheme that captures hierarchical structure of networks; (3) At different layers of HGCN we apply feature transformations in hyperbolic spaces of different trainable curvatures to learn low-distortion hyperbolic embeddings, pg. 2, third para.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Cervantes to incorporate the teachings of Chami, because the transformation between different hyperbolic spaces at different layers allows HGCN to find the best geometry of hidden layers to achieve low distortion and high separation of class labels (Chami, pg. 2, fourth para.).
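The attention-based aggregation scheme cited from Chami can be caricatured in a few lines. This sketch is a deliberate simplification and an assumption on my part, not HGCN itself: attention scores are plain dot products computed on tangent-space embeddings, non-edges are masked out, and curvature handling is omitted entirely.

```python
import numpy as np

def softmax_rows(z):
    """Row-wise softmax; -inf entries become exact zeros."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention_aggregate(H, A):
    """Attention-weighted neighbor aggregation.
    H: tangent-space node embeddings; A: adjacency with self-loops (so every
    row has at least one neighbor and the softmax is well defined)."""
    scores = H @ H.T                            # pairwise dot-product similarity
    scores = np.where(A > 0, scores, -np.inf)   # mask non-edges
    att = softmax_rows(scores)                  # per-node attention weights, rows sum to 1
    return att @ H                              # attention-weighted aggregation

# toy usage
rng = np.random.default_rng(1)
H = rng.normal(size=(5, 3))
A = np.eye(5) + (rng.random((5, 5)) > 0.6)
out = attention_aggregate(H, A)
```

In HGCN proper the attention weights are learned and the aggregation respects the hyperbolic geometry; the masking-plus-softmax skeleton, however, is the common core of attention-based graph aggregation.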
Regarding claim 9, Li, Cervantes and Chami teach the method of claim 7. Cervantes teaches further comprising generating a second embedding of the first key concept in Euclidean space (represent location entities using enriched (e.g., text-enriched) graph or node embeddings [0077]; FIG. 5 can be applied to the subgraph 415 of location entity 407 to generate the respective context-aware vector representation 405 [0063]. The Examiner notes the vectors are in Euclidean space). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Li and Cervantes to incorporate the teachings of Chami, because the transformation between different hyperbolic spaces at different layers allows HGCN to find the best geometry of hidden layers to achieve low distortion and high separation of class labels (Chami, pg. 2, fourth para.).

Regarding claim 10, Li, Cervantes and Chami teach the method of claim 9. Cervantes teaches wherein the generating of the second embedding comprises: identifying a global context in the first knowledge base for the first key concept (context-aware vector representation 401 of location entity 403 of the internal data 101 [0056]. The Examiner notes the context-aware representation is the global context); processing the global context to generate a numerical value as a global context feature of the first key concept (The vector representation 401 is sparse because the one-hot encoding results in a vector containing a value of 0 for the vast majority of the vector elements [0063]); and applying the global context feature to a feature embedding model that maps the global context feature (In other words, the graph module 201 creates the semantic-aware location representations (e.g., node embedding data) using a hypergraph: e.g., a temporary, auxiliary graph that connects nodes of the knowledge graph 901 for the purpose of creating node embeddings [0101]; FIG. 8 is a flowchart of a process for providing a semantic-aware or context-aware location representation, according to one embodiment) into the Euclidean space (In one embodiment, geographic features (e.g., two-dimensional or three-dimensional features) are represented using polylines and/or polygons (e.g., two-dimensional features) or polygon extrusions (e.g., three-dimensional features) [0148]).

Regarding claim 17, claim 17 is similar to claim 2 and is rejected in the same manner, with the same reasoning applying.

Regarding claim 18, claim 18 is similar to claim 3 and is rejected in the same manner, with the same reasoning applying.

Regarding claim 20, Li, Cervantes and Chami teach the computer system of claim 19. Cervantes teaches further comprising: identifying a first related concept (neighboring location entities 417a-417e, Fig. 4) from the first knowledge base (internal data 101 and geographic database 107, Fig. 4) as part of the first portion (subgraph 413, Fig. 4) of the first knowledge base, based on the first related concept having a relationship with the first key concept (location entity 403 of interest [0062], Fig. 4. The Examiner notes the location entity of interest is the first key concept); and identifying a second related concept (neighboring location entities 419a-419e [0062], Fig. 4) from the first knowledge base (internal data 101 and geographic database 107, Fig. 4) as part of the second portion (subgraph 415, Fig. 4) of the first knowledge base, based on the second related concept having a non-hierarchical relationship with the first key concept (location entity 407 of interest [0062], Fig. 4. The Examiner notes the location entity of interest is the first key concept, and there is a non-hierarchical relationship because subgraph 415 is an undirected graph).
Chami teaches the relationship being a hierarchical relationship (HGCN can successfully model these asymmetric and hierarchical relationships with hyperbolic attention and improves performance over all baselines, pg. 9, first para.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Li and Cervantes to incorporate the teachings of Chami, because the transformation between different hyperbolic spaces at different layers allows HGCN to find the best geometry of hidden layers to achieve low distortion and high separation of class labels (Chami, pg. 2, fourth para.).

7. Claims 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. ("Neural inductive matrix completion with graph convolutional networks for miRNA-disease association prediction." Bioinformatics 36.8 (2020): 2538-2546, Advance Access Publication Date: 6 January 2020) in view of Cervantes et al. (US20220179882, filed 12/09/2020), and further in view of Collomosse (US20210397942, filed 06/17/2020).

Regarding claim 11, Li and Cervantes teach the method of claim 1. Cervantes teaches a second heterogeneous graph convolution layer (Subgraph 415 (which is an undirected graph) comprises neighboring location entities 419a-419e [0062], Fig. 4; Location entities 701 can be learned by applying one or more convolutions 705 [0083], Fig. 7; the convolution layers 705, Fig. 7, receive location entity data, which are non-hierarchical data based on neighboring location entities 419a-419e, an undirected graph. The Examiner notes the third convolution layer of convolutions 705 is a second heterogeneous graph convolution layer because it receives non-hierarchical data).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Li to incorporate the teachings of Cervantes for the benefit of predicting relationships between entities or other related data attributes given minimal additional data (Cervantes [0048]).

Li and Cervantes do not explicitly teach further comprising: generating a third concept representation of a second key concept from the second knowledge base using a third processing path of a second neural network, the third processing path comprising a second hyperbolic graph convolution layer; generating a fourth concept representation of the second key concept from the second knowledge base using a fourth processing path of the second neural network, the fourth processing path comprising a second convolution layer; and generating a second unified concept representation including concatenating the third concept representation with the fourth concept representation.

Collomosse teaches generating a third concept representation (For example, the set of node embeddings output by the GCN may be represented as χn = {xn1, xn2, . . . , xnκ} [0046]. The Examiner notes X3 is a third concept representation) of a second key concept from the second knowledge base using a third processing path of a second neural network (the neural network 106 may be a graph convolutional network (GCN). GCNs are specialized convolutional neural networks (CNN) designed to analyze non-Euclidean data for deep learning [0030]; The training engine 1012 can teach, guide, tune, and/or train one or more neural networks [0070]; In addition, and as mentioned directly above, the neural network manager 1008 can manage the training and the use of various neural networks [0071]; In some embodiments, the neural network may be trained using one or more training data repositories that include labeled UX layouts.
For example, the method may include training the neural network using a training layout repository, the training layout repository including a plurality of labeled GUI layouts [0084]. The Examiner notes non-Euclidean refers to hyperbolic space and the data repositories correspond to the second knowledge base), the third processing path comprising a second hyperbolic graph convolution layer (The GCN 500 may include multiple layers, including one or more layers for processing node features and one or more layers for processing relationship features … and gn( ) represents the layers of GCN that process the node features [0045]; GCNs are specialized convolutional neural networks (CNN) designed to analyze non-Euclidean data for deep learning [0030]. The Examiner notes non-Euclidean refers to hyperbolic space); generating a fourth concept representation (For example, the set of node embeddings output by the GCN may be represented as χn = {xn1, xn2, . . . , xnκ} [0046]. The Examiner notes X4 is a fourth concept representation) of the second key concept from the second knowledge base using a fourth processing path of the second neural network, the fourth processing path comprising a second graph convolution layer (Graph representation 416 can include the node and relationship features determined in the node/relation representation 402. For example, each node represents node features n0, n1, n2, . . . , nm … The resulting graph representation 416 can be input to a GCN to map the graph representation to a search embedding [0044]; The graph representation may include nodes and edges, where each node corresponds to a different component [0005]); and generating a second unified concept representation including concatenating the third concept representation with the fourth concept representation (The semantic embedding can be concatenated with the geometric features 408, as described above with respect to FIG. 2, for that component [0043]).
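Claim 11's two processing paths and the concatenation step can be sketched as two parallel graph-convolution passes whose outputs are joined into one unified representation. In the sketch below the "hyperbolic" path is a plain Euclidean stand-in (a real HGCN would perform the aggregation in hyperbolic space), and all shapes and weights are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def gcn_layer(A, X, W):
    """One graph-convolution step: aggregate neighbor features (A @ X),
    then project with weight matrix W and apply a nonlinearity."""
    return np.tanh(A @ X @ W)

# toy knowledge base: 4 concepts, self-loops plus a chain of edges
A = np.eye(4) + np.array([[0, 1, 0, 0],
                          [1, 0, 1, 0],
                          [0, 1, 0, 1],
                          [0, 0, 1, 0]], dtype=float)
X = rng.standard_normal((4, 8))  # per-concept input features

h_hyp = gcn_layer(A, X, rng.standard_normal((8, 16)))  # stand-in for the hyperbolic path
h_euc = gcn_layer(A, X, rng.standard_normal((8, 16)))  # Euclidean convolution path

unified = np.concatenate([h_hyp, h_euc], axis=1)  # unified concept representation
print(unified.shape)  # (4, 32)
```

Concatenation along the feature axis preserves both geometries' views of each concept, which is the role the claim assigns to the unified representation.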
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Li and Cervantes to incorporate the teachings of Collomosse for the benefit of using a GCN graph to identify features of a graph by using the node features and edge features (Collomosse [0045]).

Regarding claim 12, Li, Cervantes and Collomosse teach the method of claim 11. Cervantes teaches wherein the second knowledge base (the relationship module 209 interacts with the graph module 201 to create a knowledge graph of a plurality of location entities of a database [0077]; In one embodiment, the relationship module 209 can optionally add additional explicit features (e.g., encoded node type, additional location data, etc.) to either the intermediate representation (if generated) [0080]. The Examiner notes relationship module 209 is a second knowledge base) comprises a target ontology (The relation can be any relation or relationship type specified in a location-based ontology [0081]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Li to incorporate the teachings of Cervantes for the benefit of predicting relationships between entities or other related data attributes given minimal additional data (Cervantes [0048]).

Regarding claim 13, Li, Cervantes and Collomosse teach the method of claim 12. Cervantes teaches wherein the first knowledge base comprises a source ontology (Each different data source and/or domain under which the data source falls can use different ontologies to describe the relationships between entities of a KG (knowledge graph) [0039]); and wherein the method further comprises updating (In this example, the neural network 1301 is trained, meaning that the weights and coefficients of the neurons and connections between the neurons of the different layers 1303, 1307, and 1311 have been adjusted to make accurate predictions [0120]), responsive to the predictive score being indicative of the first unified concept matching the second unified concept (machine learning tasks such as entity matching for combining data sources and/or relationship prediction [0136]; The new database 115 (combined database) will then contain data records of the multiple data sources 111 such that matched location entities between the multiple data sources 111 are marked as being the same entity [0045]), a mapping file that maps concepts of the target ontology to concepts of the source ontology (mapping service provide data (e.g., location graph data 105 such as a Knowledge Graph (KG) [0038]; Moreover, the applications of a location-based ontology are diverse [0045]), the updating comprising adding an indication that the second key concept maps to the first key concept (The geographic database 107 can be a master geographic database stored in a format that facilitates updating, maintenance, and development [0166]).
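Claim 13's update step (record that a target-ontology concept maps to a source-ontology concept once the predictive score indicates a match) reduces to a thresholded mapping update. The concept names and threshold below are hypothetical, chosen only to illustrate the mechanism:

```python
def update_mapping(mapping, source_concept, target_concept, score, threshold=0.9):
    """Add a target->source entry to the mapping file when the predictive
    score indicates the two unified concept representations match."""
    if score >= threshold:
        mapping.setdefault(target_concept, set()).add(source_concept)
    return mapping

mapping = {}
update_mapping(mapping, "src:Hospital", "tgt:MedicalFacility", score=0.95)
update_mapping(mapping, "src:Kiosk", "tgt:MedicalFacility", score=0.40)  # below threshold: ignored
print(mapping)  # {'tgt:MedicalFacility': {'src:Hospital'}}
```

Keeping the mapping keyed by target-ontology concept mirrors the claim's direction: concepts of the target ontology are mapped onto concepts of the source ontology.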
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Li to incorporate the teachings of Cervantes for the benefit of predicting relationships between entities or other related data attributes given minimal additional data (Cervantes [0048]).

8. Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Li et al. ("Neural inductive matrix completion with graph convolutional networks for miRNA-disease association prediction." Bioinformatics 36.8 (2020): 2538-2546, Advance Access Publication Date: 6 January 2020) in view of Cervantes et al. (US20220179882, filed 12/09/2020) and further in view of Fagundes et al. (US20180096252).

Regarding claim 16, Li and Cervantes teach the computer usable program product of claim 14. Cervantes does not explicitly teach wherein the stored program instructions are stored in a computer readable storage device in a server data processing system, and wherein the stored program instructions are downloaded in response to a request over a network to a remote data processing system for use in a computer readable storage device associated with the remote data processing system, further comprising: program instructions to meter use of the computer usable code associated with the request; and program instructions to generate an invoice based on the metered use.
Fagundes teaches wherein the stored program instructions are stored in a computer readable storage device in a server data processing system (The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer [0075]), and wherein the stored program instructions are downloaded in response to a request over a network to a remote data processing system for use in a computer readable storage device associated with the remote data processing system (program code 307 may also be automatically or semi-automatically deployed into computer system 301 by sending program code 307 to a central server (e.g., computer system 301) or to a group of central servers. Program code 307 may then be downloaded into client computers (not shown) that will execute program code 307 [0088]; Alternatively, program code 307 may be sent directly to the client computer via e-mail. Program code 307 may then either be detached to a directory on the client computer or loaded into a directory on the client computer by an e-mail option that selects a program that detaches program code 307 into the directory [0089]), further comprising: program instructions to meter use of the computer usable code associated with the request (cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts) [0051]); and program instructions to generate an invoice based on the metered use (Metering and Pricing 82 provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources [0066]).
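The Fagundes limitation in claim 16 (meter use of the downloaded code per request, then invoice the metered use) is a standard cloud billing pattern. The per-request rate below is invented for illustration and billed in integer cents to avoid rounding concerns:

```python
class UsageMeter:
    """Meter use of program code per request and bill for the metered use
    (cf. metering and pricing in a cloud computing environment)."""

    def __init__(self, rate_cents_per_request=5):
        self.rate = rate_cents_per_request  # hypothetical price per request
        self.requests = 0

    def record_request(self):
        """Meter one use of the code associated with a request."""
        self.requests += 1

    def invoice_cents(self):
        """Generate an invoice total based on the metered use."""
        return self.requests * self.rate

meter = UsageMeter()
for _ in range(3):          # three metered requests over the network
    meter.record_request()
print(meter.invoice_cents())  # 15
```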
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Li and Cervantes to incorporate the teachings of Fagundes for the benefit of a process for supporting computer infrastructure, integrating, hosting, maintaining, and deploying computer-readable code into the computer system 301, wherein the code in combination with the computer system 301 is capable of performing a method for a database-management system with artificially intelligent database administration (Fagundes [0085]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MORIAM MOSUNMOLA GODO, whose telephone number is (571) 272-8670. The examiner can normally be reached Monday-Friday, 8:00am-5:00pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michelle T. Bechtold, can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /M.G./Examiner, Art Unit 2148 /MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148

Prosecution Timeline

Jan 28, 2021: Application Filed
Jun 28, 2024: Non-Final Rejection (§103)
Oct 01, 2024: Applicant Interview (Telephonic)
Oct 01, 2024: Examiner Interview Summary
Oct 07, 2024: Response Filed
Jan 08, 2025: Final Rejection (§103)
Mar 04, 2025: Request for Continued Examination
Mar 10, 2025: Response after Non-Final Action
Mar 22, 2025: Non-Final Rejection (§103)
May 19, 2025: Applicant Interview (Telephonic)
May 19, 2025: Examiner Interview Summary
May 20, 2025: Response Filed
Sep 14, 2025: Final Rejection (§103)
Nov 17, 2025: Response after Non-Final Action
Jan 09, 2026: Request for Continued Examination
Jan 26, 2026: Response after Non-Final Action
Mar 05, 2026: Non-Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602586: SUPERVISORY NEURON FOR CONTINUOUSLY ADAPTIVE NEURAL NETWORK (2y 5m to grant; granted Apr 14, 2026)
Patent 12530583: VOLUME PRESERVING ARTIFICIAL NEURAL NETWORK AND SYSTEM AND METHOD FOR BUILDING A VOLUME PRESERVING TRAINABLE ARTIFICIAL NEURAL NETWORK (2y 5m to grant; granted Jan 20, 2026)
Patent 12511528: NEURAL NETWORK METHOD AND APPARATUS (2y 5m to grant; granted Dec 30, 2025)
Patent 12367381: CHAINED NEURAL ENGINE WRITE-BACK ARCHITECTURE (2y 5m to grant; granted Jul 22, 2025)
Patent 12314847: TRAINING OF MACHINE READING AND COMPREHENSION SYSTEMS (2y 5m to grant; granted May 27, 2025)
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 44%
With Interview: 78% (+33.4%)
Median Time to Grant: 4y 8m
PTA Risk: High
Based on 68 resolved cases by this examiner. Grant probability derived from career allow rate.
