Prosecution Insights
Last updated: April 19, 2026
Application No. 18/142,897

GRAPH EMBEDDING METHOD AND SYSTEM THEREOF

Non-Final OA: §101, §103
Filed: May 03, 2023
Examiner: GODO, MORIAM MOSUNMOLA
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics
OA Round: 1 (Non-Final)
Grant Probability: 44% (Moderate)
Expected OA Rounds: 1-2
Expected Time to Grant: 4y 8m
Grant Probability with Interview: 78%

Examiner Intelligence

Career Allow Rate: 44% (grants 44% of resolved cases; 30 granted / 68 resolved; -10.9% vs TC avg)
Interview Lift: +33.4% (strong; resolved cases with interview vs. without)
Typical Timeline: 4y 8m average prosecution; 47 applications currently pending
Career History: 115 total applications across all art units

Statute-Specific Performance

§101: 16.1% (-23.9% vs TC avg)
§103: 56.7% (+16.7% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)
Tech Center averages are estimates; figures based on career data from 68 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

1. This Office action is in response to Application No. 18/142,897, filed on 05/03/2023. Claims 1-17 are presented for examination and are currently pending.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

3. Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Independent claim 1 is directed to a method and falls into one of the four statutory categories.

Step 2A, Prong 1: Claim 1 recites the following abstract ideas: changing the second embedding representation by reflecting a specific value into the second embedding representation (a mental process directed to changing the second embedding using a specific value reflected into it; this can be done with pen and paper); and generating an integrated embedding representation by aggregating the first embedding representation and the changed second embedding representation (a mathematical concept directed to aggregating a first embedding and a changed second embedding).

Step 2A, Prong 2: Claim 1 recites the following additional elements: a graph embedding method performed by at least one computing device, the graph embedding method comprising (this limitation is directed to mere instructions to apply an exception using a computer and does not integrate the abstract idea into a practical application; see MPEP 2106.05(f)); acquiring a first embedding representation and a second embedding representation of a target graph (this limitation is directed to insignificant extra-solution activity of mere data gathering and does not integrate the abstract idea into a practical application; see MPEP 2106.05(g)).

Step 2B: Claim 1 recites the following additional elements: a graph embedding method performed by at least one computing device, the graph embedding method comprising (mere instructions to apply an exception using a computer; this does not amount to significantly more than the judicial exception; see MPEP 2106.05(f)); acquiring a first embedding representation and a second embedding representation of a target graph (insignificant extra-solution activity of mere data gathering that is well understood, routine, and conventional; this does not amount to significantly more than the judicial exception; see MPEP 2106.05(d)(II), example i).

4. Dependent claim 2 is directed to a method and falls into one of the four statutory categories. Claim 2 recites the following abstract ideas: wherein one of the first embedding representation and the second embedding representation is generated by an embedding method that aggregates information of neighbor nodes that form the target graph (a mathematical concept directed to aggregating information of the neighbor nodes that form the target graph), and the other one of the first embedding representation and the second embedding representation is generated by an embedding method that reflects topology information of the target graph (a mental process; this can be done by observing the generated embedding and making a judgment as to whether it reflects the topology of the target graph). Claim 2 does not recite any additional elements.

5.
Dependent claim 3 is directed to a method and falls into one of the four statutory categories. Claim 3 does not recite any abstract ideas. Claim 3 recites the following additional element: wherein the first embedding representation and the second embedding representation are generated by embedding the target graph via different graph neural networks (GNNs) (this limitation merely links the use of a judicial exception to a particular technological environment or field of use; it does not integrate the abstract idea into a practical application and does not amount to significantly more than the judicial exception; see MPEP 2106.05(h)).

6. Dependent claim 4 is directed to a method and falls into one of the four statutory categories. Claim 4 does not recite any abstract ideas. Claim 4 recites the following additional element: wherein the specific value is an irrational number (this limitation is directed to a particular type or source of data, which is a field of use; it does not integrate the abstract idea into a practical application and does not amount to significantly more than the judicial exception; see MPEP 2106.05(h)).

7. Dependent claim 5 is directed to a method and falls into one of the four statutory categories. Claim 5 does not recite any abstract ideas. Claim 5 recites the following additional elements: wherein the specific value is a value based on a learnable parameter (a particular type or source of data, which is a field of use; this does not integrate the abstract idea into a practical application and does not amount to significantly more than the judicial exception; see MPEP 2106.05(h)), and wherein the graph embedding method further comprises: predicting a label for a predefined task based on the integrated embedding representation (linking the use of a judicial exception to a particular technological environment or field of use; this does not integrate the abstract idea into a practical application and does not amount to significantly more than the judicial exception; see MPEP 2106.05(h)); and updating a value of the learnable parameter based on a result of the predicting (mere instructions to apply an exception; this does not amount to significantly more than the judicial exception; see MPEP 2106.05(f)).

8. Dependent claim 6 is directed to a method and falls into one of the four statutory categories. Claim 6 recites the following abstract ideas: wherein the reflecting comprises reflecting the specific value into the second embedding representation based on a multiplication operation (a mathematical concept directed to reflecting a value into the second embedding using a multiplication operation), and the aggregating comprises aggregating the first embedding representation and the changed second embedding representation based on an addition operation (a mathematical concept directed to aggregating the embeddings using an addition operation). Claim 6 does not recite any additional elements.

9. Dependent claim 7 is directed to a method and falls into one of the four statutory categories. Claim 7 recites the following abstract idea: acquiring the first embedding representation and the second embedding representation by performing a resizing operation on at least one of the first embedding matrix and the second embedding matrix (a mental process directed to performing a resizing operation on the embeddings; this can be carried out with pen and paper). Under Step 2A, Prong 2, claim 7 recites the following additional elements: wherein the acquiring the first embedding representation and the second embedding representation comprises: acquiring a first embedding matrix and a second embedding matrix of the target graph (insignificant extra-solution activity of mere data gathering; this does not integrate the abstract idea into a practical application; see MPEP 2106.05(g)), the first embedding matrix and the second embedding matrix having different sizes (a particular type or source of data, which is a field of use; this does not integrate the abstract idea into a practical application; see MPEP 2106.05(h)). Under Step 2B: acquiring a first embedding matrix and a second embedding matrix of the target graph (insignificant extra-solution activity of mere data gathering that is well understood, routine, and conventional; this does not amount to significantly more than the judicial exception; see MPEP 2106.05(d)(II), example i), the first embedding matrix and the second embedding matrix having different sizes (a particular type or source of data, which is a field of use; this does not amount to significantly more than the judicial exception; see MPEP 2106.05(h)).

10. Dependent claim 8 is directed to a method and falls into one of the four statutory categories. Claim 8 does not recite any abstract ideas. Claim 8 recites the following additional element: wherein the resizing operation is implemented by a multilayer perceptron (mere instructions to apply an exception; this does not integrate the abstract idea into a practical application and does not amount to significantly more than the judicial exception; see MPEP 2106.05(f)).

11. Dependent claim 9 is directed to a method and falls into one of the four statutory categories.
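As an editor's aside, the resizing operation recited in claims 7 and 8 can be sketched in a few lines. The sketch stands in for the claimed multilayer perceptron with a single linear layer; the function name, weight values, and dimensions are illustrative assumptions, not the applicant's implementation:

```python
def resize(vec, weights):
    """Map an embedding of len(vec) dimensions to len(weights) dimensions
    with one linear (fully connected) layer: out[i] = weights[i] . vec.
    A full multilayer perceptron would stack such layers with nonlinearities."""
    return [sum(w * x for w, x in zip(row, vec)) for row in weights]

# Resize a 2-dimensional embedding to 3 dimensions with illustrative weights.
resized = resize([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

This is the sense in which two embedding matrices of different sizes can be brought to a common size before aggregation.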
Claim 9 recites the following abstract ideas: wherein the generating the integrated embedding representation comprises: generating a first embedding vector by performing a pooling operation on the first embedding representation (a mental process; the pooling can be performed with pen and paper); generating a second embedding vector, which has the same dimension quantity as the first embedding vector, by performing a pooling operation on the changed second embedding representation (a mental process, for the same reason); and generating a vector-type integrated embedding representation based on the first embedding vector and the second embedding vector (a mental process directed to generating an integrated embedding vector from the first and second embedding vectors). Claim 9 does not recite any additional elements.

12. Dependent claim 10 is directed to a method and falls into one of the four statutory categories. Claim 10 does not recite any abstract ideas. Under Step 2A, Prong 2, claim 10 recites the following additional elements: wherein the acquiring the first embedding representation and the second embedding representation comprises: acquiring the first embedding representation via a neighbor node information aggregation scheme-based GNN (insignificant extra-solution activity of data transmission; this does not integrate the abstract idea into a practical application; see MPEP 2106.05(g)); and acquiring the second embedding representation by extracting topology information of the target graph using the first embedding representation (insignificant extra-solution activity of data transmission; this does not integrate the abstract idea into a practical application; see MPEP 2106.05(g)). Under Step 2B, the same limitations are insignificant extra-solution activity of data transmission that is well understood, routine, and conventional, and do not amount to significantly more than the judicial exception; see MPEP 2106.05(d)(II), example i.

13. Dependent claim 11 is directed to a method and falls into one of the four statutory categories. Claim 11 recites the following abstract ideas: wherein the first embedding representation is a three-dimensional (3D) embedding matrix generated by aggregating a feature matrix for a node tuple (a mental process directed to aggregating a feature matrix; this can be performed with pen and paper), and extracting the topology information of the target graph by analyzing the 2D embedding matrix (a mental process; this can be done by observing the matrix and making a judgment as to what to extract from it). Under Step 2A, Prong 2, claim 11 recites the following additional element: wherein the generating the second embedding representation comprises: generating a two-dimensional (2D) embedding matrix by extracting diagonal elements of the 3D embedding matrix (insignificant extra-solution activity of data gathering; this does not integrate the abstract idea into a practical application; see MPEP 2106.05(g)). Under Step 2B, the same limitation is insignificant extra-solution activity of data gathering that is well understood, routine, and conventional, and does not amount to significantly more than the judicial exception; see MPEP 2106.05(d)(II), example i.

14. Dependent claim 12 is directed to a method and falls into one of the four statutory categories. Claim 12 recites the following abstract idea: wherein the extracting the topology information of the target graph is performed by calculating a persistence diagram (a mathematical relationship directed to calculating a persistence diagram to extract information). Claim 12 does not recite any additional elements.

15. Dependent claim 13 is directed to a method and falls into one of the four statutory categories. Claim 13 recites the following abstract ideas: wherein the generating the integrated embedding representation comprises: performing a pooling operation on the first embedding representation and the changed second embedding representation (a mental process; this can be carried out with pen and paper); and generating the integrated embedding representation by aggregating results of the pooling operation (a mental process directed to summing the results of the pooling operation; this can be done with pen and paper). Claim 13 does not recite any additional elements.

16. Dependent claim 14 is directed to a method and falls into one of the four statutory categories.
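As an editor's aside, the pool-then-aggregate pattern recited in claims 9 and 13 can be sketched as follows. Mean pooling is only one common pooling choice, and all names here are illustrative assumptions rather than the claimed method:

```python
def mean_pool(matrix):
    """Pool a node-by-feature embedding matrix into a single vector by
    averaging each feature column over the nodes (rows)."""
    n = len(matrix)
    return [sum(col) / n for col in zip(*matrix)]

def integrate(h1, h2):
    """Aggregate two pooled embedding vectors of equal dimension
    by elementwise addition."""
    return [a + b for a, b in zip(h1, h2)]

# Pool two 2-node, 2-feature embedding matrices, then aggregate the results.
v = integrate(mean_pool([[1.0, 3.0], [3.0, 5.0]]),
              mean_pool([[0.0, 2.0], [2.0, 4.0]]))
```

Pooling guarantees the two vectors share a dimension quantity, which is what makes the elementwise aggregation well defined.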
Claim 14 recites the following abstract ideas: changing the third embedding representation by reflecting another specific value into the third embedding representation (a mental process; this can be done with pen and paper); and generating the integrated embedding representation by aggregating the first embedding representation, the changed second embedding representation, and the changed third embedding representation (a mental process directed to summing the embeddings; this can be carried out with pen and paper). Under Step 2A, Prong 2, claim 14 recites the following additional element: wherein the generating the integrated embedding representation comprises: acquiring a third embedding representation of the target graph (insignificant extra-solution activity of data gathering; this does not integrate the abstract idea into a practical application; see MPEP 2106.05(g)). Under Step 2B, the same limitation is insignificant extra-solution activity of data gathering that is well understood, routine, and conventional, and does not amount to significantly more than the judicial exception; see MPEP 2106.05(d)(II), example i.

17. Dependent claim 15 is directed to a method and falls into one of the four statutory categories. Claim 15 recites the following abstract ideas: changing the third embedding representation through the k-th embedding representation by reflecting another specific value into each of them (a mental process; this can be done with pen and paper); and generating the integrated embedding representation by aggregating the first embedding representation, the changed second embedding representation, and the changed third through k-th embedding representations (a mental process directed to summing the embeddings; this can be carried out with pen and paper). Under Step 2A, Prong 2, claim 15 recites the following additional element: wherein the generating the integrated embedding representation comprises: acquiring a third embedding representation through a k-th embedding representation (k being a natural number of 3 or greater) (insignificant extra-solution activity of data gathering; this does not integrate the abstract idea into a practical application; see MPEP 2106.05(g)). Under Step 2B, the same limitation is insignificant extra-solution activity of data gathering that is well understood, routine, and conventional, and does not amount to significantly more than the judicial exception; see MPEP 2106.05(d)(II), example i.

18. Independent claim 16 is directed to a system and falls into one of the four statutory categories. Claim 16 is substantially similar to claim 1 and is rejected in the same manner, with the same reasoning applying. Claim 16 further recites "at least one processor" and "a memory configured to store program code executable by the at least one processor, the program code comprising: acquiring code configured to cause the at least one processor to"; these limitations are directed to generic computer components. They do not integrate the abstract idea into a practical application and do not amount to significantly more; see MPEP 2106.05(f).

19. Independent claim 17 is directed to an article of manufacture (a non-transitory computer-readable recording medium) and falls into one of the four statutory categories. Claim 17 is substantially similar to claim 1 and is rejected in the same manner, with the same reasoning applying. Claim 17 further recites "a non-transitory computer-readable recording medium storing program code executable by at least one processor, the program code comprising: acquiring code configured to cause the at least one processor to"; these limitations are directed to generic computer components. They do not integrate the abstract idea into a practical application and do not amount to significantly more; see MPEP 2106.05(f).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

20. Claims 1-4, 6, 8, 10, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Mao et al. ("Relational Reflection Entity Alignment," in Proceedings of the 29th ACM International Conference on Information & Knowledge Management, Oct. 19, 2020, pp. 1095-1104) in view of Manolache et al.
(US20220327108, filed 04/09/2021).

Regarding claim 1, Mao teaches a graph embedding method performed by at least one computing device, the graph embedding method comprising ("Furthermore, we propose a novel GNNs-based method, Relational Reflection Entity Alignment (RREA). RREA leverages Relational Reflection Transformation to obtain relation specific embeddings for each entity in a more efficient way," abstract): acquiring a first embedding representation and a second embedding representation of a target graph ("To create a global-aware graph representation, we stack multiple layers of GNNs to capture multi-hop neighborhood information. The embeddings from different layers ..." [equation image omitted], pg. 1101, right col., second para. The Examiner notes that in Fig. 4b, h_e1 is the first embedding representation and h_e2 is the second embedding representation); changing the second embedding representation ("we design a new transformation operation, Relational Reflection Transformation, which fulfills these two criteria. This new operation is able to reflect entity embeddings along different relational hyperplanes to construct relation specific embeddings," pg. 1096, left col., second para.) by reflecting a specific value into the second embedding representation ("It is easy to derive that the reflection of entity embedding h_e along the relational hyperplane P_r can be computed by M_r h_e," pg. 1101, left col., section 5.1. The Examiner notes that in Fig. 4b, M_r h_e2 is the changed second embedding representation; h_e2 has been reflected across the y-axis, which indicates that the x-coordinate of the point h_e2, as the specific value, has been multiplied by -1); and generating an integrated embedding representation by aggregating the first embedding representation and the changed second embedding representation ("... concatenate the summation of the relation embeddings with entity embeddings to get dual-aspect embeddings. In this paper, we adopt dual-aspect embeddings," pg. 1101, right col., third para.).

Mao does not explicitly teach a computing device. Manolache teaches a computing device ("computing appliance is a personal computer" [0058]; "Exemplary embedding transformations include nudging a vector by a small amount ε along one of the axes or along a transformation-specific predetermined direction. Other exemplary transformations may comprise a rotation and a reflection about a pre-determined plane" [0040]. The Examiner notes the small amount ε is a specific value reflected into an embedding). Since Mao, as the primary reference, teaches a reflection operation about a plane in Fig. 4, and Manolache, as the secondary reference, discloses that a transformation may comprise a reflection about a pre-determined plane [0040], it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Mao to incorporate the teachings of Manolache for the benefit of increasing the number of distinct transformations of a point ([0069], Fig. 6) in graph neural networks (GNNs) configured such that each hidden unit receives an input (e.g., an embedding vector) characterizing a respective token (Manolache [0047]).

Regarding claim 2, Mao and Manolache teach the graph embedding method of claim 1. Mao teaches wherein one of the first embedding representation and the second embedding representation is generated by an embedding method that aggregates information of neighbor nodes that form the target graph ("Many GNNs in entity alignment task contains the following equations:" [equation images omitted] "where N_e_i represents the set of neighboring nodes around e_i, W_l is the transformation matrix of layer l. Equation 2 is responsible for aggregating information from the neighboring nodes while Equation 3 transforms the node embeddings into better ones," pg. 1098, right col., first para.), and the other one of the first embedding representation and the second embedding representation is generated by an embedding method that reflects topology information of the target graph ("we design a new transformation operation, Relational Reflection Transformation. Let relation embedding h_r be a normal vector, there is one and only one hyperplane P_r and only one corresponding reflection matrix M_r," pg. 1101, left col., section 5.1).

Regarding claim 3, Mao and Manolache teach the graph embedding method of claim 1. Mao teaches wherein the first embedding representation and the second embedding representation are generated by embedding the target graph via different graph neural networks (GNNs) ("Furthermore, we propose a novel GNNs-based method, Relational Reflection Entity Alignment (RREA). RREA leverages Relational Reflection Transformation to obtain relation specific embeddings for each entity in a more efficient way," abstract. The Examiner notes "GNNs" indicates a plurality of graph neural networks).

Regarding claim 4, Mao and Manolache teach the graph embedding method of claim 1. Manolache teaches wherein the specific value is an irrational number ("Exemplary embedding transformations include nudging a vector by a small amount ε along one of the axes or along a transformation-specific predetermined direction. Other exemplary transformations may comprise a rotation and a reflection about a pre-determined plane" [0040]. The Examiner notes the small amount ε is a specific value reflected into an embedding, and the Applicant's specification discloses that "Here, the specific value of +ε may be reflected into the second embedding representation" [0087]).
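For orientation, the reflection M_r h_e quoted from Mao is a Householder-style reflection across the hyperplane whose unit normal is the relation embedding h_r, i.e. M_r = I - 2 h_r h_r^T. The sketch below is an editor's illustration of that standard construction under that reading, not code from either reference:

```python
def reflect(h_e, h_r):
    """Reflect embedding h_e across the hyperplane with unit normal h_r.
    Computes M_r @ h_e with M_r = I - 2 * outer(h_r, h_r), i.e.
    h_e - 2 * (h_e . h_r) * h_r."""
    dot = sum(a * b for a, b in zip(h_e, h_r))
    return [a - 2.0 * dot * b for a, b in zip(h_e, h_r)]

# Reflecting across the y-axis (unit normal along x) flips the x-coordinate,
# the same behavior the Office action reads out of Mao's Fig. 4b.
changed = reflect([1.0, 2.0], [1.0, 0.0])
```

Note that a point already on the hyperplane is left unchanged, which is why only the coordinate along the normal direction is "reflected."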
Regarding claim 6, Mao and Manolache teach the graph embedding method of claim 1. Mao teaches wherein the reflecting comprises reflecting the specific value into the second embedding representation based on a multiplication operation ("It is easy to derive that the reflection of entity embedding h_e along the relational hyperplane P_r can be computed by M_r h_e," pg. 1101, left col., section 5.1. The Examiner notes that in Fig. 4b, M_r h_e2 is the changed second embedding representation; h_e2 has been reflected across the y-axis, which indicates that the x-coordinate of the point h_e2, as the specific value, has been multiplied by -1), and the aggregating comprises aggregating the first embedding representation and the changed second embedding representation based on an addition operation ("concatenate the summation of the relation embeddings with entity embeddings to get dual-aspect embeddings. In this paper, we adopt dual-aspect embeddings," pg. 1101, right col., third para.).

Regarding claim 10, Mao and Manolache teach the graph embedding method of claim 1. Mao teaches wherein the acquiring the first embedding representation and the second embedding representation comprises: acquiring the first embedding representation via a neighbor node information aggregation scheme-based GNN ("Many GNNs in entity alignment task contains the following equations:" [equation images omitted] "where N_e_i represents the set of neighboring nodes around e_i, W_l is the transformation matrix of layer l. Equation 2 is responsible for aggregating information from the neighboring nodes while Equation 3 transforms the node embeddings into better ones," pg. 1098, right col., first para.); and acquiring the second embedding representation by extracting topology information of the target graph using the first embedding representation ("Dual-Aspect Embedding: ... entity embeddings generated by GNNs only contain the topological information ... Therefore, they concatenate the summation of the relation embeddings with entity embeddings to get dual-aspect embeddings. In this paper, we adopt dual-aspect embeddings," pg. 1101, right col., third para.).

Regarding claim 16, claim 16 is similar to claim 1 and is rejected in the same manner, with the same reasoning applying. Further, Manolache teaches at least one processor; and a memory configured to store program code executable by the at least one processor, the program code comprising: acquiring code configured to cause the at least one processor to ("Processor(s) 72 comprise a physical device (e.g. microprocessor, multi-core integrated circuit formed on a semiconductor substrate) configured to execute computational and/or logical operations with a set of signals and/or data. Such signals or data may be encoded and delivered to processor(s) 72 in the form of processor instructions, e.g., machine code" [0058]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Mao to incorporate the teachings of Manolache for the benefit of increasing the number of distinct transformations of a point ([0069], Fig. 6) in graph neural networks (GNNs) configured such that each hidden unit receives an input (e.g., an embedding vector) characterizing a respective token (Manolache [0047]).

Regarding claim 17, claim 17 is similar to claim 1 and is rejected in the same manner, with the same reasoning applying.
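The neighbor-aggregation step quoted from Mao (its Equation 2) can be sketched as below. The learned per-layer transformation matrix W_l of Equation 3 is deliberately omitted, and all names are illustrative assumptions:

```python
def aggregate_neighbors(features, adj):
    """One simplified GNN aggregation step: each node's new embedding is
    the sum of its neighbors' feature vectors (no learned weights)."""
    out = {}
    for node, neighbors in adj.items():
        dim = len(features[node])
        out[node] = [sum(features[n][d] for n in neighbors) for d in range(dim)]
    return out

# Tiny 3-node graph: "a" is connected to "b" and "c".
feats = {"a": [1.0], "b": [2.0], "c": [3.0]}
adj = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
agg = aggregate_neighbors(feats, adj)
```

Stacking several such steps, as Mao's quoted passage describes, is what lets each node's embedding capture multi-hop neighborhood information.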
Further, Manolache teaches a non-transitory computer-readable recording medium storing program code executable by at least one processor, the program code comprising: acquiring code configured to cause the at least one processor to (a non-transitory computer-readable medium stores instructions which, when executed by at least one hardware processor of a computer system, cause the computer system [0006]) It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Mao to incorporate the teachings of Manolache for the benefit of increasing the number of distinct transformations of a point ([0069], Fig. 6) in graph neural networks (GNN) configured such that each hidden unit receives an input (e.g., embedding vector) characterizing a respective token (Manolache [0047]) 21. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Mao et al. (Relational reflection entity alignment. InProceedings of the 29th ACM international conference on information & knowledge management 2020 Oct 19 (pp. 1095-1104)) in view of Manolache et al. (US20220327108 filed 04/09/2021) and further in view of Yu et al. ("Knowledge embedding based graph convolutional network." Proceedings of the web conference 2021, April 19 - 23, 2021) Regarding claim 5, Mao and Manolache teaches the graph embedding method of claim 1, Mao and Manolache does not explicitly teach the limitations of claim 5. Yu teaches wherein the specific value is a value based on a learnable parameter (Wl the matrix of model parameters to be learned by the GCN, pg. 1621, left col., section 3.1), and wherein the graph embedding method further comprises: predicting a label for a predefined task based on the integrated embedding representation (Entity Classification is the task of predicting the labels of entities in a given knowledge graph (pg. 
1624, left col., last para.); … both entity embeddings and relation embeddings in our model are used to enforce optimization of each other in a recursive aggregation process, pg. 1620, left col., third para.); and updating a value of the learnable parameter based on a result of the predicting (Analogous to Equation 3, if we denote h^l_v the embedding of entity v at layer l, the entity updating rules are: [equation image] … The relation updating rules can be defined in a similar manner: [equation image], pg. 1621, section 3.2).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Mao and Manolache to incorporate the teachings of Yu for the benefit of capturing the rich semantics of heterogeneous relations and learning better context-based relation embeddings (Yu, pg. 1622, right col., first para.).

22. Claims 7 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Mao et al. (Relational reflection entity alignment. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2020 Oct 19 (pp. 1095-1104)) in view of Manolache et al. (US20220327108, filed 04/09/2021) and further in view of Huang et al. ("Long-short graph memory network for skeleton-based action recognition." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2020).

Regarding claim 7, Mao and Manolache teach the graph embedding method of claim 1; Mao and Manolache do not teach the limitations of claim 7. Huang teaches wherein the acquiring the first embedding representation and the second embedding representation comprises (We build our model based on Bi-LSGM, then combine the forward hidden states → S and the backward hidden states ← S together, pg. 648, right col., third para.
The Examiner notes the forward hidden states are the first embedding representation and the backward hidden states are the second embedding representation): acquiring a first embedding matrix and a second embedding matrix of the target graph (It is worth noticing that the hidden states and memory cells in the LSTM store data in the form of graph matrices (pg. 646, right col., first para.); we aim to improve the ability of LSTM to extract spatial information by embedding the GCN layer in the LSTM cell, pg. 647, right col., section 3. Methodology), the first embedding matrix and the second embedding matrix having different sizes; and acquiring the first embedding representation and the second embedding representation by performing a resizing operation on at least one of the first embedding matrix and the second embedding matrix (To calculate the feature map of LSGM, we compress the feature dimension and resize S via 2 FC layers (pg. 648, right col., fifth para.); The Examiner notes resizing S indicates there are different sizes for both the forward hidden states, which are the first embedding representation, and the backward hidden states, which are the second embedding representation).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Mao and Manolache to incorporate the teachings of Huang for the benefit of extracting temporal and spatial features and capturing high-level node representations (Huang, pg. 651, conclusion).

Regarding claim 8, Mao, Manolache and Huang teach the graph embedding method of claim 7. Huang teaches wherein the resizing operation is implemented by a multilayer perceptron (Therefore, we embed the graph convolution layer into the LSTM cell to capacitate it to extract spatial features, which is our LSGM cell (pg.
648, left col., second to the last para.); To calculate the feature map of LSGM, we compress the feature dimension and resize S via 2 FC layers (pg. 648, right col., fifth para. The Examiner notes the fully connected (FC) layers are a multilayer perceptron). The same motivation to combine as for dependent claim 7 applies here.

23. Claims 9 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Mao et al. (Relational reflection entity alignment. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2020 Oct 19 (pp. 1095-1104)) in view of Manolache et al. (US20220327108, filed 04/09/2021) and further in view of Gong et al. (US20240028631, PCT filed 10/05/2021).

Regarding claim 9, Mao, Manolache and Huang teach the graph embedding method of claim 7; Mao, Manolache and Huang do not explicitly teach the limitations of claim 9. Gong teaches wherein the generating the integrated embedding representation comprises: generating a first embedding vector by performing a pooling operation on the first embedding representation (After getting the sentence embedding 210 and the entity embedding 216 … as described above, two additional pooling layers 218, 220 can be applied, respectively, to compress the sentence and entity embedding lengths further [0053]. The Examiner notes sentence embedding 210 is the first embedding representation); generating a second embedding vector, which has the same dimension quantity as the first embedding vector (different entity types have the same embedding [0015]), by performing a pooling operation on the changed second embedding representation (After getting the sentence embedding 210 and the entity embedding 216 … as described above, two additional pooling layers 218, 220 can be applied, respectively, to compress the sentence and entity embedding lengths further [0053].
The Examiner notes entity embedding 216 is the second embedding representation); and generating a vector-type integrated embedding representation based on the first embedding vector and the second embedding vector (In a next step, it may be provided that both the pooled sentence embedding 222 and the pooled entity embedding 224 are given as input to a text aggregator 226 to aggregate the textual embedding [0054]).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Mao, Manolache and Huang to incorporate the teachings of Gong for the benefit of improving and further developing a system in which similar documents are identified in a highly efficient and objective way (Gong [0009]).

Regarding claim 13, Mao, Manolache and Huang teach the graph embedding method of claim 1; Mao, Manolache and Huang do not explicitly teach the limitations of claim 13. Gong teaches wherein the generating the integrated embedding representation comprises: performing a pooling operation on the first embedding representation and the changed second embedding representation (After getting the sentence embedding 210 and the entity embedding 216 … as described above, two additional pooling layers 218, 220 can be applied, respectively, to compress the sentence and entity embedding lengths further [0053]); and generating the integrated embedding representation by aggregating results of the pooling operation (In a next step, it may be provided that both the pooled sentence embedding 222 and the pooled entity embedding 224 are given as input to a text aggregator 226 to aggregate the textual embedding [0054]).
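The pool-then-aggregate pattern mapped to claims 9 and 13 above can be illustrated with a short sketch. Mean pooling and concatenation are aggregation choices assumed purely for illustration; neither the claims nor the Gong reference prescribes these particular operations, and the function name is hypothetical.

```python
import numpy as np

def pool_and_aggregate(first_rep, second_rep):
    """Pool each embedding representation to a fixed-length vector
    (mean over the node axis), then aggregate the two pooled vectors
    by concatenation into one integrated embedding vector."""
    first_vec = first_rep.mean(axis=0)    # pooling over nodes
    second_vec = second_rep.mean(axis=0)  # pooling over nodes
    return np.concatenate([first_vec, second_vec])

first_rep = np.arange(12, dtype=float).reshape(4, 3)   # 4 nodes, dim 3
second_rep = np.arange(6, dtype=float).reshape(2, 3)   # 2 nodes, dim 3
integrated = pool_and_aggregate(first_rep, second_rep)
print(integrated.shape)  # (6,)
```

Pooling first lets the two representations have different node counts, as in the claims, while still producing a fixed-size integrated vector.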
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Mao, Manolache and Huang to incorporate the teachings of Gong for the benefit of improving and further developing a system in which similar documents are identified in a highly efficient and objective way (Gong [0009]).

24. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Mao et al. (Relational reflection entity alignment. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2020 Oct 19 (pp. 1095-1104)) in view of Manolache et al. (US20220327108, filed 04/09/2021) and further in view of Dalli et al. (US20210232915).

Regarding claim 11, Mao, Manolache and Huang teach the graph embedding method of claim 10; Mao, Manolache and Huang do not explicitly teach the limitations of claim 11. Dalli teaches wherein the first embedding representation is a three-dimensional (3D) embedding matrix generated by aggregating a feature matrix for a node tuple (An exemplary embodiment of a CNN-XNN reconstruction application in medical imaging may be used to denoise MRI or PET scans and additionally reconstruct a 3D model from one or more 2D image slices [0072]), and wherein the generating the second embedding representation comprises: generating a two-dimensional (2D) embedding matrix by extracting diagonal elements of the 3D embedding matrix (diagonal elements are extracted from matrix 317, Fig. 3B); and extracting the topology information of the target graph by analyzing the 2D embedding matrix (extract the most pertinent information and track multiple events and objects across time and space for the whole corpus being analyzed [0120]).
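The claim 11 step of reducing a 3D embedding matrix to a 2D matrix by extracting diagonal elements can be sketched as below. The axis layout (node x node x feature) is an assumption made for the example; the claim does not fix a particular layout.

```python
import numpy as np

def extract_diagonal_plane(emb_3d):
    """Take an (N, N, D) embedding tensor and keep only the diagonal
    node pairs, yielding an (N, D) matrix of per-node feature vectors."""
    # np.diagonal over the two node axes returns shape (D, N); transpose.
    return np.diagonal(emb_3d, axis1=0, axis2=1).T

emb_3d = np.arange(2 * 2 * 3, dtype=float).reshape(2, 2, 3)
plane = extract_diagonal_plane(emb_3d)
print(plane)  # rows are emb_3d[0, 0, :] and emb_3d[1, 1, :]
```

The diagonal entries correspond to each node paired with itself, which is one natural way to collapse pairwise information into per-node topology features.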
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Mao, Manolache and Huang to incorporate the teachings of Dalli in order to implement a practical and resource-efficient transfer to multi-dimensional data (Dalli [0047]).

25. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Mao et al. (Relational reflection entity alignment. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2020 Oct 19 (pp. 1095-1104)) in view of Manolache et al. (US20220327108, filed 04/09/2021) and further in view of Neill (US20210374143).

Regarding claim 12, Mao, Manolache and Huang teach the graph embedding method of claim 10; Mao, Manolache and Huang do not explicitly teach the limitations of claim 12. Neill teaches wherein the extracting the topology information of the target graph is performed by calculating a persistence diagram (The compute module 114 subsequently stores sub-graphs of the CPS-G dataset (in-memory 118 or persistent storage database 128) [0115]).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Mao, Manolache and Huang to incorporate the teachings of Neill for the benefit of performing real-time data analytics and analysis, including efficient implementation and execution of Graph Neural Network (GNN) algorithms (Neill [0007]).

26. Claims 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Mao et al. (Relational reflection entity alignment. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2020 Oct 19 (pp. 1095-1104)) in view of Manolache et al. (US20220327108, filed 04/09/2021) and further in view of Iyer et al.
(US20210073632, filed 11/18/2020).

Regarding claim 14, Mao, Manolache and Huang teach the graph embedding method of claim 1; Mao, Manolache and Huang do not explicitly teach the limitations of claim 14. Iyer teaches wherein the generating the integrated embedding representation (aggregates the block embedding(s) to generate semantic embedding(s) [0076]) comprises: acquiring a third embedding representation of the target graph (generate a third block embedding based on the third code block [0111]); changing the third embedding representation by reflecting another specific value into the third embedding representation (The third semantic embedding 528 can additionally or alternatively be associated with code snippets of other semantic concepts of computation, such as multiplication, exponents, arithmetic transformations, etc. [0053]); and generating the integrated embedding representation by aggregating the first embedding representation, the changed second embedding representation, and the changed third embedding representation (first block embedding corresponding to the first code block and the second block embedding corresponding to the second code block [0101]; The third semantic embedding 528 can additionally or alternatively be associated with code snippets of other semantic concepts of computation, such as multiplication, exponents, arithmetic transformations, etc. [0053]; The semantic embedding of the sum operation can be an aggregation of one or more block embeddings corresponding to code blocks of the sum operation [0038]).
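The claim 14/15 pattern of reflecting a specific value into each additional embedding representation and then aggregating all k representations can be sketched as follows. Scalar multiplication as the "reflecting" step and elementwise summation as the aggregation are assumptions made for illustration only; the claims leave both operations open.

```python
import numpy as np

def integrate_embeddings(reps, values):
    """Change each representation after the first by reflecting a
    specific value into it (here: scalar scaling), then aggregate all
    of them by elementwise summation into one integrated embedding."""
    first, rest = reps[0], reps[1:]
    changed = [v * r for v, r in zip(values, rest)]  # reflect the values
    return first + sum(changed)                      # aggregate

# k = 3 representations, each 2 nodes x 3 features.
reps = [np.ones((2, 3)) for _ in range(3)]
integrated = integrate_embeddings(reps, values=[0.5, 2.0])
print(integrated[0, 0])  # 1 + 0.5 + 2.0 = 3.5
```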
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Mao, Manolache and Huang to incorporate the teachings of Iyer for the benefit of using graphs as effective representations for graph neural networks (GNNs) used to learn latent features and/or semantic information [0022] and improving the efficiency of using a computing device by generating code semantics for input using self-supervised learning techniques (Iyer [0092]).

Regarding claim 15, Mao, Manolache and Huang teach the graph embedding method of claim 1; Mao, Manolache and Huang do not explicitly teach the limitations of claim 15. Iyer teaches wherein the generating the integrated embedding representation (aggregates the block embedding(s) to generate semantic embedding(s) [0076]) comprises: acquiring a third embedding representation through a k-th embedding representation (k being a natural number of 3 or greater) (For example, the third semantic embedding 528 is associated with code snippets of a third semantic concept [0053]; For example, the fourth semantic concept may be arrays [0056]; semantic concept representations (e.g., semantic embeddings) [0045]); changing the third embedding representation through the k-th embedding representation by reflecting another specific value into the third embedding representation through the k-th embedding representation (The third semantic embedding 528 can additionally or alternatively be associated with code snippets of other semantic concepts of computation, such as … arithmetic transformations, etc.
[0053]); and generating the integrated embedding representation by aggregating the first embedding representation, the changed second embedding representation, and the changed third embedding representation through the k-th embedding representation (first block embedding corresponding to the first code block and the second block embedding corresponding to the second code block [0101]; The third semantic embedding 528 can additionally or alternatively be associated with code snippets of other semantic concepts of computation, such as … arithmetic transformations, etc. [0053]; The semantic embedding of the sum operation can be an aggregation of one or more block embeddings corresponding to code blocks of the sum operation [0038]).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Mao, Manolache and Huang to incorporate the teachings of Iyer for the benefit of using graphs as effective representations for graph neural networks (GNNs) used to learn latent features and/or semantic information [0022] and improving the efficiency of using a computing device by generating code semantics for input using self-supervised learning techniques (Iyer [0092]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MORIAM MOSUNMOLA GODO, whose telephone number is (571) 272-8670. The examiner can normally be reached Monday-Friday, 8:00am-5:00pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michelle T. Bechtold, can be reached at (571) 431-0762.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/M.G./ Examiner, Art Unit 2148

/MICHELLE T BECHTOLD/ Supervisory Patent Examiner, Art Unit 2148

Prosecution Timeline

May 03, 2023
Application Filed
Mar 19, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602586
SUPERVISORY NEURON FOR CONTINUOUSLY ADAPTIVE NEURAL NETWORK
2y 5m to grant Granted Apr 14, 2026
Patent 12530583
VOLUME PRESERVING ARTIFICIAL NEURAL NETWORK AND SYSTEM AND METHOD FOR BUILDING A VOLUME PRESERVING TRAINABLE ARTIFICIAL NEURAL NETWORK
2y 5m to grant Granted Jan 20, 2026
Patent 12511528
NEURAL NETWORK METHOD AND APPARATUS
2y 5m to grant Granted Dec 30, 2025
Patent 12367381
CHAINED NEURAL ENGINE WRITE-BACK ARCHITECTURE
2y 5m to grant Granted Jul 22, 2025
Patent 12314847
TRAINING OF MACHINE READING AND COMPREHENSION SYSTEMS
2y 5m to grant Granted May 27, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
44%
Grant Probability
78%
With Interview (+33.4%)
4y 8m
Median Time to Grant
Low
PTA Risk
Based on 68 resolved cases by this examiner. Grant probability derived from career allow rate.
