Prosecution Insights
Last updated: April 19, 2026
Application No. 18/054,073

SIGN-AWARE RECOMMENDATION APPARATUS AND METHOD USING GRAPH NEURAL NETWORK

Final Rejection: §101, §103, §112

Filed: Nov 09, 2022
Examiner: BOSTWICK, SIDNEY VINCENT
Art Unit: 2124
Tech Center: 2100 — Computer Architecture & Software
Assignee: Industry-Academic Cooperation Foundation Yonsei University
OA Round: 2 (Final)

Grant Probability: 52% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 4y 7m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 52% — grants 52% of resolved cases (71 granted / 136 resolved; -2.8% vs TC avg)
Interview Lift: +38.2% for resolved cases with interview (strong)
Typical Timeline: 4y 7m avg prosecution; 68 currently pending
Career History: 204 total applications across all art units

Statute-Specific Performance

§101: 24.4% (-15.6% vs TC avg)
§103: 40.9% (+0.9% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 21.9% (-18.1% vs TC avg)

Tech Center averages are estimates • Based on career data from 136 resolved cases

Office Action

Grounds of rejection: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Remarks

This Office Action is responsive to Applicant's Amendment filed on December 29, 2025, in which claims 1-3, 5-13, and 15-19 are currently amended. Claims 4 and 14 are canceled. Claims 1-3, 5-13, and 15-19 are currently pending.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on November 8, 2019 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Arguments

Applicant's arguments with respect to the rejection of claims 1-3, 5-13, and 15-19 under 35 U.S.C. 101 based on amendment have been considered but are not persuasive. With respect to Applicant's arguments on pp. 15-16 of the Remarks submitted 12/29/2025 that the series of steps "obtains positive embedding vectors and negative embedding vectors - obtains concatenation embedding vectors - recommends an item to each user - acquires a signed graph including signed edges - partitions the signed graph into the positive graph - obtaining the positive embedding vectors are implemented as a graph neural network (GNN) […] not achievable as a purely mental process in the human mind", Examiner respectfully disagrees. Examiner asserts that the exact series of steps outlined by Applicant on p. 15 of the Remarks, which largely summarizes independent claims 1 and 11 of the instant application, can readily be performed entirely in the human mind, with or without the assistance of tools such as pen and paper. The human mind is exceptionally capable of creating graphs, with examples of graphs created without computers dating back as far as 1736 with Euler's Seven Bridges of Königsberg problem.
The instant claims are wholly directed towards generation of a graph for item recommendation, wherein item recommendation is also seen as a mental process. Examiner further notes MPEP 2106.07(a)(II): "employing well-known computer functions to execute an abstract idea, even when limiting the use of the idea to one particular environment, does not integrate the exception into a practical application". For at least these reasons and those further detailed below, Examiner asserts that it is reasonable and appropriate to maintain the rejection under 35 U.S.C. 101.

Applicant's arguments with respect to the rejection of claims 1-3, 5-13, and 15-19 under 35 U.S.C. 103 based on amendment have been considered. With respect to Applicant's arguments on p. 18 of the Remarks submitted 12/29/2025 that "the present application generates two graphs that share the same nodes […] inferring the claimed configuration of two graphs with opposite edge polarities from Tian […] is an unwarranted and hindsight-based assumption", Examiner respectfully disagrees. The instant claims explicitly recite partitioning a single bipartite graph into negative and positive edges ("generates a graph partitioned into a positive graph having the positive valued edges and a negative graph having the negative valued edges"), which is synonymous with using different subgraphs with different edge types (positive vs. negative value). Partitioning graphs into subgraphs is routine in the art, as is explicitly reinforced by Tian, who partitions the graph into subgraphs by edge type. The remaining arguments are moot in view of a new ground of rejection set forth below.

Claim Objections

Claims 1 and 6 are objected to because of the following informalities:
- Regarding claim 1, "couple to the processor" should read "coupled to the processor".
- Regarding claims 1 and 11, "connecting each user nodes and each item nodes" should read "connecting each user node and each item node".
- Regarding claim 1, "vectors are concatenated, thereby determines" should read "vectors are concatenated, thereby determining".
- Regarding claim 1, "code, executable by the processor, the processor, when executed" should read "code, when executed by the processor, causes the processor to".
- Regarding claim 6, "and setting the evaluation scores" should read "and sets the evaluation scores".

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-3, 5-13, and 15-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 1, "the processor, when executed, in a bipartite graph [...] classifies [...] obtains" is indefinite. It would be unclear to one of ordinary skill in the art how a processor could be executed in a bipartite graph, and it is grammatically unclear what "when executed" modifies. In the interest of further examination, the claim is interpreted as "the processor classifies using a bipartite graph".

Regarding claims 1 and 11, "obtains [...] embedding vectors vectorized by performing a neural network operation" is indefinite. One of ordinary skill in the art would expect vectors to already be vectorized, such that "embedding vectors vectorized" appears self-contradictory.
It is further unclear what the relationship between the neural network operation and the vectorization (both actions) is. In the interest of further examination, the claim is interpreted as simply "obtains […] embedding vectors".

Regarding claims 1 and 11, "a plurality of edges connecting each user nodes and each item nodes" is indefinite. It would be unclear what the graph structure is from the claim language, because the claim does not clearly identify what one edge connects to what. For example, this might describe a graph in which an edge connects one user node to one item node, a fully connected graph in which edges connect every user node to every item node, or it might be read as saying edges connect the two sets in some unspecified way. Since these interpretations are contradictory, the scope of the claim cannot be reasonably determined. In the interest of further examination, the claim is interpreted as "a plurality of edges, each edge connecting a user node of the plurality of user nodes and an item node of the plurality of item nodes".

Regarding claims 1 and 11, "wherein obtaining the positive embedding vectors are implemented as a graph neural network(GNN)" is indefinite. "Obtaining [...] are implemented as" does not agree in plurality and does not clearly identify what is implemented as the GNN: for example, is the obtaining step implemented by the GNN, is the structure of the GNN obtained by the embedding vectors, or is it something else altogether? Since these interpretations are contradictory, the scope of the claim cannot be reasonably determined. In the interest of further examination, the claim is interpreted as "obtaining the positive embedding vectors is based on a graph neural network".

Regarding claims 3 and 13, "obtaining the negative embedding vectors are implemented as a multi-layer perceptron (MLP)" is indefinite. The claim does not clearly identify what is implemented as the MLP: for example, is the obtaining step implemented by the MLP, is the structure of the MLP obtained by the embedding vectors, or is it something else altogether? Since these interpretations are contradictory, the scope of the claim cannot be reasonably determined. In the interest of further examination, the claim is interpreted as "obtaining the negative embedding vectors is based on a multi-layer perceptron".

Regarding claim 10, "The computer implemented recommendation apparatus according to claim 1, in recommending an item, the processor" is grammatically indefinite. It is unclear what "in recommending an item" is modifying and what is responsible for performing the calculating. In the interest of further examination, the claim is interpreted as "The computer implemented recommendation apparatus according to claim 1, wherein the recommendation further comprises: calculating". The remaining claims are rejected with respect to their dependence on the rejected claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 5-13, and 15-19 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter.

Regarding Claim 1: Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 1 is directed to an apparatus, which is a product, one of the statutory categories.

Step 2A Prong One Analysis: Claim 1, under its broadest reasonable interpretation, recites a series of mental processes and mathematical calculations and relationships.
For example, but for the generic computer components language, the above limitations in the context of this claim encompass neural network processing, including the following:

- "in a bipartite graph composed of a plurality of user nodes corresponding to each of a plurality of users, a plurality of item nodes respectively representing items, and a plurality of edges connecting each user nodes and each item nodes, each edge having a weight corresponding to an evaluation score given by a user for an item, classifies and partitions the plurality of edges into positive valued edges and negative valued edges according to the weight, and generates a graph partitioned into a positive graph having the positive valued edges and a negative graph having the negative valued edges" (observation, evaluation, and judgement);
- "obtains concatenation embedding vector in which the positive embedding vector and the negative embedding vectors are concatenated, thereby determines embedding positions of the plurality of user nodes and the plurality of item nodes in a virtual common embedding space" (observation, evaluation, and judgement);
- "recommends an item to each user based on a distance of each of the plurality of item nodes to each of the plurality of user nodes in the embedding space" (observation, evaluation, and judgement);
- "wherein, in generating the partitioned graph, the processor: acquires a signed graph including signed edges, by determining whether a weight of each of a plurality of edges in the bipartite graph is greater than or equal to a predetermined reference weight, setting edges having a weight greater than or equal to a reference weight as the positive valued edges" (observation, evaluation, and judgement) "and setting edges whose weight is less than the reference weight as the negative valued edges" (observation, evaluation, and judgement) "and partitions the signed graph into the positive graph composed of the plurality of user nodes, the plurality of item nodes and the positive valued edges, and the negative graph composed of the plurality of user nodes, the plurality of item nodes and the negative valued edges" (observation, evaluation, and judgement).

Therefore, claim 1 recites an abstract idea, which is a judicial exception.

Step 2A Prong Two Analysis: Claim 1 recites additional elements "A computer implemented recommendation apparatus comprising: a processor, and a non-transitory computer readable medium couple to the processor, the computer readable medium comprising code, executable by the processor, the processor, when executed" and "wherein obtaining the positive embedding vectors are implemented as a graph neural network (GNN)". However, these additional features are computer components recited at a high level of generality, such that they amount to no more than mere instructions to apply the judicial exception using a generic computer component. An additional element that merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, does not integrate the judicial exception into a practical application (see MPEP 2106.05(f)). Claim 1 also recites additional elements "obtains positive embedding vectors and negative embedding vectors, vectorized by performing a neural network operation on each of the positive graph and the negative graph using a pre-trained artificial neural network", which amount to gathering data, which is considered insignificant extra-solution activity (see MPEP 2106.05(g)). Therefore, claim 1 is directed to a judicial exception.

Step 2B Analysis: Claim 1 does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
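The partitioning step the analysis above walks through can be sketched in a few lines. This is a minimal illustration of the claimed logic only, not the applicant's implementation; the function and variable names are hypothetical.

```python
# Hypothetical sketch of the claimed partitioning step: edges are
# (user, item, evaluation_score) triples; scores at or above a reference
# weight become positive edges, the rest negative edges. Both output
# graphs share the same user/item node sets, mirroring the claim language.

def partition_signed_graph(edges, reference_weight):
    positive_edges = [(u, i) for (u, i, w) in edges if w >= reference_weight]
    negative_edges = [(u, i) for (u, i, w) in edges if w < reference_weight]
    users = {u for (u, _, _) in edges}
    items = {i for (_, i, _) in edges}
    positive_graph = {"users": users, "items": items, "edges": positive_edges}
    negative_graph = {"users": users, "items": items, "edges": negative_edges}
    return positive_graph, negative_graph

# Example: 5-star ratings with a reference weight of 3
edges = [("u1", "i1", 5), ("u1", "i2", 1), ("u2", "i1", 2), ("u2", "i3", 4)]
pos, neg = partition_signed_graph(edges, reference_weight=3)
# pos["edges"] == [("u1", "i1"), ("u2", "i3")]
# neg["edges"] == [("u1", "i2"), ("u2", "i1")]
```

Note that, under this reading, the two graphs differ only in their edge sets, which is the point disputed in the §103 response above.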
As discussed above with respect to the lack of integration of the abstract idea into a practical application, the additional elements recited in claim 1 amount to no more than mere instructions to apply the judicial exception using a generic computer component and insignificant extra-solution activity. The gathering and outputting of data is considered well-understood, routine, and conventional in the art (see MPEP 2106.05(d)(II)(i)). For the reasons above, claim 1 is rejected as being directed to non-patentable subject matter under §101. This rejection applies equally to independent claim 11, which recites a method, as well as to dependent claims 2, 3, 5-10, 12, 13, and 15-19. The additional limitations of the dependent claims are addressed briefly below:

Dependent claims 2 and 12 recite additional insignificant extra-solution activity of gathering and outputting data "wherein in embedding, the processor: obtains the positive embedding vectors, by receiving the positive graph indicating preference of the plurality of users for the plurality of items", as well as additional instructions to apply the judicial exception using generic computer components "performing neural network operations according to a pre-trained method, and vectorizing each of the plurality of user nodes and the plurality of item nodes," and additional observation, evaluation, and judgement "obtains the negative embedding vectors, by receiving the negative graph indicating non-preference of the plurality of users for the plurality of items, performing neural network operations according to a pre-trained method, and vectorizing each of the plurality of user nodes and the plurality of item nodes, obtains the concatenation embedding vectors, by performing neural network operations according to a pre-trained method, estimating positive importance and negative importance corresponding to the positive embedding vectors and the negative embedding vectors, respectively, and weighting the positive embedding vectors and the negative embedding vectors with the positive importance and the negative importance".

Dependent claims 3 and 13 recite additional instructions to apply the judicial exception using generic computer components "obtaining the negative embedding vectors are implemented as a multi-layer perceptron (MLP)."

Dependent claims 5 and 15 recite additional observation, evaluation, and judgement "wherein the reference weight is set to the median value of the evaluation scores of the plurality of users for each item".

Dependent claims 6 and 16 recite additional insignificant extra-solution activity of gathering and outputting data (see MPEP 2106.05(g)) "wherein in generating the partitioned graph, the processor, prior to the step of acquiring the signed graph, acquired the bipartite graph by receiving evaluation data including evaluation scores evaluated on a plurality of items by the plurality of users," which is well-understood, routine, and conventional in the art (see MPEP 2106.05(d)(II)(i)), as well as additional observation, evaluation, and judgement "creates, from the evaluation data, a plurality of edges connecting user nodes and item nodes according to the plurality of user nodes corresponding to each of the plurality of users, the plurality of item nodes corresponding to the plurality of items and whether each user evaluated each item, and setting the evaluation scores as weights of the created edges, thereby acquires the bipartite graph."

Dependent claims 7 and 17 recite additional mathematical calculations and relationships "trains the artificial neural network, and for training, the processor acquires a plurality of batches by acquiring a plurality of triplet samples composed of related item nodes, which are item nodes connected by edges to each of a plurality of user nodes, and unrelated item nodes, which are item nodes that are not connected by edges, in a signed graph in which edges of the bipartite graph are signed with the positive valued edges and the
negative valued edges; and calculates a sign-aware loss as a sum of a sign-aware Bayesian personalized ranking (BPR) loss calculated as a relationship in the common embedding space for the user nodes and related item nodes and unrelated item nodes, respectively, according to the sign of the related item nodes in the acquired triplet samples".

Dependent claim 8 recites additional mathematical calculations and relationships "the sign-aware training unit calculates a predicted preference (r̂_ui, r̂_uj) for each of the related item node (i) and the unrelated item node (j) of the user node (u) in the triplet sample (u, i, j) by, in the concatenation embedding vector, an inner product between a user embedding vector (z_u) corresponding to the user node (u) and a related embedding vector (z_i) for the related item node (i) and an inner product between the user embedding vector (z_u) and an unrelated embedding vector (z_j) for the unrelated item node (j), and according to the sign of the weight (w) of the edge connecting the user node (u) and the related item node (i), based on a ternary relation (>_u) defined by

>_u(i,j,w) ≜ {(i,j,w) | r̂_ui > r̂_uj if w > 0, and −r̂_ui > r̂_uj otherwise},

calculates a likelihood (p(·)) according to the ternary relation (>_u) by

p(>_u(i,j,w^s_ui) | Θ) ≜ σ(sgn(w^s_ui)·r̂_ui − r̂_uj)

(wherein sgn(·) is a sign function, σ(·) is a sigmoid function calculated as σ(x) = 1/(1 + exp(−x)), and Θ is a model parameter set obtained by training in the positive embedding part, the negative embedding part and the integration emphasis embedding part implemented by an artificial neural network), thereby calculates the sign-aware BPR loss according to

ℒ_0 = −Σ_{(u,i,j)∈D′_s} log p(>_u(i,j,w^s_ui) | Θ)".

Dependent claims 9 and 18 recite additional mathematical calculations and relationships "the sign-aware training unit obtains the sign-aware loss according to ℒ_0 + λ_reg·‖Θ‖² (wherein ‖·‖² is the L2 regularization function, and λ_reg is a hyperparameter for adjusting the regularization strength) and backpropagates the sign-aware loss".

Dependent claims 10 and 19 recite additional observation, evaluation, and judgement "the recommendation unit calculates a distance between each of the plurality of user nodes and a plurality of item nodes in the common embedding space, selects a predetermined number of item nodes in an adjacent order for each of the plurality of user nodes, and recommends items to each user".

Therefore, when considering the elements separately and in combination, they do not add significantly more to the inventive concept. Accordingly, claims 1-3, 5-13, and 15-19 are rejected under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 5, 10-13, 15, and 19 are rejected under 35 U.S.C. §103 as being unpatentable over the combination of Huang ("Signed Bipartite Graph Neural Networks", 2021) and He ("LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation", 2020).

[FIG. 1 of Huang] [FIG. 2 of Huang]

Regarding claim 1, Huang teaches:

in a bipartite graph composed of a plurality of user nodes corresponding to each of a plurality of users, ([p. 1 Abstract] "we propose a novel Signed Bipartite Graph Neural Networks (SBGNNs) to learn node embeddings for signed bipartite networks"; [p. 3 §3.1] "Users can purchase products from a seller and rate the seller with “Positive”, “Neutral”, or “Negative” scores.
In this dataset, U represents the buyers […] According to the definition of [5], we use the notation to denote a signed butterfly isomorphism class that represents the links between U and V". U nodes are interpreted as user nodes. See also FIG. 1 and 2. Links are explicitly taught as synonymous with edges between nodes in a bipartite graph.)

a plurality of item nodes respectively representing items, and a plurality of edges connecting each user nodes and each item nodes, ([p. 3 §3.1] "Users can purchase products from a seller and rate the seller with “Positive”, “Neutral”, or “Negative” scores. In this dataset, U represents the buyers, and V represents the sellers". V nodes are interpreted as item nodes (explicitly correlated with products/papers/votes, all of which are interpreted as items). See also FIG. 1 and 2.)

each edge having a weight corresponding to an evaluation score given by a user for an item, ([p. 3] "Users can purchase products from a seller and rate the seller with “Positive”, “Neutral”, or “Negative” scores […] Reviewers U can give “SA” (Strong Accept), “A” (Accept), “WA” (Weak Accept), “WR” (Weak Reject), “R” (Reject), and “SR” (Strong Reject) to papers V after reviewing. We regard “SA”, “A”, and “WA” as positive links and “SR”, “R” and “WR” as negative links". The seller/reviewer score is interpreted as an evaluation score corresponding to edge weight (positive/neutral/negative). For example, Strong Accept "SA" is interpreted as a score corresponding to positive edge weight.)

classifies and partitions the plurality of edges into positive valued edges and negative valued edges according to the weight, and generates a graph partitioned into a positive graph having the positive valued edges and a negative graph having the negative valued edges; ([p. 3 §3.1] "Reviewers U can give “SA” (Strong Accept), “A” (Accept), “WA” (Weak Accept), “WR” (Weak Reject), “R” (Reject), and “SR” (Strong Reject) to papers V after reviewing. We regard “SA”, “A”, and “WA” as positive links and “SR”, “R” and “WR” as negative links"; [p. 4 §4] "Consider a signed bipartite network, G = (U,V,E), where U = {u1, u2, ..., u|U|} and V = {v1, v2, ..., v|V|} represent two sets of nodes with the number of nodes |U| and |V|. E ⊂ U×V is the edges between U and V. E = E+ ∪ E− is the set of edges between the two sets of nodes U and V, where E+ ∩ E− = ∅; E+ and E− represent the sets of positive and negative edges". See also FIG. 1 and 2. The subgraph containing negative valued edges is interpreted as the negative graph; the subgraph containing positive valued edges is interpreted as the positive graph.)

obtains positive embedding vectors and negative embedding vectors, vectorized by performing a neural network operation on each of the positive graph and the negative graph using a pre-trained artificial neural network ([p. 7 Algorithm 1] "Input: Signed Bipartite Graph G(U,V,E); Encoder Aggregators Enc; […] Prepare original node embeddings […] for epoch = 1, …, T do […] for l = 0...L−1 do z_ui^(l+1) ← Enc […] back propagation, update parameters". See Algorithm 1, which shows explicit vectorization to embedding vectors from the positive and negative partitioned bipartite graph; the model is interpreted as pre-trained after epoch 1.)

obtains concatenation embedding vector in which the positive embedding vector and the negative embedding vectors are concatenated, thereby determines embedding positions of the plurality of user nodes and the plurality of item nodes in a virtual common embedding space; and ([p. 6] "For our vertex update function, we concatenate the messages from different neighborhoods with origin node features and apply it to an MLP to get the final node representation". See Eqn. 8, which shows positive and negative embedding vector concatenation.)

wherein, in generating the partitioned graph, the processor: acquires a signed graph including signed edges, ([p.
4 §4] "Consider a signed bipartite network, G = (U,V,E), where U = {u1, u2, ..., u|U|} and V = {v1, v2, ..., v|V|} represent two sets of nodes with the number of nodes |U| and |V|. E ⊂ U×V is the edges between U and V. E = E+ ∪ E− is the set of edges between the two sets of nodes U and V, where E+ ∩ E− = ∅; E+ and E− represent the sets of positive and negative edges".)

by determining whether a weight of each of a plurality of edges in the bipartite graph is greater than or equal to a predetermined reference weight, setting edges having a weight greater than or equal to a reference weight as the positive valued edges, ([p. 3 §3.1] "“Positive”, “Neutral”, or “Negative” scores […] Reviewers U can give “SA” (Strong Accept), “A” (Accept), “WA” (Weak Accept), “WR” (Weak Reject), “R” (Reject), and “SR” (Strong Reject) to papers V after reviewing. We regard “SA”, “A”, and “WA” as positive links and “SR”, “R” and “WR” as negative links". Setting edges to positive weight is interpreted as determining whether a weight of each of the plurality of edges in the graph is greater than a predetermined reference weight (zero/neutral by mathematical definition).)

and setting edges whose weight is less than the reference weight as the negative valued edges; ([p. 3 §3.1] "“Positive”, “Neutral”, or “Negative” scores […] Reviewers U can give “SA” (Strong Accept), “A” (Accept), “WA” (Weak Accept), “WR” (Weak Reject), “R” (Reject), and “SR” (Strong Reject) to papers V after reviewing. We regard “SA”, “A”, and “WA” as positive links and “SR”, “R” and “WR” as negative links". Setting edges to negative weight is interpreted as determining whether a weight of each of the plurality of edges in the graph is less than a predetermined reference weight (zero/neutral by mathematical definition).)

and partitions the signed graph into the positive graph composed of the plurality of user nodes, the plurality of item nodes and the positive valued edges, and the negative graph composed of the plurality of user nodes, the plurality of item nodes and the negative valued edges, ([p. 5 §4] "Consider a signed bipartite network, G = (U,V,E), where U = {u1, u2, ..., u|U|} and V = {v1, v2, ..., v|V|} represent two sets of nodes with the number of nodes |U| and |V|. E ⊂ U×V is the edges between U and V. E = E+ ∪ E− is the set of edges between the two sets of nodes U and V, where E+ ∩ E− = ∅; E+ and E− represent the sets of positive and negative edges". See also FIG. 1 and 2. The subgraph containing negative valued edges is interpreted as the negative graph; the subgraph containing positive valued edges is interpreted as the positive graph.)

wherein obtaining the positive embedding vectors are implemented as a graph neural network (GNN). ([Abstract] "Guided by these two perspectives, we propose a novel Signed Bipartite Graph Neural Networks (SBGNNs) to learn node embeddings for signed bipartite networks".)

However, Huang does not explicitly teach: A computer implemented recommendation apparatus comprising: a processor, and a non-transitory computer readable medium couple to the processor, the computer readable medium comprising code, executable by the processor, the processor, when executed, recommends an item to each user based on a distance of each of the plurality of item nodes to each of the plurality of user nodes in the embedding space.

He, in the same field of endeavor, teaches: A computer implemented recommendation apparatus comprising: a processor, and a non-transitory computer readable medium couple to the processor, the computer readable medium comprising code, executable by the processor, the processor, when executed, ([p.
1] "Our implementations are available in both TensorFlow and PyTorch". TensorFlow and PyTorch are computer instruction libraries that must necessarily be stored in a non-transitory computer readable medium coupled to a processor for executing the instructions.)

recommends an item to each user based on a distance of each of the plurality of item nodes to each of the plurality of user nodes in the embedding space; ([p. 1 Abstract] "In this work, we aim to simplify the design of GCN to make it more concise and appropriate for recommendation"; [p. 4] "the model prediction is defined as the inner product of user and item final representations".)

Huang and He are both directed towards bipartite graph neural networks; therefore, Huang and He are analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Huang with the teachings of He by using the SBGNN in Huang for a recommendation system. He provides additional motivation for the combination ([p. 3 §3] "we set the goal of developing a light yet effective model by including the most essential ingredients of GCN for recommendation. The advantages of being simple are several fold — more interpretable, practically easy to train and maintain, technically easy to analyze the model behavior and revise it towards more effective directions [...] We then provide an in-depth analysis of LightGCN to show the rationality behind its simple design. Lastly, we describe how to do model training for recommendation").
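The inner-product prediction quoted from He above amounts to scoring every user-item pair in the common embedding space and taking the closest items. The following is a minimal sketch of that scoring step, not He's actual LightGCN code; the function name and toy embeddings are hypothetical.

```python
import numpy as np

# Hypothetical sketch: given user and item embeddings in a common space,
# score each user-item pair by inner product (the "model prediction" in
# He's quoted passage) and recommend the top-k items per user.

def recommend_top_k(user_emb, item_emb, k=2):
    # user_emb: (n_users, d); item_emb: (n_items, d)
    scores = user_emb @ item_emb.T            # (n_users, n_items) inner products
    return np.argsort(-scores, axis=1)[:, :k]  # item indices, best first

user_emb = np.array([[1.0, 0.0], [0.0, 1.0]])
item_emb = np.array([[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]])
print(recommend_top_k(user_emb, item_emb))  # user 0's top pick is item 0; user 1's is item 1
```

A larger inner product corresponds to a closer (more preferred) item in the embedding space, which is how the claimed "distance"-based recommendation is mapped onto He's prediction rule in the rejection above.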
Regarding claim 2, the combination of Huang and He teaches The computer implemented recommendation apparatus according to claim 1, wherein in embedding, the processor: obtains the positive embedding vectors, by receiving the positive graph indicating preference of the plurality of users for the plurality of items, performing neural network operations according to a pre-trained method, and vectorizing each of the plurality of user nodes and the plurality of item nodes (Huang [p. 5 §4] "Consider a signed bipartite network, G = (U, V, E), where U = {u_1, u_2, ..., u_|U|} and V = {v_1, v_2, ..., v_|V|} represent two sets of nodes with the number of nodes |U| and |V|. E ⊂ U × V is the edges between U and V. E = E+ ∪ E− is the set of edges between the two sets of nodes U and V, where E+ ∩ E− = ∅; E+ and E− represent the sets of positive and negative edges." [p. 7 Algorithm 1] "Input: Signed Bipartite Graph G(U, V, E); Encoder Aggregators Enc; […] Prepare original node embeddings […] for epoch = 1, …, T do […] for l = 0...L − 1 do z^{l+1}_{u_i} ← Enc […] back propagation, update parameters"), obtains the negative embedding vectors, by receiving the negative graph indicating non-preference of the plurality of users for the plurality of items, performing neural network operations according to a pre-trained method, and vectorizing each of the plurality of user nodes and the plurality of item nodes (Huang [p. 5 §4] "Consider a signed bipartite network, G = (U, V, E), where U = {u_1, u_2, ..., u_|U|} and V = {v_1, v_2, ..., v_|V|} represent two sets of nodes with the number of nodes |U| and |V|. E ⊂ U × V is the edges between U and V. E = E+ ∪ E− is the set of edges between the two sets of nodes U and V, where E+ ∩ E− = ∅; E+ and E− represent the sets of positive and negative edges." [p.
7 Algorithm 1] "Input: Signed Bipartite Graph G(U, V, E); Encoder Aggregators Enc; […] Prepare original node embeddings […] for epoch = 1, …, T do […] for l = 0...L − 1 do z^{l+1}_{u_i} ← Enc […] back propagation, update parameters"), obtains the concatenation embedding vectors, by performing neural network operations according to a pre-trained method, estimating positive importance and negative importance corresponding to the positive embedding vectors and the negative embedding vectors, respectively, and weighting the positive embedding vectors and the negative embedding vectors with the positive importance and the negative importance (Huang [p. 5 §4] "Consider a signed bipartite network, G = (U, V, E), where U = {u_1, u_2, ..., u_|U|} and V = {v_1, v_2, ..., v_|V|} represent two sets of nodes with the number of nodes |U| and |V|. E ⊂ U × V is the edges between U and V. E = E+ ∪ E− is the set of edges between the two sets of nodes U and V, where E+ ∩ E− = ∅; E+ and E− represent the sets of positive and negative edges." [p. 7 Algorithm 1] "Input: Signed Bipartite Graph G(U, V, E); Encoder Aggregators Enc; […] Prepare original node embeddings […] for epoch = 1, …, T do […] for l = 0...L − 1 do z^{l+1}_{u_i} ← Enc […] back propagation, update parameters" [p. 8] "For signed or bipartite model (i.e., SiNE and BiNE), the information for sign and bipartite network structure can contribute to the node representation learning. SiNE is more effective than BiNE (e.g., AUC in Bonanza (0.6088 > 0.6026)), indicating that link sign information may be more important than the link relationship"). Regarding claim 3, the combination of Huang and He teaches The computer implemented recommendation apparatus according to claim 2, wherein obtaining the negative embedding vectors are implemented as a multi-layer perceptron (MLP) (Huang [p. 6] "For our vertex update function, we concatenate the messages from different neighborhoods with origin node features and apply it to an Mlp to get the final node representation").
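The claim 2 "concatenation embedding" step, as read here, can be sketched as follows: estimate a positive and a negative importance score, weight the two sign-specific embeddings, and concatenate them. This is a hypothetical illustration only; the scoring vector `w` stands in for a trained attention parameter and is an assumption, not drawn from Huang or He.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - np.max(x))               # shift for numerical stability
    return e / e.sum()

def concat_embedding(pos_emb: np.ndarray, neg_emb: np.ndarray,
                     w: np.ndarray) -> np.ndarray:
    """Weight the positive/negative embeddings by estimated importance,
    then concatenate into a single vector."""
    alpha = softmax(np.array([w @ pos_emb, w @ neg_emb]))
    return np.concatenate([alpha[0] * pos_emb, alpha[1] * neg_emb])

# Toy sign-specific embeddings for a single node.
pos = np.array([1.0, 0.0])                  # from the positive graph
neg = np.array([0.0, 1.0])                  # from the negative graph
w = np.array([0.5, 0.5])                    # hypothetical trained parameter
z = concat_embedding(pos, neg, w)           # 4-dimensional result
```

With equal importance scores the softmax yields weights of 0.5 each, so both sign-specific embeddings contribute equally to the concatenated vector.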
Regarding claim 5, the combination of Huang and He teaches The computer implemented recommendation apparatus according to claim 1, wherein the reference weight is set to a median value of the evaluation scores of the plurality of users for each item (Huang [p. 6 §5.1.3] "5.1.3 Aggregation Design. For message aggregation, it is commonly a differentiable, permutation invariant set function (e.g., mean, max, and sum) that take a countable message set {m_v(u_i) | u_i ∈ N(v)} as input; and output a message vector m_v. In this paper, we use mean (Mean) and graph attention (Gat) aggregation functions. For the Mean aggregation function, we get m_l(u_i) and m_l(v_i) by Mean the message of neighborhoods [...] where u is a relationship of links." Links are interpreted as synonymous with edges.). Regarding claim 10, the combination of Huang and He teaches The computer implemented recommendation apparatus according to claim 1, wherein in recommending an item, the processor calculates a distance between each of the plurality of user nodes and a plurality of item nodes in the common embedding space (He [p. 1 Abstract] "In this work, we aim to simplify the design of GCN to make it more concise and appropriate for recommendation"; [p. 2] "NGCF obtains L+1 embeddings to describe a user [...] an item. It then concatenates these L+1 embeddings to obtain the final user embedding and item embedding, using inner product to generate the prediction score"; [p. 4] "the model prediction is defined as the inner product of user and item final representations"), selects a predetermined number of item nodes in an adjacent order for each of the plurality of user nodes, and recommends items to each user (He [p. 4] "The model prediction is defined as the inner product of user and item final representations: y_ui = e_u^T · e_i, which is used as the ranking score for recommendation generation." Ranking item nodes by the inner product e_u^T · e_i places them in an adjacent order for each of the plurality of user nodes.).
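The claim 5 "reference weight" reading (sign each edge by comparing a user's rating to the per-item median) can be sketched as follows. The toy rating matrix and the tie-handling convention (a rating equal to the median counts as positive) are illustrative assumptions, not taken from the cited references.

```python
import numpy as np

def sign_edges(ratings: np.ndarray) -> dict:
    """Sign each (user, item) edge by comparing the rating to the
    per-item median of observed ratings (0 means 'not rated')."""
    signed = {}
    for j in range(ratings.shape[1]):
        col = ratings[:, j]
        rated = col[col > 0]
        ref = np.median(rated)               # per-item reference weight
        for i in np.nonzero(col)[0]:
            # Assumption: a rating at the median is treated as positive.
            signed[(int(i), j)] = +1 if col[i] >= ref else -1
    return signed

# Toy 3-user x 2-item rating matrix.
ratings = np.array([[5, 0],
                    [3, 2],
                    [1, 4]])
edges = sign_edges(ratings)
```

For item 0 the rated scores are {5, 3, 1} with median 3, so user 2's rating of 1 yields a negative edge while the others are positive.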
Regarding claims 11-13, 15, and 19, claims 11-13, 15, and 19 are directed towards the method performed by the apparatus of claims 1-3, 5, and 10, respectively. Therefore, the rejections applied to claims 1-3, 5, and 10 also apply to claims 11-13, 15, and 19. Claims 6 and 16 are rejected under 35 U.S.C. §103 as being unpatentable over the combination of Huang and He and in further view of Tian ("Exploiting Group Information for Personalized Recommendation with Graph Neural Networks", 2021). Regarding claim 6, the combination of Huang and He teaches The computer implemented recommendation apparatus according to claim 1. However, the combination of Huang and He does not explicitly teach wherein in generating the partitioned graph, the processor, prior to the step of acquiring the signed graph, acquires the bipartite graph by receiving evaluation data including evaluation scores evaluated on a plurality of items by the plurality of users, creates, from the evaluation data, a plurality of edges connecting user nodes and item nodes according to the plurality of user nodes corresponding to each of the plurality of users, the plurality of item nodes corresponding to the plurality of items and whether each user evaluated each item, and setting the evaluation scores as weights of the created edges, thereby acquires the bipartite graph. Tian, in the same field of endeavor, teaches in generating the partitioned graph, the processor, prior to the step of acquiring the signed graph, acquires the bipartite graph by receiving evaluation data including evaluation scores evaluated on a plurality of items by the plurality of users ([p. 7 §3.1] "In order to better characterize the relationship between users, products, and groups, first construct a user-product bipartite graph G_UI, a group-product bipartite graph G_CI, and a user-group bipartite graph G_UC, as shown in Figure 1.
User-product bipartite graph G_UI contains all the relationships between users and products, and A_UI is the adjacency matrix of G_UI." [p. 13 §4] "Since the Douban Movie dataset is scoring data, in order to adapt to the experimental environment of this chapter, it is transformed into implicit feedback data. The experiment only uses user scores greater than 3 as positive samples."), creates, from the evaluation data, a plurality of edges connecting user nodes and item nodes according to the plurality of user nodes corresponding to each of the plurality of users, the plurality of item nodes corresponding to the plurality of items and whether each user evaluated each item, and setting the evaluation scores as weights of the created edges, thereby acquires the bipartite graph ([p. 6] "GraphSAGE [15] used a batch training method, combined with random sampling of neighbor nodes, to control the number of nodes required for each calculation within a certain range, making distributed training of large-scale graph data possible. When aggregating features of neighbor nodes, the graph attention network (GAT) [51] used the attention mechanism to determine the weight of each neighbor node in the node, and it can adaptively control the contribution of neighbor nodes to the target node"). The combination of Huang and He as well as Tian are directed towards graph neural networks for collaborative filtering. Therefore, the combination of Huang and He as well as Tian are analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of the combination of Huang and He with the teachings of Tian by filtering negative and positive samples based on a threshold weight value, such as a user movie rating. Tian provides additional motivation for the combination: the technique helps ([p. 4 §2.2] "to model users' social relationships to improve the effectiveness of the recommendation system").
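The claim 6 construction, as mapped to the Tian quote above, can be sketched as follows: turn (user, item, score) evaluation records into weighted bipartite-graph edges, then split them by a fixed score threshold (scores above 3 treated as positive, per the quoted Douban setup). The records are toy data for illustration.

```python
def build_signed_bipartite(records, threshold=3):
    """Split (user, item, score) records into positive and negative
    edge lists, keeping the evaluation score as the edge weight."""
    positive, negative = [], []
    for user, item, score in records:
        edge = (user, item, score)           # score retained as edge weight
        (positive if score > threshold else negative).append(edge)
    return positive, negative

# Toy evaluation data: two users rating two items.
records = [("u1", "i1", 5), ("u1", "i2", 2), ("u2", "i1", 4)]
pos, neg = build_signed_bipartite(records)
```

The two returned lists correspond to the positive and negative subgraphs of the signed bipartite graph; the score threshold here plays the role that the per-item reference weight plays in claim 5.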
This motivation for combination also applies to the remaining claims which depend on this combination. Regarding claim 16, claim 16 is directed towards the method performed by the apparatus of claim 6. Therefore, the rejection applied to claim 6 also applies to claim 16. Claims 7 and 17 are rejected under 35 U.S.C. §103 as being unpatentable over the combination of Huang and He and in further view of Ying ("Graph Convolutional Neural Networks for Web-Scale Recommender Systems", 2018) and Sun ("A Framework for Recommending Accurate and Diverse Items Using Bayesian Graph Convolutional Neural Networks", 2020). Regarding claim 7, the combination of Huang and He teaches The computer implemented recommendation apparatus according to claim 2. However, the combination of Huang and He does not explicitly teach wherein the processor trains the artificial neural network, and for training, the processor acquires a plurality of batches by acquiring a plurality of triplet samples composed of related item nodes, which are item nodes connected by edges to each of a plurality of user nodes, and unrelated item nodes, which are item nodes that are not connected by edges, in a signed graph in which edges of the bipartite graph are signed with the positive valued edges and the negative valued edges; and calculates a sign-aware loss as a sum of a sign-aware [Bayesian personalized ranking (BPR) loss] calculated as a relationship in the common embedding space for the user nodes and, related item nodes and unrelated item nodes, respectively, according to the sign of the related item nodes in the acquired triplet samples, Bayesian personalized ranking (BPR) loss and a regularization loss according to regularization.
Ying, in the same field of endeavor, teaches wherein the processor trains the artificial neural network, and for training, the processor acquires a plurality of batches by acquiring a plurality of triplet samples composed of related item nodes, which are item nodes connected by edges to each of a plurality of user nodes, and unrelated item nodes, which are item nodes that are not connected by edges, in a signed graph in which edges of the bipartite graph are signed with the positive valued edges and the negative valued edges; and calculates a sign-aware loss as a sum of a sign-aware [Bayesian personalized ranking (BPR) loss] calculated as a relationship in the common embedding space for the user nodes and, related item nodes and unrelated item nodes, respectively, according to the sign of the related item nodes in the acquired triplet samples (See the loss equation (1) on p. 5, which explicitly relies on z_q, z_i, and z_{n_k}, where z_q is the query item, z_i is a related positive example, and z_{n_k} is a negative example. The equation is a form of the triplet loss L(z_u, z_i, z_n) = max(0, Δ + s(z_u, z_n) − s(z_u, z_i)), where s is an inner product.). The combination of Huang and He as well as Ying are directed towards graph neural networks for collaborative filtering. Therefore, the combination of Huang and He as well as Ying are analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of the combination of Huang and He with the teachings of Ying by using a triplet loss for training. While this would be an obvious design choice for one of ordinary skill in the art before the effective filing date of the claimed invention, Ying provides additional motivation for the combination ([p. 3] "We also introduce a number of new training techniques to improve performance and a MapReduce inference pipeline to scale up to graphs with billions of nodes").
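The two loss functions at issue in this rejection can be sketched side by side: the max-margin triplet loss cited from Ying, L = max(0, Δ + s(z_u, z_n) − s(z_u, z_i)), and the Bayesian Personalized Ranking (BPR) loss that Sun substitutes for it, −log σ(s(z_u, z_i) − s(z_u, z_n)). The margin value and toy embeddings below are illustrative assumptions.

```python
import math
import numpy as np

def score(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b)                      # similarity s = inner product

def triplet_loss(z_u, z_i, z_n, margin=1.0) -> float:
    """Max-margin triplet loss, as in the Ying citation."""
    return max(0.0, margin + score(z_u, z_n) - score(z_u, z_i))

def bpr_loss(z_u, z_i, z_n) -> float:
    """BPR loss: -log(sigmoid(s(u,i) - s(u,n))), as in the Sun citation."""
    x = score(z_u, z_i) - score(z_u, z_n)
    return math.log1p(math.exp(-x))          # = -log(sigmoid(x))

z_u = np.array([1.0, 0.0])                   # user embedding (toy)
z_i = np.array([0.8, 0.2])                   # related (positive) item
z_n = np.array([0.1, 0.9])                   # unrelated (negative) item
t = triplet_loss(z_u, z_i, z_n)              # hinge on the margin
b = bpr_loss(z_u, z_i, z_n)                  # smooth log-sigmoid variant
```

Both penalize ranking a negative item above a positive one for the same user, which is the sense in which the examiner treats them as members of the same category of triplet loss functions.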
However, the combination of Huang, He, and Ying does not explicitly teach Bayesian personalized ranking (BPR) loss and a regularization loss according to regularization. Sun, in the same field of endeavor, teaches Bayesian personalized ranking (BPR) loss ([p. 2032] "We base our model on a Bayesian formulation that incorporates graph uncertainty, the generative graph model of node copying, and the Bayesian Personalized Ranking loss." See also Eqn. 1.) and a regularization loss according to regularization ([p. 2036 §5] "we utilize self connections as an additional regularization term in the loss function"). The combination of Huang, He, and Ying as well as Sun are directed towards graph neural networks for collaborative filtering. Therefore, the combination of Huang, He, and Ying as well as Sun are analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of the combination of Huang, He, and Ying with the teachings of Sun by substituting the margin-based triplet loss in Ying with the Bayesian Personalized Ranking loss in Sun. Both loss functions belong to the same category of triplet loss functions, and it would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute one for the other. This is reinforced by Sun, who explicitly replaces the margin-based triplet loss in Ying with BPR and provides additional motivation for the combination ([p. 2036 §6.1] "In Figure 4, we can view the relative improvement of our proposed model over the best performing baselines (BPR, NGCF, PinSage-lstm) for different groups of users based on sparsity. The overall trend is that the algorithm offers the most relative improvement over the alternative methods for users with less items, hence for sparser users"). Regarding claim 17, claim 17 is directed towards the method performed by the apparatus of claim 7.
Therefore, the rejection applied to claim 7 also applies to claim 17. Allowable Subject Matter Claims 8, 9, and 18 are allowable over the prior art. Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SIDNEY VINCENT BOSTWICK whose telephone number is (571)272-4720. The examiner can normally be reached M-F 7:30am-5:00pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Miranda Huang can be reached on (571)270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SIDNEY VINCENT BOSTWICK/Examiner, Art Unit 2124 /MIRANDA M HUANG/Supervisory Patent Examiner, Art Unit 2124

Prosecution Timeline

Nov 09, 2022
Application Filed
Aug 27, 2025
Non-Final Rejection — §101, §103, §112
Nov 29, 2025
Interview Requested
Dec 09, 2025
Examiner Interview Summary
Dec 09, 2025
Applicant Interview (Telephonic)
Dec 29, 2025
Response Filed
Mar 09, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561604
SYSTEM AND METHOD FOR ITERATIVE DATA CLUSTERING USING MACHINE LEARNING
2y 5m to grant Granted Feb 24, 2026
Patent 12547878
Highly Efficient Convolutional Neural Networks
2y 5m to grant Granted Feb 10, 2026
Patent 12536426
Smooth Continuous Piecewise Constructed Activation Functions
2y 5m to grant Granted Jan 27, 2026
Patent 12518143
FEEDFORWARD GENERATIVE NEURAL NETWORKS
2y 5m to grant Granted Jan 06, 2026
Patent 12505340
STASH BALANCING IN MODEL PARALLELISM
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
52%
Grant Probability
90%
With Interview (+38.2%)
4y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 136 resolved cases by this examiner. Grant probability derived from career allow rate.
