Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendments
Claims 1-7 and 9-20 remain pending in the application.
Claims 1, 9, and 15 have been amended.
Claim 8 has been canceled.
The amendment filed 1/29/2026 is sufficient to overcome the 35 U.S.C. 101 rejections of claims 1-7 and 9-20. The previous rejections have been withdrawn.
Regarding Argument 1, concerning the 101 rejections, applicant argues that the claims integrate the judicial exceptions into the practical application of generating diversified content recommendations while maintaining recommendation accuracy using the GNN model with the improved embedding generation. Examiner agrees, and the 101 rejections have been withdrawn.
Regarding Argument 2, concerning the prior art rejections, applicant argues that none of the cited art teaches “assigning attention weights to embeddings generated by a plurality of layers of the GNN model to mitigate over-smoothing of the GNN model, wherein the embeddings are generated based on information associated with the aggregated subset of neighbors for each GNN item node;… and generating diversified content recommendations while maintaining recommendation accuracy using the GNN model with the improved embedding generation”.
Examiner respectfully disagrees because Gupta teaches assigning attention weights to embeddings generated by a plurality of layers of the GNN model to mitigate over-smoothing of the GNN model, wherein the embeddings are generated based on information associated with the aggregated subset of neighbors for each GNN item node (“Normalized classifiers e.g., based on cosine or capsules squashing which apply normalization on the weights of the final layer and the final representation of the input”, P0035. This cosine similarity is determined between item embeddings and normalized weighted embeddings, P0029-P0030. Weights based on a relevance score are assigned to normalized item embeddings, with weights based on a relevance score being interpreted as attention weights, P0043. GNN generates item embeddings based on information associated with each item including relevance scores, causal inference, recommended item details, as well as information related to similarity between each group of items, P0043, P0050); … and generating diversified content recommendations while maintaining recommendation accuracy using the GNN model with the improved embedding generation (computed loss corresponding to weights of the model based on relevance scores of items is used to train the GNN model for improved accuracy, P0008. “The goal of Session based Recommending System (SRS) is to recommend a list of most relevant items to a user based on the sequence of previously clicked items in the session…Various backbone architectures such as recurrent neural networks, graph neural networks (GNNs), … have been successfully used for developing SRS”, P0032. Implementation of SRS includes reducing the bias of recommending more popular items, resulting in a more diverse set of recommendations, P0003).
Applicant argues that Ding does not teach “selecting a subset of neighbors for each GNN item node on an embedding space for aggregation, wherein the subset of neighbors comprises diverse items and represents an entire set of neighbors of the GNN item node” because the cited portion of Ding does not include embedding-space-based subset selection that enforces diverse and representative selection. Examiner respectfully disagrees because Ding teaches GNNs with a neighborhood aggregation scheme, and a network encoder selecting node representations by recursively aggregating and compressing node features from diverse neighborhoods, P0038, P0064. Examiner also notes that the teachings of Ding may include an embedding space as described in P0063 of Ding. Examiner also notes that the rejection is over Gupta in view of Ding, and Gupta is clearly directed towards graph neural networks comprising embeddings corresponding to items (see Gupta P0006).
The full prior art rejections are outlined below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 6, 7, 9, 13-15, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gupta et al. (Pub. No.: US 20230169569 A1), hereafter Gupta, in view of Ding et al. (Pub. No.: US 20230117980 A1), hereafter Ding.
Regarding claims 1, 9, and 15, Gupta teaches a method for diversifying recommendations by improving embedding generation of a Graph Neural Network (GNN) model; a system comprising: at least one processor and at least one memory comprising computer-readable instructions that upon execution by the at least one processor cause the system to perform operations; and a non-transitory computer-readable storage medium storing computer-readable instructions that upon execution by a processor cause the processor to implement operations, the operations comprising (method, system, and computer-readable storage media including at least one processor are directed towards recommending items with the use of a GNN, P0004, P0014, P0032); … assigning attention weights to embeddings generated by a plurality of layers of the GNN model to mitigate over-smoothing of the GNN model, wherein the embeddings are generated based on information associated with the aggregated subset of neighbors for each GNN item node (“Normalized classifiers e.g., based on cosine or capsules squashing which apply normalization on the weights of the final layer and the final representation of the input”, P0035. This cosine similarity is determined between item embeddings and normalized weighted embeddings, P0029-P0030. Weights based on a relevance score are assigned to normalized item embeddings, with weights based on a relevance score being interpreted as attention weights, P0043. GNN generates item embeddings based on information associated with each item including relevance scores, causal inference, recommended item details, as well as information related to similarity between each group of items, P0043, P0050); performing loss reweighting by adjusting weight for each sample item during training the GNN model based on a category of the sample item to focus on learning of long-tail categories (“Another approach includes Long-tail classification: Normalized classifiers e.g., based on cosine or capsules squashing which apply normalization on the weights of the final layer and the final representation of the input”, P0035); and generating diversified content recommendations while maintaining recommendation accuracy using the GNN model with the improved embedding generation (computed loss corresponding to weights of the model based on relevance scores of items is used to train the GNN model for improved accuracy, P0008. “The goal of Session based Recommending System (SRS) is to recommend a list of most relevant items to a user based on the sequence of previously clicked items in the session…Various backbone architectures such as recurrent neural networks, graph neural networks (GNNs), … have been successfully used for developing SRS”, P0032. Implementation of SRS includes reducing the bias of recommending more popular items, resulting in a more diverse set of recommendations, P0003).
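For illustration only, the recited layer-attention step can be sketched as follows. The function name `layer_attention` and the norm-based scoring are illustrative assumptions, not the claimed invention or Gupta's disclosed implementation:

```python
import numpy as np

def layer_attention(layer_embeddings):
    """Combine per-layer GNN node embeddings with attention-style weights
    instead of using only the final layer, which can help mitigate
    over-smoothing (deep layers collapsing to nearly identical vectors).

    layer_embeddings: list of (num_nodes, dim) arrays, one per GNN layer.
    Returns the attention-weighted combination, shape (num_nodes, dim).
    """
    stacked = np.stack(layer_embeddings)                   # (L, N, D)
    # Score each layer by the mean norm of its embeddings (an illustrative
    # stand-in for a trained attention scorer).
    scores = np.linalg.norm(stacked, axis=2).mean(axis=1)  # (L,)
    weights = np.exp(scores) / np.exp(scores).sum()        # softmax over layers
    # Weighted sum over layers retains early-layer (less smoothed) signal.
    return np.tensordot(weights, stacked, axes=1)          # (N, D)
```

In a trained model the layer scores would be learned parameters; the softmax here merely shows how per-layer attention weights combine embeddings from shallow and deep layers.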
Gupta does not appear to explicitly teach selecting a subset of neighbors for each GNN item node on an embedding space for aggregation, wherein the subset of neighbors comprises diverse items and represents an entire set of neighbors of the GNN item node.
Ding teaches selecting a subset of neighbors for each GNN item node on an embedding space for aggregation, wherein the subset of neighbors comprises diverse items and represents an entire set of neighbors of the GNN item node (“In general, GNNs follow a neighborhood aggregation scheme, and the network encoder module 104 determines the set of node representations 212 by recursively aggregating and compressing node features from local neighborhoods”, P0038. Neighborhoods may be diverse, P0064).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Gupta and Ding before them, to include Ding’s specific teaching of a GNN following a neighborhood aggregation scheme being used to determine node representations in Gupta’s system of Recommending Items and Controlling an Associated Bias. One would have been motivated to make such a combination of a GNN following a neighborhood aggregation scheme being used to determine node representations (see Ding P0038, P0064) and using a graph neural network backbone architecture to recommend a list of most relevant items to a user (see Gupta P0032).
Regarding claims 6, 13, and 19, Gupta in view of Ding teaches the limitations of claims 1, 9, and 15 as outlined above. Ding further teaches learning the attention weights for the plurality of layers of the GNN model by an attention mechanism to optimize a loss function (“During training, the meta-learning framework 30 minimizes the above loss function to learn a generic classifier and adjust various parameters of the network encoder module 106 for a specific meta-training task”, P0052. A shared attention mechanism is used to determine attention weights, P0043).
Regarding claims 7, 14, and 20, Gupta in view of Ding teaches the limitations of claims 1, 9, and 15 as outlined above. Gupta further teaches wherein the performing loss reweighting by adjusting weight for each sample item during training the GNN model based on a category of the sample item further comprises: increasing weights for sample items belonging to the long-tail categories (relevance scores are increased for less popular items, with long-tail classification being implemented to normalize weights of the final layer. This is to mitigate conformity bias, the bias of more popular items being selected by users and recommending less popular items found in long-tail data, P0033-P0035).
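As an illustrative sketch of category-based loss reweighting of the kind recited in these claims (the `boost` multiplier and function names are illustrative assumptions, not Gupta's disclosed mechanism):

```python
import numpy as np

def reweight_losses(per_item_losses, categories, long_tail, boost=3.0):
    """Upweight training losses for sample items in long-tail categories so
    the model focuses learning on rarely recommended categories.

    per_item_losses: (N,) array of per-sample losses.
    categories: length-N sequence of category labels.
    long_tail: set of category labels treated as long-tail.
    boost: multiplier applied to long-tail samples (illustrative value).
    Returns the weighted average loss.
    """
    weights = np.array([boost if c in long_tail else 1.0 for c in categories])
    return (weights * per_item_losses).sum() / weights.sum()
```

Increasing `boost` increases the gradient contribution of long-tail samples relative to head-category samples during training.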
Claims 2-3, 10-11, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Gupta in view of Ding, and further in view of Shen et al. (Pub. No.: US 20150170295 A1), hereafter Shen.
Regarding claims 2, 10, and 16, Gupta in view of Ding teaches the limitations of claims 1, 9, and 15 as outlined above. Gupta does not appear to explicitly teach “selecting the subset of neighbors by maximizing a submodular function”.
Shen teaches selecting the subset of neighbors by maximizing a submodular function (“A different strategy is to use a submodular function to perform a greedy search for n nodes… The strategy acquires the n nodes by selecting one node at a time, each time choosing a node that provides the largest marginal increase to the influence level”, P0029).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Gupta, Ding, and Shen before them, to include Shen’s specific teaching of using a submodular function to perform a greedy search for n nodes in Gupta’s system of Recommending Items and Controlling an Associated Bias. One would have been motivated to make such a combination of using a submodular function to perform a greedy search for n nodes (see Shen P0029) and creating a graph comprising nodes detailing a user’s interest in items to determine which items are most relevant to the user (see Gupta P0045, P0047).
Regarding claims 3, 11, and 17, Gupta in view of Ding and further in view of Shen teaches the limitations of claims 1, 9, and 15 as outlined above. Shen further teaches approximating a maximum of the submodular function using a greedy algorithm, wherein the approximating a maximum of the submodular function using a greedy algorithm further comprises: adding an item with a largest marginal gain to the subset of neighbors every step of a greedy neighbor selection (“performing a greedy selection process to identify a node that maximizes a marginal gain of influence level over the initial node set”, P0010. During each iteration, nodes are given an opportunity to activate neighbor nodes. Nodes are only added as neighbors if they are activated, P0026. Nodes to be activated are selected based on a greedy algorithm dependent upon which node provides the largest marginal increase to the influence level, P0029); and performing a predetermined number of steps of the greedy neighbor selection to obtain the subset of neighbors, wherein the subset of neighbors is constrained to have items no greater than the predetermined number (greedy search algorithm runs n times, n being the number of nodes, to select n neighbors. Nodes are selected as neighbors only when providing the highest marginal increase to the influence level, P0029).
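The greedy selection described in these claims (adding the item with the largest marginal gain at each step, for a predetermined number of steps) can be sketched generically. The names and the set-coverage objective in the example are illustrative assumptions, not the claimed or cited implementation:

```python
def greedy_select(candidates, objective, k):
    """Greedy maximization of a submodular set function: at each of k steps,
    add the candidate with the largest marginal gain to the selected subset.
    For monotone submodular objectives this is the classic greedy
    (1 - 1/e)-approximation to the maximum.
    """
    selected = []
    for _ in range(min(k, len(candidates))):
        remaining = [c for c in candidates if c not in selected]
        base = objective(selected)
        # Pick the candidate whose addition increases the objective the most.
        best = max(remaining, key=lambda c: objective(selected + [c]) - base)
        selected.append(best)
    return selected
```

The subset is constrained to at most `k` items, mirroring the "predetermined number of steps" limitation.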
Claims 4, 5, 12, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Gupta in view of Ding and Shen, and further in view of Fatemi et al. (Pub. No.: US 20220101103 A1), hereafter Fatemi.
Regarding claims 4, 12, and 18, Gupta in view of Ding and Shen teaches the limitations of claims 2, 10, and 16 as outlined above. Gupta does not appear to explicitly teach “evaluating a diversity of the subset of neighbors by identifying a most similar item in the subset to every item in the entire set of neighbors and determining a sum of similarity values”.
Fatemi teaches evaluating a diversity of the subset of neighbors by identifying a most similar item in the subset to every item in the entire set of neighbors and determining a sum of similarity values (K nearest neighbor algorithm is used to identify the vectors with nodes on a graph that are most similar to each other for each index on the graph. The vectors result in a matrix detailing each vector of nodes that are most similar to each other, P0075).
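A minimal sketch of a k-nearest-neighbor most-similar-item matrix of the kind described in Fatemi's cited portion, assuming cosine similarity (the function name and the choice of similarity measure are illustrative assumptions):

```python
import numpy as np

def most_similar_matrix(vectors, k=1):
    """For each row vector, return the indices of its k most similar other
    vectors by cosine similarity, yielding an (N, k) index matrix of
    nearest neighbors (self-matches excluded).
    """
    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = unit @ unit.T                 # pairwise cosine similarities
    np.fill_diagonal(sims, -np.inf)      # exclude each vector matching itself
    # Sort each row by descending similarity and keep the top k indices.
    return np.argsort(-sims, axis=1)[:, :k]
```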
Accordingly, it would have been obvious to a person having ordinary skill in the
art before the effective filing date of the claimed invention, having the teachings of
Gupta, Ding, Shen, and Fatemi before them, to include Fatemi’s specific teaching of determining a matrix detailing each vector of nodes most similar to each other with a K nearest neighbor algorithm in Gupta’s system of Recommending Items and Controlling an Associated Bias. One would have been motivated to make such a combination of determining a matrix detailing each vector of nodes most similar to each other with a K nearest neighbor algorithm (see Fatemi P0075) and creating a graph comprising nodes detailing a user’s interest in items to determine which items are most relevant to the user and normalizing item groups based on a similarity value (see Gupta P0045, P0047).
Regarding claim 5, Gupta in view of Ding and Shen and further in view of Fatemi teaches the limitations of claim 4 as outlined above. Fatemi further teaches wherein the evaluating a diversity of the subset of neighbors is performed based on a facility location function defined as:
f(S_u) = Σ_{i ∈ N_u} max_{i' ∈ S_u} sim(i, i')
wherein S_u represents a subset of neighbors associated with a GNN item node u, N_u represents an entire set of neighbors of the GNN item node u, and sim(i, i') represents a similarity between a most similar item i' in the subset of neighbors and each item i in the entire set of neighbors of the GNN item node (K nearest neighbor algorithm is used to identify the vectors with nodes on a graph that are most similar to each other for each index on the graph. The vectors result in a matrix detailing each vector of nodes that are most similar to each other, P0075. The graph may be one processed by a GNN, P0073).
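For illustration, the facility location function recited in claim 5 can be evaluated directly from its definition; `sim` is a similarity function supplied by the caller, and the Python names are illustrative:

```python
def facility_location(subset, full_set, sim):
    """Facility-location diversity score: for each item i in the full
    neighbor set N_u, find the most similar item i' in the chosen subset
    S_u and sum those best-match similarities. A high score means the
    subset 'covers' (represents) the whole neighborhood well.
    """
    return sum(max(sim(i, j) for j in subset) for i in full_set)
```

With an identity-style similarity, the score simply counts how many neighbors are represented in the subset, which is the intuition behind using this function to evaluate subset diversity.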
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ISHAN MOUNDI whose telephone number is (703)756-1547. The examiner can normally be reached 8:30 A.M. - 5 P.M.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Ell can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/I.M./ Examiner, Art Unit 2141
/TAN H TRAN/Primary Examiner, Art Unit 2141