DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The amendments were received on 2/12/2026. Claims 1-3, 6-18, 20, and 21 are pending, where claims 1-3, 6-18, 20, and 21 were previously presented and claims 4, 5, and 19 have been cancelled.
Claim Objections
The applicant amended the claims to address the claim objections. In view of the amendments, those objections have been withdrawn.
Claim 14 is objected to because of the variable ‘n’. The summation makes use of this term, but it is unclear what the term is meant to represent (is this ‘n’ related to the size of the n-dimensional unit vector, or is it meant to represent a different value?).
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 10 and 15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 10 recites:
[equation reproduced in the claims as image media_image1.png]
Applicant added limitations to claim 10 to provide clarity to various variables of the equation, including expanding what the weighted summation represents. However, the expansion of the weighted summation utilizes various variables that are not described. In the equation for the weighted summation, the summation symbol indicates that it takes a triple of h, r, and t, with an explanation given for h and t; however, there is no description of r. Please indicate what ‘r’ represents. Additionally, the equation ends with witi, where only t is discussed and no explanation of wi is provided. The next line has an ‘ω’ symbol; however, no explanation is given of where this symbol is utilized or what it represents. The equation discusses ‘p’ but does not discuss ‘pT’; the Examiner notes that claim 15 defines CT as the transpose of C, which appears to be a similar situation for claim 10; however, clarification on what the variable represents is required. Additionally, the equation includes exp(..), which appears to be a function call, but it is unclear what function applicant intends it to be; although the Examiner can guess at the function, clarity is requested. Lastly, the added summation in the last equation includes exp(pTRh), where it is unclear whether the summations of the weighted summation and the ‘ωi’ equation are over the same range (i greater than 0 and less than or equal to H) or different ranges. Additionally, it is unclear what ‘h’ represents; although the claim mentions a head entity, the majority of the equation uses hi to indicate which head entity is being evaluated; with a plurality of head entities, it is unclear which head entity the recitation of a singular ‘h’ is meant to refer to.
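For reference, the variables discussed above appear consistent with an attention-style weighted summation of the following general form (offered solely to frame the ambiguities noted; whether wi and ωi denote the same weight, and the definitions of r, p, R, and the summation ranges, are precisely the items requiring clarification):

\[ \text{weighted summation} = \sum_{(h_i,\, r,\, t_i)} w_i\, t_i, \qquad \omega_i = \frac{\exp\left(p^{T} R\, h_i\right)}{\sum_{0 < j \le H} \exp\left(p^{T} R\, h_j\right)} \]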
Claim 15 recites:
[equation reproduced in the claims as image media_image2.png]
where the claim does not define all of the variables of the equation, including ‘n’. Additionally, the amendments added a few variables that do not appear in the equation, such as C and CT and the sigmoid activation function, and it is unclear whether the equation is missing those variables. As such, without defining what the variables are, and with the claim reciting variables that seemingly do not appear in the equation, the respective claim limitation fails to particularly point out and distinctly claim the respective claim features.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-3, 6, 7, 12-16, and 18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
With regard to claim 1:
Step 2A, Prong One:
The claim recites the following limitations which are drawn towards an abstract idea:
constructing an initial goods knowledge graph based on a first type of triples and a second type of triples, wherein a format of the first type of triples is head entity-relation-tail entity, and a format of the second type of triples is entity-attribute-attribute value (recites mental process steps of mentally associating information together, e.g. a clerk at a movie rental store can associate repeat customers with particular items/movies and can also associate attributes about the movie, such as actor/actresses and genre, and be able to mentally link or associate the repeat customer to particular item attributes);
wherein the training the initial goods knowledge graph based on the graph embedding model to obtain embedding vectors of entities in the trained goods knowledge graph comprises: performing initialization encoding on the head entities, the relations and the tail entities to obtain the embedding vectors of the first type of triples; performing embedding representation on the second type of triples to obtain embedding vectors of the second type of triples (recites mental process step of performing calculations or mappings to convert data from one form to another, such as encryption or encoding);
wherein the performing embedding representation on the second type of triples to obtain embedding vectors of the second type of triples comprises: randomly initializing an n-dimensional unit vector with a modulus of 1, wherein n is a positive integer (recites mental process steps of randomly coming up with values for a vector, as noted above, the random process can be deterministic in that the user can randomly choose between some known sets of values for the unit vector);
multiplying a first attribute value of a digital type in attribute values of the second type of triples by the unit vector to obtain an embedding vector of the first attribute value (recites mental process steps of evaluation between vectors using mathematical functions such as vector multiplication);
performing initialization encoding on entities and attributes in the second type of triples to obtain embedding vectors of the entities and the attributes (recites mental process step of performing calculations or mappings to convert data from one form to another, such as encryption or encoding);
As seen from above, the identified limitations recite concepts associated with an abstract idea, and thus the respective claim recites a judicial exception (see MPEP 2106.04(a)) and requires further analysis as discussed below.
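For illustration only, the unit-vector limitations identified above can be sketched as follows (a minimal sketch assuming a NumPy environment; the dimension n = 8 and the attribute value shown are hypothetical, not drawn from the claims):

import numpy as np

def random_unit_vector(n: int) -> np.ndarray:
    # Randomly initialize an n-dimensional vector, then normalize it so
    # that its modulus (Euclidean norm) equals 1.
    v = np.random.randn(n)
    return v / np.linalg.norm(v)

unit = random_unit_vector(8)   # n = 8 is a hypothetical choice
attr_value = 3.5               # hypothetical digital-type attribute value
embedding = attr_value * unit  # embedding vector of the first attribute value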
Step 2A, Prong Two:
The following limitations have been identified as being additional elements as discussed below.
A method for training a goods knowledge graph, comprising: training the initial goods knowledge graph based on a graph embedding model to obtain embedding vectors of entities in the trained goods knowledge graph (recites apply-it limitations illustrating the usage of the computer as a tool to implement the abstract idea such as generic high-level training a machine learning model steps so that the machine learning model can convert information from one form to another, see MPEP 2106.05(f));
and inputting the embedding vectors of the first type of triples and the embedding vectors of the second type of triples into the graph embedding model alternately, and training the initial goods knowledge graph to obtain the embedding vectors of the entities in the trained goods knowledge graph (recites apply-it limitations illustrating the usage of the computer as a tool to implement the abstract idea such as generic high-level training a machine learning model steps so that the machine learning model can convert information from one form to another, see MPEP 2106.05(f));
and obtaining embedding vectors of the second type of triples based on the embedding vector of the first attribute value and the embedding vectors of the entities and the attributes (recites insignificant extrasolution activity of receiving information, see MPEP 2106.05(g)).
As seen from the above discussion, the identified limitations did not integrate the judicial exception into a practical application (see MPEP 2106.04(d)). This judicial exception is not integrated into a practical application because the additional element merely recites training a machine learning model at a high-level of generality that relates to apply-it type limitations of using a computer as a tool to perform the judicial exception as well as receive output/information from the computer program/tool which converts the data from one form into another form.
Step 2B:
Below is the analysis of the claims:
A method for training a goods knowledge graph, comprising: … training the initial goods knowledge graph based on a graph embedding model to obtain embedding vectors of entities in the trained goods knowledge graph (recites apply-it limitations illustrating the usage of the computer as a tool to implement the abstract idea such as generic high-level training a machine learning model steps so that the machine learning model can convert information from one form to another, see MPEP 2106.05(f));
and inputting the embedding vectors of the first type of triples and the embedding vectors of the second type of triples into the graph embedding model alternately, and training the initial goods knowledge graph to obtain the embedding vectors of the entities in the trained goods knowledge graph (recites apply-it limitations illustrating the usage of the computer as a tool to implement the abstract idea such as generic high-level training a machine learning model steps so that the machine learning model can convert information from one form to another, see MPEP 2106.05(f));
and obtaining embedding vectors of the second type of triples based on the embedding vector of the first attribute value and the embedding vectors of the entities and the attributes (recites well-understood, routine, and conventional activity of receiving information, see MPEP 2106.05(d)).
As seen from above, the respective claim elements taken individually do not amount to significantly more than the judicial exception. When taken as a whole (in combination), the claim also does not amount to significantly more than the abstract idea because the additional elements merely recite training a machine learning model at a high level of generality that relates to apply-it type limitations of using a computer as a tool to perform the judicial exception as well as to receive output/information from the computer program/tool which converts the data from one form into another form.
With regard to claim 2, this claim recites wherein the training the initial goods knowledge graph based on the graph embedding model to obtain embedding vectors of entities in the trained goods knowledge graph comprises: performing initialization encoding on head entities, relations and tail entities to obtain embedding vectors of the first type of triples (recites mental process step of performing calculations or mappings to convert data from one form to another, such as encryption or encoding);
and inputting the embedding vectors of the first type of triples into the graph embedding model, and training the initial goods knowledge graph to obtain the embedding vectors of the entities in the trained goods knowledge graph (recites apply-it limitations illustrating the usage of the computer as a tool to implement the abstract idea such as generic high-level training a machine learning model steps so that the machine learning model can convert information from one form to another, see MPEP 2106.05(f)).
With regard to claim 3, this claim recites wherein the training the initial goods knowledge graph based on the graph embedding model to obtain embedding vectors of entities in the trained goods knowledge graph comprises: performing embedding representation on the second type of triples to obtain embedding vectors of the second type of triples (recites mental process step of performing calculations or mappings to convert data from one form to another, such as encryption or encoding);
and inputting the embedding vectors of the second type of triples into the graph embedding model, and training the initial goods knowledge graph to obtain the embedding vectors of the entities in the trained goods knowledge graph (recites apply-it limitations illustrating the usage of the computer as a tool to implement the abstract idea such as generic high-level training a machine learning model steps so that the machine learning model can convert information from one form to another, see MPEP 2106.05(f)).
With regard to claim 6, this claim recites wherein the performing embedding representation on the second type of triples to obtain embedding vectors of the second type of triples comprises: performing initialization encoding on a second attribute value of a text type in attribute values of the second type of triples based on a self-attention model to obtain an initialization-encoded result (recites a machine learning model such as an encoder or autoencoder to be a tool to perform the judicial exception, see MPEP 2106.05(f));
performing dimension reduction on the initialization-encoded result to obtain a dimension-reduced result, and taking the dimension-reduced result as an embedding vector of the second attribute value (recites mental process steps of evaluating and deciding to ignore particular traits/dimensions from future evaluations/judgements);
performing initialization encoding on entities and attributes in the second type of triples to obtain embedding vectors of the entities and the attributes (recites mental process step of performing calculations or mappings to convert data from one form to another, such as encryption or encoding);
and obtaining embedding vectors of the second type of triples based on the embedding vector of the second attribute value and the embedding vectors of the entities and the attributes (recites insignificant extrasolution activity of receiving information which amounts to well-understood, routine, and conventional activity of receiving information, see MPEP 2106.05(d)).
With regard to claim 7, this claim recites wherein the performing dimension reduction on the initialization-encoded result to obtain the dimension-reduced result comprises: performing dimension reduction on the initialization-encoded result by means of an intermediate layer network structure of an autoencoder to obtain the dimension-reduced result (recites a machine learning model such as an encoder or autoencoder to be a tool to perform the judicial exception, see MPEP 2106.05(f)).
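For reference, the dimension reduction via an autoencoder's intermediate layer recited in claims 6 and 7 can be sketched as follows (a minimal sketch assuming PyTorch; the layer sizes are hypothetical, and the bottleneck output stands in for the dimension-reduced result):

import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, in_dim: int = 128, bottleneck: int = 16):
        super().__init__()
        self.encoder = nn.Linear(in_dim, bottleneck)  # intermediate layer network structure
        self.decoder = nn.Linear(bottleneck, in_dim)

    def forward(self, x):
        z = torch.relu(self.encoder(x))  # dimension-reduced result
        return self.decoder(z), z

model = AutoEncoder()
encoded = torch.randn(1, 128)  # stand-in for the initialization-encoded result
_, reduced = model(encoded)    # 'reduced' is taken as the embedding vector of the attribute value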
With regard to claim 8, the claim recites the usage of the trained goods knowledge graph of claim 1 where, upon review of the specification and applicant’s arguments, the respective 35 USC 101 rejections have been withdrawn in view of the claims integrating the judicial exception into a practical application.
Claims 9-11 depend upon claim 8 and have their respective rejections withdrawn for similar reasons as discussed above.
With regard to claim 12, this claim recites a method for training a model, comprising: determining a goods embedding vector of goods according to a trained goods knowledge graph, wherein the trained goods knowledge graph is obtained by training an initial goods knowledge graph through the method according to claim 1 (recites merely using a computer as a tool to convert a data format from one form to another, see MPEP 2106.05(f));
and inputting the goods embedding vector of the goods into a graph convolution network model to be trained, and training the graph convolution network model to be trained to obtain a graph convolution network model, wherein the graph convolution network model to be trained is constructed based on the trained goods knowledge graph (recites merely using the computer as a tool to train a model which amounts to merely apply it limitations, see MPEP 2106.05(f)).
With regard to claim 13, wherein the inputting the goods embedding vector of the goods into the graph convolution network model to be trained, and the training the graph convolution network model to be trained to obtain the graph convolution network model comprises: determining a similarity between the goods embedding vector of the goods and a head entity of an adjacent triple in a relation space, wherein the adjacent triple comprises at least one triple (recites mental process steps of determining how similar or related another user is to the various products/goods, e.g. for a video store, there might be multiple repeat customers where the video clerk can remember interests of one customer is similar to the current customer as well as what their rating or thoughts were of various movies/items that might be of interest to the current customer);
weighting and summing all tail entities of the adjacent triple to obtain a first-order embedding vector of a user; taking the above operation as one propagation, and obtaining a final embedding vector of the user after H propagations of the graph convolution network model to be trained, wherein H is a positive integer (recites mental process steps involving mathematical calculations including multiplication and addition);
multiplying the final embedding vector of the user by the goods embedding vector of the goods to obtain a prediction probability of the user for the goods (recites mental process steps of matrix multiplication or dot product calculation);
computing a loss value according to the prediction probability and an actual probability of the user for the goods (recites mental process steps of comparing two values together, potentially with mathematical operations);
and updating parameters of the graph convolution network model to be trained by using the loss value to obtain the graph convolution network model (recites training/backpropagation of a machine learning model which amounts to using the computer as a tool to implement the abstract idea, see MPEP 2106.05(f)).
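For context, the propagation-and-prediction sequence recited in claim 13 can be sketched in the following simplified form (a NumPy illustration under assumed shapes; the dot-product similarity, softmax weighting, sigmoid, and cross-entropy loss shown are common modeling choices, not necessarily those of the claims, and the loop omits the expanding neighborhoods of a full graph convolution):

import numpy as np

H = 2                            # number of propagations (positive integer)
goods_vec = np.random.randn(16)  # goods embedding vector
heads = np.random.randn(5, 16)   # head entities of adjacent triples
tails = np.random.randn(5, 16)   # tail entities of adjacent triples

for _ in range(H):
    sims = heads @ goods_vec                     # similarity in a (simplified) relation space
    weights = np.exp(sims) / np.exp(sims).sum()  # softmax weighting (assumed)
    user_vec = weights @ tails                   # weighted sum of all tail entities

pred = 1 / (1 + np.exp(-(user_vec @ goods_vec)))  # prediction probability (sigmoid assumed)
actual = 1.0                                      # actual probability of the user for the goods
loss = -(actual * np.log(pred) + (1 - actual) * np.log(1 - pred))  # cross-entropy (assumed)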
With regard to claim 14, this claim recites an equation for computing loss which recites mental process steps of performing calculations/evaluations using a mathematical formula.
With regard to claim 15, this claim recites an equation for computing loss which recites mental process steps of performing calculations/evaluations using a mathematical formula.
With regard to claim 16, this claim is substantially similar to claim 1 and is rejected for similar reasons as claim 1 as discussed above. The main difference between claims 1 and 16 is that claim 16 recites a first memory and a first processor (which recites generic computer hardware at a high level of generality to perform generic computer functions, see MPEP 2106.05(f)).
With regard to claim 17, this claim is substantially similar to claim 8 and the respective rejections have been withdrawn for similar reasons as discussed above with regard to claim 8.
With regard to claim 18, this claim is substantially similar to claim 12 and is rejected for similar reasons as claim 12 as discussed above. The main difference between claims 12 and 18 is that claim 18 recites a third memory and a third processor (which recites generic computer hardware at a high level of generality to perform generic computer functions, see MPEP 2106.05(f)).
Claims 20 and 21 depend upon claim 17 and have their respective rejections withdrawn for similar reasons as discussed above.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 12, 13, 16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Sun et al [US 2023/0153579 A1] in view of Bhatia et al [US 2020/0104395 A1], Liu et al [US 2021/0365818 A1], McCallie, Jr et al [US 11,526,508], and Bharathy et al [US 2022/0179910 A1].
With regard to claim 1, Sun teaches a method for training a goods knowledge graph, comprising: constructing an initial goods knowledge graph based on a first type of triples and a second type of triples … obtain embedding vectors of entities in the trained goods knowledge graph (see paragraph [0039]; the system can obtain embedding vectors of the entities of both users and items).
Sun teaches relationships between nodes of the graph but does not appear to explicitly teach:
wherein a format of the first type of triples is head entity-relation-tail entity, and a format of the second type of triples is entity-attribute-attribute value;
and training the initial goods knowledge graph based on a graph embedding model to obtain embedding vectors of entities in the trained goods knowledge graph;
wherein the training the initial goods knowledge graph based on the graph embedding model to obtain embedding vectors of entities in the trained goods knowledge graph comprises:
performing initialization encoding on the head entities, the relations and the tail entities to obtain the embedding vectors of the first type of triples;
performing embedding representation on the second type of triples to obtain embedding vectors of the second type of triples;
and inputting the embedding vectors of the first type of triples and the embedding vectors of the second type of triples into the graph embedding model alternately, and training the initial goods knowledge graph to obtain the embedding vectors of the entities in the trained goods knowledge graph;
wherein the performing embedding representation on the second type of triples to obtain embedding vectors of the second type of triples comprises: randomly initializing an n-dimensional unit vector with a modulus of 1, wherein n is a positive integer;
multiplying a first attribute value of a digital type in attribute values of the second type of triples by the unit vector to obtain an embedding vector of the first attribute value;
performing initialization encoding on entities and attributes in the second type of triples to obtain embedding vectors of the entities and the attributes;
and obtaining embedding vectors of the second type of triples based on the embedding vector of the first attribute value and the embedding vectors of the entities and the attributes.
Bhatia teaches training the initial goods knowledge graph based on a graph embedding model to obtain embedding vectors of entities in the trained goods knowledge graph (see paragraphs [0162], [0177]-[0179]; the system can train the model to generate/obtain embedding vectors).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the machine learning model usage of Sun by including means to train the respective model based on loss functions as taught by Bhatia in order to determine how accurate the actual model being used is as well as having means to adjust/tune the parameters to the model until the loss is minimized thereby helping ensure that the resulting encodings/embeddings are accurate representations of the input which, by extension, helps to garner user trust in the system’s processes since the user trusts that the system’s data is an accurate representation.
Sun in view of Bhatia do not appear to explicitly teach:
wherein a format of the first type of triples is head entity-relation-tail entity, and a format of the second type of triples is entity-attribute-attribute value;
wherein the training the initial goods knowledge graph based on the graph embedding model to obtain embedding vectors of entities in the trained goods knowledge graph comprises:
performing initialization encoding on the head entities, the relations and the tail entities to obtain the embedding vectors of the first type of triples;
performing embedding representation on the second type of triples to obtain embedding vectors of the second type of triples;
and inputting the embedding vectors of the first type of triples and the embedding vectors of the second type of triples into the graph embedding model alternately, and training the initial goods knowledge graph to obtain the embedding vectors of the entities in the trained goods knowledge graph;
wherein the performing embedding representation on the second type of triples to obtain embedding vectors of the second type of triples comprises: randomly initializing an n-dimensional unit vector with a modulus of 1, wherein n is a positive integer;
multiplying a first attribute value of a digital type in attribute values of the second type of triples by the unit vector to obtain an embedding vector of the first attribute value;
performing initialization encoding on entities and attributes in the second type of triples to obtain embedding vectors of the entities and the attributes;
and obtaining embedding vectors of the second type of triples based on the embedding vector of the first attribute value and the embedding vectors of the entities and the attributes.
Liu teaches wherein a format of the first type of triples is head entity-relation-tail entity, and a format of the second type of triples is entity-attribute-attribute value (see Figures 5, 6A, 6B, and 7; paragraph [0074]; the knowledge graph is created via user and item links, i.e. head and tail, based on an “observed interaction”/relation, and additionally the items are linked to an entity/attribute node based on having a value for that attribute).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the user-item and item-attribute information from having item nodes with feature vectors of Sun in view of Bhatia by utilizing a knowledge graph that has links to the items' attributes as taught by Liu in order to be able to represent the interactions between users and goods/items and the links/facts describing attributes of the items, so that graph analyses of similarities between users and items, and of items with attributes, can be compared in the same graph space. This allows particular analyses to discern correlations/affinities of users without having to perform additional calculations on the entire feature vector of the item (which can bias the similarity/interest to the item) and without having to analyze and extract the desired attributes from the vector. The two triple formats are illustrated below.
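By way of illustration only, the two triple formats at issue can be represented as simple records (hypothetical example data, not drawn from the references):

# First type of triples: head entity - relation - tail entity
first_type = [("user_42", "purchased", "item_7")]

# Second type of triples: entity - attribute - attribute value
second_type = [("item_7", "price", 19.99),
               ("item_7", "genre", "comedy")]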
Sun in view of Bhatia and Liu teach wherein the training the initial goods knowledge graph based on the graph embedding model to obtain embedding vectors of entities in the trained goods knowledge graph comprises: performing initialization encoding on the head entities, the relations and the tail entities to obtain the embedding vectors of the first type of triples; performing embedding representation on the second type of triples to obtain embedding vectors of the second type of triples (see Liu, paragraph [0074]; see Sun, paragraphs [0044]-[0046] and [0088]; Bhatia, paragraphs [0153], [0173], [0176], and [0144]; the system can generate vectors of the respective user-items and item-attribute knowledge);
and inputting the embedding vectors of the first type of triples and the embedding vectors of the second type of triples into the graph embedding model alternately, and training the initial goods knowledge graph to obtain the embedding vectors of the entities in the trained goods knowledge graph (see Sun, paragraph [0080]; the system can utilize the user embedding and item embedding as a pair with means to determine loss which is utilized in the training to obtain the final embeddings vectors).
Sun in view of Bhatia and Liu do not appear to explicitly teach:
wherein the performing embedding representation on the second type of triples to obtain embedding vectors of the second type of triples comprises: randomly initializing an n-dimensional unit vector with a modulus of 1, wherein n is a positive integer;
multiplying a first attribute value of a digital type in attribute values of the second type of triples by the unit vector to obtain an embedding vector of the first attribute value;
performing initialization encoding on entities and attributes in the second type of triples to obtain embedding vectors of the entities and the attributes;
and obtaining embedding vectors of the second type of triples based on the embedding vector of the first attribute value and the embedding vectors of the entities and the attributes.
Bharathy teaches performing initialization encoding on entities and attributes in the second type of triples to obtain embedding vectors of the entities and the attributes; and obtaining embedding vectors of the second type of triples based on the embedding vector of the first attribute value and the embedding vectors of the entities and the attributes (see paragraph [0067]; the system can obtain embedding representations of every element of a triple).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the user-item and item-attribute information from having item nodes with feature vectors of Sun in view of Bhatia and Liu by being able to vectorize or transform triple values into embeddings as taught by Bharathy in order to be able to represent not only the entire triple in a relation space but also be able to encode additional information of the various values of the triples into a relation or vector space thereby helping to capture more information and be able to discern similarities between triples at a higher-level of granularity.
Sun in view of Bhatia, Liu, and Bharathy do not appear to teach:
wherein the performing embedding representation on the second type of triples to obtain embedding vectors of the second type of triples comprises: randomly initializing an n-dimensional unit vector with a modulus of 1, wherein n is a positive integer;
multiplying a first attribute value of a digital type in attribute values of the second type of triples by the unit vector to obtain an embedding vector of the first attribute value;
McCallie teaches randomly initializing an n-dimensional unit vector with a modulus of 1, wherein n is a positive integer; multiplying a first attribute value of a digital type in attribute values of the second type of triples by the unit vector to obtain an embedding vector of the first attribute value (see col 10, lines 57-67; the system can multiply the attribute value by unit vectors to create an embedding of the vector).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the user-item and item-attribute information from having item nodes with feature vectors of Sun in view of Bhatia, Liu, and Bharathy by being able to embed the vector via a unit vector as taught by McCallie in order to be able to have various categories of subspaces that can be used for various evaluations instead of one single space, while also normalizing values to a reduced size to make various calculations smaller, thereby saving computational and memory resources by not having to perform extremely large multiplications and other operations on large values.
Sun in view of Bhatia, Liu, Bharathy, and McCallie teach wherein the performing embedding representation on the second type of triples to obtain embedding vectors of the second type of triples comprises: randomly initializing an n-dimensional unit vector with a modulus of 1, wherein n is a positive integer; multiplying a first attribute value of a digital type in attribute values of the second type of triples by the unit vector to obtain an embedding vector of the first attribute value (see McCallie, col 10, lines 57-67; see Bharathy, paragraph [0067]; see Bhatia, paragraph [0141] and [0169]; see Liu, paragraph [0074]; the system can embed the various elements of the item’s triple and utilize a unit vector to normalize the values).
With regard to claim 2, Sun in view of Bhatia, Liu, Bharathy, and McCallie teach wherein the training the initial goods knowledge graph based on the graph embedding model to obtain embedding vectors of entities in the trained goods knowledge graph comprises: performing initialization encoding on head entities, relations and tail entities to obtain embedding vectors of the first type of triples; and inputting the embedding vectors of the first type of triples into the graph embedding model, and training the initial goods knowledge graph to obtain the embedding vectors of the entities in the trained goods knowledge graph (see Liu, paragraph [0074]; see Sun, paragraphs [0044]-[0046] and [0088]; Bhatia, paragraphs [0153], [0173], [0176], and [0144]; the system can initialize or utilize an embedding/vector of the user-item interactions and be able to input that into an encoder to obtain an embedding where the system can utilize a loss function as means to help train the encoder).
With regard to claim 3, Sun in view of Bhatia, Liu, Bharathy, and McCallie teach wherein the training the initial goods knowledge graph based on the graph embedding model to obtain embedding vectors of entities in the trained goods knowledge graph comprises: performing embedding representation on the second type of triples to obtain embedding vectors of the second type of triples; and inputting the embedding vectors of the second type of triples into the graph embedding model, and training the initial goods knowledge graph to obtain the embedding vectors of the entities in the trained goods knowledge graph (see Liu, paragraph [0074]; see Sun, paragraphs [0044]-[0046] and [0088]; Bhatia, paragraphs [0153], [0173], [0176], and [0144]; the system can initialize or utilize an embedding/vector of the item-attributes-values and be able to input that into an encoder to obtain an embedding where the system can utilize a loss function as means to help train the encoder).
With regard to claim 12, Sun in view of Bhatia, Liu, Bharathy, and McCallie teach a method for training a model, comprising: determining a goods embedding vector of goods according to a trained goods knowledge graph, wherein the trained goods knowledge graph is obtained by training an initial goods knowledge graph through the method according to claim 1; and inputting the goods embedding vector of the goods into a graph convolution network model to be trained, and training the graph convolution network model to be trained to obtain a graph convolution network model, wherein the graph convolution network model to be trained is constructed based on the trained goods knowledge graph (see Liu, paragraph [0074]; see Sun, paragraphs [0044]-[0046] and [0088]; Bhatia, paragraphs [0153], [0173], [0176], and [0144]; the system can initialize or utilize an embedding/vector of the item-attributes-values and be able to input that into an encoder to obtain an embedding where the system can utilize a loss function as means to help train the respective model).
With regard to claim 13, Sun in view of Bhatia, Liu, Bharathy, and McCallie teach wherein the inputting the goods embedding vector of the goods into the graph convolution network model to be trained, and the training the graph convolution network model to be trained to obtain the graph convolution network model comprises: determining a similarity between the goods embedding vector of the goods and a head entity of an adjacent triple in a relation space, wherein the adjacent triple comprises at least one triple; weighting and summing all tail entities of the adjacent triple to obtain a first-order embedding vector of a user; taking the above operation as one propagation, and obtaining a final embedding vector of the user after H propagations of the graph convolution network model to be trained, wherein H is a positive integer (see Sun, paragraphs [0047]-[0048] and [0066]-[0068]; the system can perform multiple propagations to form a final embedding that relates to the similarity of adjacent triples/users with the various items/goods);
multiplying the final embedding vector of the user by the goods embedding vector of the goods to obtain a prediction probability of the user for the goods (see Sun, paragraph [0084]; the system can multiply the user embedding vector and goods embedding vector to determine the probability of the items/goods);
computing a loss value according to the prediction probability and an actual probability of the user for the goods; and updating parameters of the graph convolution network model to be trained by using the loss value to obtain the graph convolution network model (see Sun, paragraph [0080]; see Bhatia, paragraph [0143]; the system can compute a loss function for the prediction and be able to update parameters accordingly as part of the training process).
With regard to claim 16, this claim is substantially similar to claim 1 and is rejected for similar reasons as discussed above.
With regard to claim 18, this claim is substantially similar to claim 12 and is rejected for similar reasons as discussed above.
Claims 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Sun et al [US 2023/0153579 A1] in view of Bhatia et al [US 2020/0104395 A1], Liu et al [US 2021/0365818 A1], McCallie, Jr et al [US 11,526,508], and Bharathy et al [US 2022/0179910 A1] in further view of Torres [US 2021/0149993 A1].
With regard to claim 6, Sun in view of Bhatia, Liu, Bharathy, and McCallie teach all the claim limitations of claims 1 and 3 as discussed above.
Sun in view of Bhatia, Liu, Bharathy, and McCallie teach performing initialization encoding on entities and attributes in the second type of triples to obtain embedding vectors of the entities and the attributes; … and obtaining embedding vectors of the second type of triples based on the embedding vector of the second attribute value and the embedding vectors of the entities and the attributes (see Bharathy, paragraph [0067]; the system can obtain embedding representations of every element of a triple).
Sun in view of Bhatia, Liu, Bharathy, and McCallie teach attention mechanisms but do not explicitly recite self-attention, in particular, the references do not appear to explicitly teach:
wherein the performing embedding representation on the second type of triples to obtain embedding vectors of the second type of triples comprises: performing initialization encoding on a second attribute value of a text type in attribute values of the second type of triples based on a self-attention model to obtain an initialization-encoded result;
performing dimension reduction on the initialization-encoded result to obtain a dimension-reduced result, and taking the dimension-reduced result as an embedding vector of the second attribute value.
Torres teaches wherein the performing embedding representation on the second type of triples to obtain embedding vectors of the second type of triples comprises: performing initialization encoding on a second attribute value of a text type in attribute values of the second type of triples based on a self-attention model to obtain an initialization-encoded result (see paragraphs [0033] and [0038]; the system can utilize self-attention mechanisms to perform the encoding of words).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the encoding scheme of Sun in view of Bhatia, Liu, Bharathy, and McCallie by being able to utilize word encoding via self-attention as taught by Torres in order to reduce the computational cost/resources in embedding words, by the system being able to accurately embed attribute values that comprise more than a single word so that the system can consider all words in the phrase for context when determining the embedding of the next word of the attribute/input.
Sun in view of Bhatia, Liu, Bharathy, and McCallie in further view of Torres teach wherein the performing embedding representation on the second type of triples to obtain embedding vectors of the second type of triples comprises: performing initialization encoding on a second attribute value of a text type in attribute values of the second type of triples based on a self-attention model to obtain an initialization-encoded result (see Torres, paragraphs [0033] and [0038]; see Bharathy, paragraph [0067]; the system can obtain embedding representations of every element of a triple and, for the words, the system can utilize self-attention mechanisms to perform the encoding);
performing dimension reduction on the initialization-encoded result to obtain a dimension-reduced result, and taking the dimension-reduced result as an embedding vector of the second attribute value (see Bhatia, paragraphs [0066], [0091], [0046]; see Liu, paragraphs [0072] and [0074]; see Sun, paragraph [0005]; the system can utilize machine learning functionality to perform dimension reduction on the input encoding/vector to form embedding vectors).
With regard to claim 7, Sun in view of Bhatia, Liu, Bharathy, and McCallie in further view of Torres teach wherein the performing dimension reduction on the initialization-encoded result to obtain the dimension-reduced result comprises: performing dimension reduction on the initialization-encoded result by means of an intermediate layer network structure of an autoencoder to obtain the dimension-reduced result (see Liu, paragraphs [0072] and [0074]; an autoencoder can be utilized as part of the system with the encoding or encoder resulting in the reduced dimensions).
Claims 8, 9, 11, 17, 20, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Sun et al [US 2023/0153579 A1] in view of Bhatia et al [US 2020/0104395 A1], Liu et al [US 2021/0365818 A1], McCallie, Jr et al [US 11,526,508], and Bharathy et al [US 2022/0179910 A1] in further view of Balasubramanian et al [US 2023/0135683 A1].
With regard to claim 8, Sun in view of Bhatia, Liu, Bharathy, and McCallie teach a method for recommending goods, comprising:
obtaining a preference probability of the user for goods according to a graph convolution network model trained based on historical goods search information of the user (see Sun, paragraph [0084]; Liu, paragraphs [0077] and [0086]; the system can determine a list of recommendations for the user),
wherein the graph convolution network model is constructed based on a trained goods knowledge graph, and the trained goods knowledge graph is obtained by training an initial goods knowledge graph through the method according to claim 1 (see Liu, paragraph [0068] and claim 1’s mapping above);
and outputting a goods recommendation list according to the preference probability (see Sun, paragraph [0084]; Liu, paragraphs [0077] and [0086]; the system can determine a list of recommendations for the user).
Sun in view of Bhatia, Liu, Bharathy, and McCallie do not appear to explicitly teach obtaining a search request from a user.
Balasubramanian teaches obtaining a search request from a user (see Figure 5 and paragraph [0043]; the system can receive an input of the user as a means to start the recommendation process).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify recommendation system of Sun in view of Bhatia, Liu, Bharathy, and McCallie by being able to receive search requests from users for items/products as taught by Balasubramanian in order to provide more contextualized recommendation results for a current session with the user by focusing the recommendation process based on what the user has explicitly expressed interest in versus merely historical information of the user.
With regard to claim 9, Sun in view of Bhatia, Liu, Bharathy, and McCallie in further view of Balasubramanian teach wherein the obtaining the preference probability of the user for goods according to the graph convolution network model trained based on historical goods search information of the user comprises: determining a similarity between the goods and a head entity of an adjacent triple in a relation space, wherein the adjacent triple comprises at least one triple; weighting and summing all tail entities of the adjacent triple with the similarity as a weight to obtain a first-order embedding vector of the user; taking the above operation as one propagation, and obtaining a final embedding vector of the user after H propagations of the graph convolution network model trained based on the historical goods search information of the user, wherein H is a positive integer (see Sun, paragraphs [0047]-[0048] and [0066]-[0068]; the system can perform multiple propagations to form a final embedding that relates to the similarity of adjacent triples/users with the various items/goods);
and multiplying the final embedding vector of the user by a goods embedding vector of the goods to obtain the preference probability of the user for the goods, wherein the goods embedding vector of the goods is obtained based on the trained goods knowledge graph (see Sun, paragraph [0084]; the system can multiply the user embedding vector and goods embedding vector to determine the probability of the items/goods).
With regard to claim 11, Sun in view of Bhatia, Liu, Bharathy, and McCallie in further view of Balasubramanian teach wherein before the obtaining the preference probability of the user for goods according to the graph convolution network model trained based on the historical goods search information of the user, the method further comprises: determining a goods embedding vector of the goods according to the trained goods knowledge graph; and inputting the goods embedding vector of the goods into a graph convolution network model to be trained to obtain the graph convolution network model (see Liu, paragraph [0074]; see Sun, paragraphs [0044]-[0046] and [0088]; Bhatia, paragraphs [0153], [0173], [0176], and [0144]; the system can create goods embedding vectors and utilize them for training the model so that it can create accurate item/goods embeddings).
With regard to claim 17, this claim is substantially similar to claim 8 and is rejected for similar reasons as discussed above.
With regard to claims 20 and 21, these claims are substantially similar to claims 2 and 3 and are rejected for similar reasons as discussed above.
Response to Arguments
Applicant’s arguments (see the second paragraph on page 11) with respect to the claim objections have been fully considered and are persuasive. The applicant amended the claims, including claim 11, and the respective objections have been withdrawn. A new objection to claim 14, however, is set forth above.
Applicant’s arguments (see second to last paragraph on page 11 through the third paragraph on page 13) with respect to the 35 USC 112 rejections have been fully considered and are persuasive; the majority of the 35 USC 112 rejections have been withdrawn, except as noted above. With regard to the ‘randomly’ limitation, the applicant’s arguments were persuasive. As noted above, however, a few variables still need to be addressed.
Applicant's arguments (see fourth paragraph on page 13 through the end of page 15) have been fully considered but they are not persuasive. The applicant argues that the claims recite an improvement to the accuracy of goods recommendations. The Examiner respectfully disagrees.
With regard to applicant’s arguments regarding an improvement to the functioning of a computer or to any other technology or technical field, the Examiner notes that, per MPEP 2106.05(a), “[a]n important consideration in determining whether a claim improves technology is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome, as opposed to merely claiming the idea of a solution or outcome. McRO, 837 F.3d at 1314-15, 120 USPQ2d at 1102-03; DDR Holdings, 773 F.3d at 1259, 113 USPQ2d at 1107.” (emphasis added). Additionally, it is important to note that “the judicial exception alone cannot provide the improvement” and that the claim itself must reflect the asserted improvement. With respect to claim 1, as discussed in the 35 USC 101 rejections above, the claim relates to an abstract idea of evaluating and analyzing different data sets, in particular converting data from one form to another (similar to encrypting/encoding messages). Although applicant argues an improvement to the accuracy of goods recommendations, the Examiner notes that claim 1 recites creating a graph and training the graph to obtain embedding vectors; there is no recitation of goods recommendations or their actual usage. As noted above, at least claims 8 and 17 do recite the usage of the trained model in a manner that relates to goods recommendations. Therefore, at least claims 8 and 17 (and their respective dependent claims) have their 35 USC 101 rejections withdrawn; however, with respect to the other independent and dependent claims, the rejections still stand.
Applicant's arguments (see the first paragraph on page 16 through …) have been fully considered but they are not persuasive. The applicant argues (a) that Sun does not have triples and is not a knowledge graph system but a graph neural network filtering recommendation model; (b) that Liu does not discuss how to construct an initial goods knowledge graph based on these links; (c) that Bhatia does not involve any graph structures and does not involve training an initial goods knowledge graph; (d) that Bharathy does not mention two types of triples and does not teach the embedding vectors being input alternately; and (e) that McCallie does not mention two types of triples or how to process these two types of triples. The Examiner respectfully disagrees.
With regard to argument (a), the Examiner notes that the Office Action did not rely solely on Sun to teach triples; therefore, in response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). With regard to the knowledge graph, applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. As illustrated in the Sun reference, the system shows knowledge or information about users and also items as well as interactions between users and items where that information/knowledge is used in a graph data structure. As such, applicant’s arguments are not persuasive.
With regard to argument (b), that Liu does not discuss how to construct an initial goods knowledge graph based on these links: applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Liu elsewhere illustrates teachings of creating a knowledge graph, such as in paragraphs [0067] and [0068]. Additionally, as noted above, Liu is modifying the teachings of Sun to illustrate that the various attributes of a node in a graph can be represented as triples with edges and nodes linked to that item or user node. Therefore, applicant’s arguments are not persuasive.
With regard to argument (c), that Bhatia does not involve any graph structures and does not involve training an initial goods knowledge graph: the Examiner notes that Bhatia was not relied upon solely to teach the claim limitation. As noted in the 35 USC 103 rejections, the training limitation is mapped to teachings from Liu, Sun, and Bhatia, with Bhatia illustrating how the interactions of Sun between entities/users and content items (which can be represented in the graph of Sun) can be converted into embedding vectors.
With regard to argument (d), that Bharathy does not mention two types of triples and does not teach the embedding vectors being input alternately: the Examiner notes that Bharathy was not relied upon to teach the limitation in question. Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Therefore, in view of the teachings of the combination of references, the applicant’s arguments are not persuasive.
With regard to argument (e), that McCallie does not mention two types of triples or how to process these two types of triples: the Examiner notes that McCallie was not relied upon to teach the limitation in question. Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Therefore, in view of the teachings of the combination of references, the applicant’s arguments are not persuasive.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARC S SOMERS whose telephone number is (571)270-3567. The examiner can normally be reached M-F 11-8 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ann Lo can be reached at 5712729767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARC S SOMERS/Primary Examiner, Art Unit 2159 3/24/2026