Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed 09/25/2025 has been entered. Claims 1-20 are pending. Applicant’s amendments to the claims overcome the rejections under 35 U.S.C. 112(b).
Response to Arguments
Applicant’s arguments with respect to 35 U.S.C. § 101 filed 09/25/2025 have been fully considered but they are not persuasive.
Applicant argues (pages 12-13 of the remarks/arguments) that the claimed invention is not directed to a mental process. The examiner respectfully disagrees. The claimed invention recites determining a prediction based on a knowledge graph and determining a consolidated prediction based on the first predictions. These steps, under their broadest reasonable interpretation, can be performed in the human mind. Having a machine learning model perform these steps does not remove them from the mental process grouping.
Applicant also argues that the claimed invention provides a technical improvement (pages 14-15). Applicant asserts that consolidating the predictions of individual entities into one joint prediction is an improvement. Under MPEP 2106.05(a), however, the improvement cannot come from the judicial exception (the abstract idea) alone. The additional elements of the claims relate only to acquiring data, applying models, and training the models, and therefore do not reflect a technical improvement.
Applicant finally argues that the prior art fails to teach the claimed invention and thus that the claims recite an inventive concept (page 15). The examiner respectfully disagrees in view of the prior art rejection below.
Thus, the rejection under 35 U.S.C. 101 is maintained.
Applicant’s arguments with respect to 35 U.S.C. § 103 filed 09/25/2025 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
35 U.S.C. § 101 Subject Matter Eligibility Analysis
Step 1: Claims 1-20 are within the four statutory categories (a process, machine, manufacture, or composition of matter). Claims 1-8 and 16-20 describe a process; claims 9-15 describe a machine.
With respect to claim 1:
Step 2A Prong 1: The claim recites an abstract idea enumerated in the 2019 PEG.
determining, …, a prediction for each source entity based on the knowledge graph,…, wherein the prediction for each source includes recommendation data identifying one or multiple target entities; (This is an abstract idea of a "Mental Process." The "determining" step, under its broadest reasonable interpretation, covers concepts that can be practically performed in the human mind. The determination could be made manually by an individual.)
determining, …, a consolidated prediction for the plurality of source entities based on the prediction for each source entity, …, wherein the consolidated prediction identifies a target entity that maximizes a joint probability of the plurality of source entities. (This is an abstract idea of a "Mental Process." The "determining" step, under its broadest reasonable interpretation, covers concepts that can be practically performed in the human mind. The determination could be made manually by an individual.)
Step 2A Prong 2: The judicial exception is not integrated into a practical application.
Additional elements:
receiving a knowledge graph including a plurality of source entities, a plurality of target entities and a plurality of attribute entities, wherein each source entity is linked to one or more of the plurality of attribute entities, and each target entity is linked to one or more of the plurality of attribute entities; (this limitation amounts to adding insignificant extra-solution activity to the judicial exception).
using a prediction learning model (This amounts to no more than mere instructions to “apply” the exception using a generic computer component.)
using a consolidation learning model (This amounts to no more than mere instructions to “apply” the exception using a generic computer component.)
the prediction model having been trained using prediction training data including historical data (this limitation amounts to adding insignificant extra-solution activity to the judicial exception).
the consolidation learning model having been trained using consolidation training data including the historical data and training recommendation data that is output from the prediction learning model during training, (this limitation amounts to adding insignificant extra-solution activity to the judicial exception).
Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The additional elements of “receiving a knowledge…”, “the prediction model…” and “the consolidation learning model…” add insignificant extra-solution activity to the judicial exception and cannot provide an inventive concept (MPEP 2106.05(g)).
The additional elements “using a prediction learning model” and “using a consolidation learning model” are recited at a generic level and represent generic computer components used to apply the abstract idea. Mere instructions to apply an exception cannot provide an inventive concept (MPEP 2106.05(f)).
When considered in combination, these additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, which do not provide an inventive concept.
Therefore, claim 1 is ineligible.
With respect to claim 2:
Step 2A Prong 1: claim 2, which incorporates the rejection of claim 1, does not recite an additional abstract idea.
Step 2A Prong 2: The judicial exception is not integrated into a practical application.
one or more of the plurality of source entities are linked to one or more other source entities and/or one or more target entities. (this limitation amounts to adding insignificant extra-solution activity to the judicial exception).
Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The additional element “one or more of the plurality…” adds insignificant extra-solution activity to the judicial exception and cannot provide an inventive concept.
Therefore, claim 2 is ineligible.
With respect to claim 3:
Step 2A Prong 1: claim 3, which incorporates the rejection of claim 2, recites an additional abstract idea:
the determining, using the prediction learning model, the prediction for each source entity includes learning vector representations of the knowledge graph. (This is an abstract idea of a "Mental Process." The "determining" step, under its broadest reasonable interpretation, covers concepts that can be practically performed in the human mind. The determination could be made manually by an individual.)
Step 2A Prong 2: claim 3 does not recite any additional elements; thus, the judicial exception is not integrated into a practical application.
Step 2B: claim 3 does not recite an additional element.
Therefore, claim 3 is ineligible.
With respect to claim 4:
Step 2A Prong 1: claim 4, which incorporates the rejection of claim 1, recites an additional abstract idea:
the consolidated prediction identifies multiple target entities in a ranked order. (This is an abstract idea of a "Mental Process." The "identifies" step, under its broadest reasonable interpretation, covers concepts that can be practically performed in the human mind. The identification could be made manually by an individual.)
Step 2A Prong 2: claim 4 does not recite any additional elements; thus, the judicial exception is not integrated into a practical application.
Step 2B: claim 4 does not recite an additional element.
Therefore, claim 4 is ineligible.
With respect to claim 5:
Step 2A Prong 1: claim 5, which incorporates the rejection of claim 4, recites an additional abstract idea:
applying source entity constraints of one or more of the plurality of source entities to the ranked order to create a filtered ranked order of the identified multiple target entities. (This is an abstract idea of a "Mental Process." The "applying" and "creating" steps, under their broadest reasonable interpretation, cover concepts that can be practically performed in the human mind. The creation could be made manually by an individual.)
Step 2A Prong 2: claim 5 does not recite any additional elements; thus, the judicial exception is not integrated into a practical application.
Step 2B: claim 5 does not recite an additional element.
Therefore, claim 5 is ineligible.
With respect to claim 6:
Step 2A Prong 1: claim 6, which incorporates the rejection of claim 1, does not recite an additional abstract idea.
Step 2A Prong 2: The judicial exception is not integrated into a practical application.
the recommendation data for each prediction includes a prediction explanation, and the consolidated prediction includes a consolidated prediction explanation. (this limitation amounts to adding insignificant extra-solution activity to the judicial exception).
Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The additional element “the recommendation data …” adds insignificant extra-solution activity to the judicial exception and cannot provide an inventive concept.
Therefore, claim 6 is ineligible.
With respect to claim 7:
Step 2A Prong 1: claim 7, which incorporates the rejection of claim 1, does not recite an additional abstract idea.
Step 2A Prong 2: The judicial exception is not integrated into a practical application.
the prediction for each source entity includes a weight, and wherein the determining the consolidated prediction is further based on the weights of the predictions for each source entity. (this limitation merely limits the judicial exception to a particular field of use.)
Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The additional element merely limits the judicial exception to a particular field of use and also cannot provide an inventive concept (MPEP 2106.05(h)).
Therefore, claim 7 is ineligible.
With respect to claim 8:
Step 2A Prong 1: claim 8, which incorporates the rejection of claim 1, recites additional abstract ideas:
fusing a new source entity into the knowledge graph by linking the new source entity to one or more of the plurality of attribute entities to produce a fused knowledge graph (This is an abstract idea of a "Mental Process." The "fusing" step, under its broadest reasonable interpretation, covers concepts that can be practically performed by a human using pen and paper.)
updating the step of determining a prediction for each source entity and the new source entity using the fused knowledge graph, and (This is an abstract idea of a "Mental Process." The "updating" step, under its broadest reasonable interpretation, covers concepts that can be practically performed in the human mind. The update could be made manually by an individual.)
updating the step of determining a consolidated prediction for the plurality of source entities and the new source entity. (This is an abstract idea of a "Mental Process." The "updating" step, under its broadest reasonable interpretation, covers concepts that can be practically performed in the human mind. The update could be made manually by an individual.)
Step 2A Prong 2: claim 8 does not recite any additional elements; thus, the judicial exception is not integrated into a practical application.
Step 2B: claim 8 does not recite an additional element.
Therefore, claim 8 is ineligible.
With respect to claim 9:
The claim recites limitations similar to those of claim 1. Therefore, the subject matter eligibility analysis applied to claim 1, as described above, is equally applicable to claim 9. Accordingly, claim 9 is ineligible.
With respect to claim 10:
The claim recites limitations similar to those of claim 8. Therefore, the subject matter eligibility analysis applied to claim 8, as described above, is equally applicable to claim 10. Accordingly, claim 10 is ineligible.
With respect to claim 11:
The claim recites limitations similar to those of claim 2. Therefore, the subject matter eligibility analysis applied to claim 2, as described above, is equally applicable to claim 11. Accordingly, claim 11 is ineligible.
With respect to claim 12:
The claim recites limitations similar to those of claims 4 and 5. Therefore, the subject matter eligibility analysis applied to claims 4 and 5, as described above, is equally applicable to claim 12. Accordingly, claim 12 is ineligible.
With respect to claim 13:
The claim recites limitations similar to those of claim 6. Therefore, the subject matter eligibility analysis applied to claim 6, as described above, is equally applicable to claim 13. Accordingly, claim 13 is ineligible.
With respect to claim 14:
The claim recites limitations similar to those of claim 7. Therefore, the subject matter eligibility analysis applied to claim 7, as described above, is equally applicable to claim 14. Accordingly, claim 14 is ineligible.
With respect to claim 15:
The claim recites limitations similar to those of claim 1. Therefore, the subject matter eligibility analysis applied to claim 1, as described above, is equally applicable to claim 15. Accordingly, claim 15 is ineligible.
With respect to claim 16:
Step 2A Prong 1: claim 16, which incorporates the rejection of claim 1, does not recite an additional abstract idea.
Step 2A Prong 2: The judicial exception is not integrated into a practical application.
prior to determining the prediction for each source entity, training the prediction model using the prediction training data including the historical data. (This amounts to no more than mere instructions to “apply” the exception using a generic computer component.)
Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The additional element is recited at a generic level and represents a generic computer component used to apply the abstract idea. Mere instructions to apply an exception cannot provide an inventive concept (MPEP 2106.05(f)).
Therefore, claim 16 is ineligible.
With respect to claim 17:
Step 2A Prong 1: claim 17, which incorporates the rejection of claim 16, does not recite an additional abstract idea.
Step 2A Prong 2: The judicial exception is not integrated into a practical application.
subsequent to training the prediction model, training the consolidation learning model using the historical data, one or more constraints, and the training recommendation data that is output from the trained prediction model. (This amounts to no more than mere instructions to “apply” the exception using a generic computer component.)
Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The additional element is recited at a generic level and represents a generic computer component used to apply the abstract idea. Mere instructions to apply an exception cannot provide an inventive concept (MPEP 2106.05(f)).
Therefore, claim 17 is ineligible.
With respect to claim 18:
Step 2A Prong 1: claim 18, which incorporates the rejection of claim 1, does not recite an additional abstract idea.
Step 2A Prong 2: The judicial exception is not integrated into a practical application.
the consolidation model comprises an input layer, a representation layer, and a joint representation layer. (This amounts to no more than mere instructions to “apply” the exception using a generic computer component.)
Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The additional element is recited at a generic level and represents a generic computer component used to apply the abstract idea. Mere instructions to apply an exception cannot provide an inventive concept (MPEP 2106.05(f)).
Therefore, claim 18 is ineligible.
With respect to claim 19:
Step 2A Prong 1: claim 19, which incorporates the rejection of claim 18, recites an additional abstract idea:
determining the consolidated prediction that identifies the target entity that maximizes the joint probability of the plurality of source entities based on the joint representation. (This is an abstract idea of a "Mental Process." The "determining" step, under its broadest reasonable interpretation, covers concepts that can be practically performed in the human mind. The determination could be made manually by an individual.)
Step 2A Prong 2: The judicial exception is not integrated into a practical application.
processing the recommendation data from the prediction learning model, preference data, and graph data associated with the knowledge graph using the input layer, the representation layer, and the joint representation layer to compute a joint representation; and (this limitation amounts to adding insignificant extra-solution activity to the judicial exception).
Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The additional element adds insignificant extra-solution activity to the judicial exception and cannot provide an inventive concept. Storing and retrieving information in memory is a well-understood, routine, and conventional activity (MPEP 2106.05(d)(II)(iv)).
Therefore, claim 19 is ineligible.
With respect to claim 20:
Step 2A Prong 1: claim 20, which incorporates the rejection of claim 19, recites an additional abstract idea:
determining a ranked list of one or more recommendations for the plurality of source entities; (This is an abstract idea of a "Mental Process." The "determining" step, under its broadest reasonable interpretation, covers concepts that can be practically performed in the human mind. The determination could be made manually by an individual.)
applying constraints to the ranked list to create a filtered ranking; and (This is an abstract idea of a "Mental Process." The "applying constraints" step, under its broadest reasonable interpretation, covers concepts that can be practically performed in the human mind. The constraints could be applied manually by an individual.)
Step 2A Prong 2: The judicial exception is not integrated into a practical application.
outputting the filtered ranking. (this limitation amounts to adding insignificant extra-solution activity to the judicial exception).
Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The additional element adds insignificant extra-solution activity to the judicial exception and cannot provide an inventive concept. Storing and retrieving information in memory is a well-understood, routine, and conventional activity (MPEP 2106.05(d)(II)(iv)).
Therefore, claim 20 is ineligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (NPL: “KGAT: Knowledge Graph Attention Network for Recommendation” (Published: 2019)) in view of Cao (NPL: “Attentive Group Recommendation” (Published 2018)).
Regarding claim 1, Wang teaches:
receiving a knowledge graph including a plurality of source entities, a plurality of target entities and a plurality of attribute entities, wherein each source entity is linked to one or more of the plurality of attribute entities, and each target entity is linked to one or more of the plurality of attribute entities; (Figure 1. Section 1 Introduction paragraph 2 describes the knowledge graph where “users” are the source entity, “Entities” are the target entities and “Items” are the attribute entities).
determining, using a prediction learning model, a prediction for each source entity based on the knowledge graph, the prediction learning model having been trained using prediction training data including historical data, wherein the prediction for each source includes recommendation data identifying one or multiple target entities; and (Page 3 Task Description Output: “a prediction function that predicts the probability y_ui that user u would adopt item i.” and Page 6 First Col: “For each dataset, we randomly select 80% of interaction history of each user to constitute the training set, and treat the remaining as the test set.” )
Wang does not teach A computer-implemented method of consolidating recommendations based on a plurality of individual recommendations, the method being implemented in one or more processors connected to a memory, or determining, using a consolidation learning model, a consolidated prediction for the plurality of source entities based on the prediction for each source entity, the consolidation learning model having been trained using consolidation training data including the historical data and recommendation data that is output from the prediction learning model during training, wherein the consolidated prediction identifies a target entity that maximizes a joint probability of the plurality of source entities. However, Cao teaches:
A computer-implemented method of consolidating recommendations based on a plurality of individual recommendations, the method being implemented in one or more processors connected to a memory (Abstract)
determining, using a consolidation learning model, a consolidated prediction for the plurality of source entities based on the prediction for each source entity, the consolidation learning model having been trained using consolidation training data including the historical data and recommendation data that is output from the prediction learning model during training, wherein the consolidated prediction identifies a target entity that maximizes a joint probability of the plurality of source entities. (Section 2. Methods first describes their AGREE model which is a group recommendation model. In Section 2.2.2 method they describe their group embedding which is based on the aggregation of user embeddings. In the same section “embedding u_t encodes the member user’s historical preference” and finally in Section 3 experiments “we first run the NCF method to predict the individual preference scores, and then apply the aggregation strategy to get the group preference score.”)
Wang and Cao are considered analogous art to the claimed invention because they are in the same field of endeavor, namely predictive recommendation systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the knowledge graph and individual recommendations of Wang with the group recommendation of Cao. One of ordinary skill would have been motivated to do so in order to achieve group recommendations over knowledge graphs.
Regarding claim 2, Wang in view of Cao teaches claim 1 as outlined above. Wang further teaches:
one or more of the plurality of source entities are linked to one or more other source entities and/or one or more target entities. (Figure 1. Section 1 Introduction paragraph 2 describes the knowledge graph where “users” are the source entity, “Entities” are the target entities and “Items” are the attribute entities and page 3 paragraph label Collaborative Knowledge Graph).
Regarding claim 3, Wang in view of Cao teaches claim 2 as outlined above. Wang further teaches:
the determining, using the prediction learning model, the prediction for each source entity includes learning vector representations of the knowledge graph (Page 3 Methodology: “Figure 2 shows the model framework, which consists of three main components: 1) embedding layer, which parameterizes each node as a vector by preserving the structure of CKG [collaborative knowledge graph]”).
Regarding claim 4, Wang in view of Cao teaches claim 1 as outlined above. Cao further teaches:
the consolidated prediction identifies multiple target entities in a ranked order. (The predictions are scores, implying they can be placed in a ranked order. Also, in Section 3 Experiments, Cao describes examining the top-K recommendations and ranking some of the predictions.)
Regarding claim 5, Wang in view of Cao teaches claim 4 as outlined above. Cao further teaches:
applying source entity constraints of one or more of the plurality of source entities to the ranked order to create a filtered ranked order of the identified multiple target entities. (In Section 3.1.2, evaluation protocol, Cao mentions leaving predictions out of the ranking because ranking them all is too time consuming. This implies they can choose which predictions are ranked, creating a filtered ranking.)
Regarding claim 6, Wang in view of Cao teaches claim 1 as outlined above. Wang further teaches:
the recommendation data for each prediction includes a prediction explanation, and the consolidated prediction includes a consolidated prediction explanation. (Page 4 top of 2nd Col: “When performing propagation forward, the attention flow suggests parts of the data to focus on, which can be treated as explanations behind the recommendation”)
Regarding claim 7, Wang in view of Cao teaches claim 1 as outlined above. Cao further teaches:
the prediction for each source entity includes a weight, and wherein the determining the consolidated prediction is further based on the weights of the predictions for each source entity. (Section 2.2.2 Method “User embedding aggregation. We perform a weighted sum on the embeddings of group g_l’s member users,”)
Regarding claim 8, Wang in view of Cao teaches claim 1 as outlined above. Wang further teaches:
fusing a new source entity into the knowledge graph by linking the new source entity to one or more of the plurality of attribute entities to produce a fused knowledge graph, (Figure 1. Section 1 Introduction paragraph 2 describes the knowledge graph where “users” are the source entity, “Entities” are the target entities and “Items” are the attribute entities and page 3 paragraph label Collaborative Knowledge Graph).
updating the step of determining a prediction for each source entity and the new source entity using the fused knowledge graph, and updating the step of determining a consolidated prediction for the plurality of source entities and the new source entity. (Section 1 Introduction: “recursive embedding propagation, which updates a node’s embedding based on the embeddings of its neighbors,” implying that the model is updated recursively as new information is added to the knowledge graph.)
Regarding claim 9, Wang teaches:
receiving a knowledge graph including a plurality of source entities, a plurality of target entities and a plurality of attribute entities, wherein each source entity is linked to one or more of the plurality of attribute entities, and each target entity is linked to one or more of the plurality of attribute entities; (Figure 1. Section 1 Introduction paragraph 2 describes the knowledge graph where “users” are the source entity, “Entities” are the target entities and “Items” are the attribute entities).
determining, using a prediction learning model, a prediction for each source entity based on the knowledge graph, the prediction learning model having been trained using prediction training data including historical data, wherein the prediction for each source includes recommendation data identifying one or multiple target entities; and (Page 3 Task Description Output: “a prediction function that predicts the probability y_ui that user u would adopt item i.” and Page 6 First Col: “For each dataset, we randomly select 80% of interaction history of each user to constitute the training set, and treat the remaining as the test set.” )
Wang does not teach A system configured for consolidating recommendations based on a plurality of individual recommendations, the system comprising one or more processors, or determining, using a consolidation learning model, a consolidated prediction for the plurality of source entities based on the prediction for each source entity, the consolidation learning model having been trained using consolidation training data including the historical data and recommendation data that is output from the prediction learning model during training, wherein the consolidated prediction identifies a target entity that maximizes a joint probability of the plurality of source entities. However, Cao teaches:
A system configured for consolidating recommendations based on a plurality of individual recommendations, the system comprising one or more processors (Abstract).
determining, using a consolidation learning model, a consolidated prediction for the plurality of source entities based on the prediction for each source entity, the consolidation learning model having been trained using consolidation training data including the historical data and recommendation data that is output from the prediction learning model during training, wherein the consolidated prediction identifies a target entity that maximizes a joint probability of the plurality of source entities. (Section 2. Methods first describes their AGREE model which is a group recommendation model. In Section 2.2.2 method they describe their group embedding which is based on the aggregation of user embeddings. In the same section “embedding u_t encodes the member user’s historical preference” and finally in Section 3 experiments “we first run the NCF method to predict the individual preference scores, and then apply the aggregation strategy to get the group preference score.”)
Wang and Cao are considered analogous art to the claimed invention because they are in the same field of endeavor, namely predictive recommendation systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the knowledge graph and individual recommendations of Wang with the group recommendation of Cao. One of ordinary skill would have been motivated to do so in order to achieve group recommendations over knowledge graphs.
Regarding claim 10, Wang in view of Cao teaches claim 9 as outlined above. Wang further teaches:
fusing a new source entity into the knowledge graph by linking the new source entity to one or more of the plurality of attribute entities to produce a fused knowledge graph, (Figure 1. Section 1 Introduction paragraph 2 describes the knowledge graph where “users” are the source entity, “Entities” are the target entities and “Items” are the attribute entities and page 3 paragraph label Collaborative Knowledge Graph).
updating the step of determining a prediction for each source entity and the new source entity using the fused knowledge graph, and updating the step of determining a consolidated prediction for the plurality of source entities and the new source entity. (Section 1 (Introduction): "recursive embedding propagation, which updates a node's embedding based on the embeddings of its neighbors," implying that the model is updated recursively as new information is added to the knowledge graph.)
Regarding claim 11, Wang in view of Cao teaches claim 9 as outlined above. Wang further teaches:
one or more of the plurality of source entities are linked to one or more other source entities and/or one or more target entities. (Figure 1. Section 1 (Introduction), paragraph 2, describes the knowledge graph, where "users" are the source entities, "Entities" are the target entities, and "Items" are the attribute entities; see also page 3, paragraph labeled "Collaborative Knowledge Graph").
Regarding claim 12, Wang in view of Cao teaches claim 9 as outlined above. Cao further teaches:
the consolidated prediction identifies multiple target entities in a ranked order. (The predictions are scores, implying they can be given a ranked ordering. Also, in Section 3 (Experiments) they describe looking at the top-K recommendations and ranking some of the predictions.)
applying source entity constraints of one or more of the plurality of source entities to the ranked order to create a filtered ranked order of the identified multiple target entities. (In Section 3.1.2 (Evaluation Protocol) they mention leaving some predictions out of the ranking because ranking all of them is too time consuming. This implies they can choose which predictions to rank, yielding a filtered ranking.)
Regarding claim 13, Wang in view of Cao teaches claim 9 as outlined above. Wang further teaches:
the recommendation data for each prediction includes a prediction explanation, and the consolidated prediction includes a consolidated prediction explanation. (Page 4 top of 2nd Col: “When performing propagation forward, the attention flow suggests parts of the data to focus on, which can be treated as explanations behind the recommendation”)
Regarding claim 14, Wang in view of Cao teaches claim 9 as outlined above. Cao further teaches:
the prediction for each source entity includes a weight, and wherein the determining a consolidated prediction is further based on the weights of the predictions for each source entity. (Section 2.2.2 Method “User embedding aggregation. We perform a weighted sum on the embeddings of group g_l’s member users,”)
Regarding claim 15, Wang teaches:
receiving a knowledge graph including a plurality of source entities, a plurality of target entities and a plurality of attribute entities, wherein each source entity is linked to one or more of the plurality of attribute entities, and each target entity is linked to one or more of the plurality of attribute entities; (Figure 1. Section 1 Introduction paragraph 2 describes the knowledge graph where “users” are the source entity, “Entities” are the target entities and “Items” are the attribute entities).
determining, using a prediction learning model, a prediction for each source entity based on the knowledge graph, the prediction learning model having been trained using prediction training data including historical data, wherein the prediction for each source includes recommendation data identifying one or multiple target entities; and (Page 3 Task Description Output: “a prediction function that predicts the probability y_ui that user u would adopt item i.” and Page 6 First Col: “For each dataset, we randomly select 80% of interaction history of each user to constitute the training set, and treat the remaining as the test set.” )
Wang does not teach a tangible, non-transitory computer-readable medium having instructions thereon which, upon being executed by one or more processors, or determining, using a consolidation learning model, a consolidated prediction for the plurality of source entities based on the prediction for each source entity, the consolidation learning model having been trained using consolidation training data including the historical data and recommendation data that is output from the prediction learning model during training, wherein the consolidated prediction identifies a target entity that maximizes a joint probability of the plurality of source entities. However, Cao does:
A tangible, non-transitory computer-readable medium having instructions thereon which, upon being executed by one or more processors (In Section 3 (Experiments) they describe running simulations. Running these simulations would require some sort of computer-readable medium with processors).
determining, using a consolidation learning model, a consolidated prediction for the plurality of source entities based on the prediction for each source entity, the consolidation learning model having been trained using consolidation training data including the historical data and recommendation data that is output from the prediction learning model during training, wherein the consolidated prediction identifies a target entity that maximizes a joint probability of the plurality of source entities. (Section 2 (Methods) first describes their AGREE model, which is a group recommendation model. Section 2.2.2 describes their group embedding, which is based on the aggregation of user embeddings. In the same section, "embedding u_t encodes the member user's historical preference," and finally in Section 3 (Experiments), "we first run the NCF method to predict the individual preference scores, and then apply the aggregation strategy to get the group preference score.")
Wang and Cao are considered analogous art to the claimed invention because they are in the same field of endeavor, namely predictive systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the knowledge graph and individual recommendation of Wang with the group recommendation of Cao. One would want to do this to achieve group recommendation on knowledge graphs.
Regarding claim 16, Wang in view of Cao teaches claim 15 as outlined above. Cao further teaches:
prior to determining the prediction for each source entity, training the prediction model using the prediction training data including the historical data. (Section 2 (Task Formulation): "In a recommendation scenario, we typically have historical user-item interactions (e.g., purchases and clicks)," and Section 3 (Methodology) describes how they train the model).
Regarding claim 17, Wang in view of Cao teaches claim 16 as outlined above. Cao further teaches:
subsequent to training the prediction model, training the consolidation learning model using the historical data, one or more constraints, and the training recommendation data that is output from the trained prediction model. (Section 1 Introduction “we propose to learn the aggregation strategy from the historical data of group-item interactions”).
Regarding claim 18, Wang in view of Cao teaches claim 15 as outlined above. Cao further teaches:
the consolidation model comprises an input layer, a representation layer, and a joint representation layer. (Figure 2 shows the layers of the model).
Regarding claim 19, Wang in view of Cao teaches claim 18 as outlined above. Cao further teaches:
processing the recommendation data from the prediction learning model, preference data, and graph data associated with the knowledge graph using the input layer, the representation layer, and the joint representation layer to compute a joint representation; and (Figure 2 shows the layers of the model. Instead of using Cao's vector representation, one could substitute Wang's knowledge graph vector representation.)
determining the consolidated prediction that identifies the target entity that maximizes the joint probability of the plurality of source entities based on the joint representation. (Section 2.2.2 Method “We perform a weighted sum on the embeddings of group g_l ’s member users, where the coefficient α(j,t) is a learnable parameter denoting the influence of member user u_t in deciding the group’s choice on item v_j . Intuitively, if a user has more expertise on an item (or items of the similar type), she should have a larger influence on the group’s choice on the item”)
Regarding claim 20, Wang in view of Cao teaches claim 15 as outlined above. Cao further teaches:
determining a ranked list of one or more recommendations for the plurality of source entities; (The predictions are scores, implying they can be given a ranked ordering. Also, in Section 3 (Experiments) they describe looking at the top-K recommendations and ranking some of the predictions.)
applying constraints to the ranked list to create a filtered ranking; and outputting the filtered ranking. (In Section 3.1.2 (Evaluation Protocol) they mention leaving some predictions out of the ranking because ranking all of them is too time consuming. This implies they can choose which predictions to rank, yielding a filtered ranking.)
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL PATRICK GRUSZKA whose telephone number is (571)272-5259. The examiner can normally be reached M-F 9:00 AM - 6:00 PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li Zhen can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL GRUSZKA/Examiner, Art Unit 2121
/Li B. Zhen/Supervisory Patent Examiner, Art Unit 2121