Prosecution Insights
Last updated: April 19, 2026
Application No. 18/176,891

SYSTEMS AND METHODS FOR PREDICTIONS USING A KNOWLEDGE GRAPH

Final Rejection (§103)
Filed: Mar 01, 2023
Examiner: HWANG, MEGAN ELIZABETH
Art Unit: 2143
Tech Center: 2100 — Computer Architecture & Software
Assignee: Verizon Patent and Licensing Inc.
OA Round: 2 (Final)
Grant Probability: 47% (Moderate)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 47% (9 granted / 19 resolved; -7.6% vs TC avg)
Interview Lift: +60.2% among resolved cases with interview (strong)
Avg Prosecution: 3y 0m (typical timeline)
Total Applications: 44 across all art units (25 currently pending)

Statute-Specific Performance

§101: 34.9% (-5.1% vs TC avg)
§103: 41.0% (+1.0% vs TC avg)
§102: 7.4% (-32.6% vs TC avg)
§112: 15.3% (-24.7% vs TC avg)
Comparison values are Tech Center average estimates; based on career data from 19 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-8, 10-17 and 19-22 are pending. Claims 9 and 18 have been canceled. Claims 21 and 22 are new. This Office Action is responsive to the amendment filed on 01/23/2026, which has been entered into the above identified application.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 10-11, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Martineau et al. (US 20190392330 A1, published 12/26/2019), hereinafter Martineau; in view of Du et al. (“MetaKG: Meta-Learning on Knowledge Graph for Cold-Start Recommendation”, published 04/19/2022), hereinafter Du; in further view of Zhang et al. (“Enhancing Session-Based Recommendation with Global Context Information and Knowledge Graph”, published 04/08/2022), hereinafter Zhang.
Martineau was cited in the previous Office Action.

Regarding Claim 1, Martineau teaches A method comprising: receiving, by a device, real-time data associated with a prediction usage system (Martineau: “As shown in FIG. 2, the architecture 200 receives input data 202, which in this example includes at least one stream of information about users and items, as well as text about the items.” [0058]);

determining, by the device, one or more features associated with the received real-time data (Martineau: “At least some of the input data 202 is provided to an aspect discovery function 204. An “aspect” generally refers to a coherent concept related to a type, feature, or other characteristic of an item.” [0059]; “The aspect discovery function 204 processes the received input data 202 in order to automatically identify different aspects of various items and to identify the significances of those aspects. For example, the aspect discovery function 204 can process item descriptions (such as product specifications) to identify characteristics of the items. The aspect discovery function 204 can also process user reviews of items to identify which characteristics of the items tend to be more or less important to users.” [0060]);

selecting, by the device, one or more relevant features, associated with a plurality of prediction output classes, based on the determined one or more features, using a knowledge graph for the plurality of prediction output classes (Martineau: “A graph representation function 208 uses (among other things) the aspects identified by the aspect discovery function 204 and the relationships identified by the aspect linkage function 206 to generate or update one or more knowledge graphs. Knowledge graphs can be used to encode diverse knowledge about users, items, item aspects, item properties (which can be distinct from their aspects), and other information in order to make recommendations that are both accurate and potentially explainable.
Knowledge graphs offer a convenient framework to capture relationships between users, items, aspects, item properties, and other information, making knowledge graphs very well-suited to perform hybrid-based recommendations that combine collaborative filtering (which models user interests on a population level) and content-based filtering (which models user interests in items based on similarities between items). Item properties can include things like an item's price or color.” [0064]; “A shared vector space representation function 210 uses the one or more knowledge graphs generated by the graph representation function 208 to create a graph embedding within a shared vector space. For example, the shared vector space representation function 210 could traverse the knowledge graph and identify, for each node in the knowledge graph, the neighboring node or nodes. The shared vector space representation function 210 can then create a vector space identifying how the nodes in the knowledge graph are connected. In this way, the vector space identifies various users 212 and various items 214 contained in the knowledge graph while allowing some of the information in the knowledge graph to be concealed.” [0067]);

providing, by the device, the one or more relevant features as input to a prediction system for the plurality of prediction output classes (Martineau: “A recommendation engine 222 receives inputs from the shared vector space or the mimicked vector space and generates recommendations for users based on the inputs. For example, the recommendation engine 222 can receive as inputs users 212, 218 paired with items 214, 220 in either of the vector spaces.
The recommendation engine 222 processes this information to produce scores or other indicators identifying recommendations of one or more of the items for one or more of the users.” [0070]);

obtaining, by the device, a prediction associated with the plurality of prediction output classes from the prediction system based on the provided one or more relevant features as input (Martineau: “In some embodiments, the suggestion engine 308 can interact with the recommendation engine 222, which uses the identification of a particular user 312 and an identification of one or more items 314 to determine the particular user's preference for the identified items 314. The recommendation engine 222 can perform this function using the pre-computed data or in real-time. The diversity engine 310 determines the final ordering of recommended items for the user and helps to ensure that there is diversity (variety) in the items being recommended to the user if possible.” [0075]);

and providing, by the device, the obtained prediction to the prediction usage system (Martineau: “The placement engine 306 can then generate a graphical user interface identifying the recommended items for the user or otherwise identify the recommended items to the user.” [0075]).

However, Martineau fails to expressly disclose wherein the knowledge graph comprises a default knowledge graph that does not include user segment nodes; determining, by the device, whether the real-time data does not include user features or is missing particular user features; when it is determined that the real-time data does not include user features or is missing particular user features: searching, by the device, the default knowledge graph to match features in the default knowledge graph with features in the real-time data, and selecting, by the device, as the one or more features, top N matching features from the default knowledge graph based on a hierarchy of features in the default knowledge graph.
In the same field of endeavor, Zhang teaches wherein the knowledge graph comprises a default knowledge graph that does not include user segment nodes (Zhang: “Predicting a user’s next click by utilizing a short anonymous behavior is a challenging problem in the real-life session-based recommendation (SBR). Most existing methods usually learn the users’ preference from current session. However, they seldom consider global context information or knowledge graph and failed to distill high-quality item from similar sessions. In this work, we combine Global Context information with Knowledge Graph, and develop a new framework to enhance session-based recommendation (GCKG). Technically, we model a global knowledge graph, exploiting a knowledge aware attention mechanism for better learning item embeddings.” [Abstract]; “Session Sequence: let S = [s_1, s_2, …, s_|S|] denote a set of sessions over an item set V = {v_1, v_2, …, v_|V|}. An anonymous session s_t = [v_1^t, v_2^t, …, v_n^t] ∈ S is a sequence of items ordered by timestamps, where v_j^t ∈ V is the j-th clicked item and n is the length of session s_t, which may contain duplicated items. Global Graph: let G_G = {(v_{i-1}, transition, v_i) | v_{i-1}, v_i ∈ V} be the Global Graph (GG), where the item set V contains all distinct items appearing in S, and transition means a time sequence relation from item v_{i-1} to item v_i in any session of S. When processing the individual session s_t, we need to sample a Session Graph (SG) from the GG, G_{s_t} = (V_{s_t}, E_{s_t}), where V_{s_t} contains the unique items in s_t and edge set E_{s_t} contains an edge (v_{i-1}^t, v_i^t) ∈ E_{s_t} (2 ≤ i ≤ n) if there is a transition from item v_{i-1}^t to item v_i^t in s_t.” [Section 2.
Notations and Problem Statement]); when it is determined that the real-time data does not include user features or is missing particular user features: searching, by the device, the default knowledge graph to match features in the default knowledge graph with features in the real-time data (Zhang: “Session Sequence: let S = [s_1, s_2, …, s_|S|] denote a set of sessions over an item set V = {v_1, v_2, …, v_|V|}. An anonymous session s_t = [v_1^t, v_2^t, …, v_n^t] ∈ S is a sequence of items ordered by timestamps, where v_j^t ∈ V is the j-th clicked item and n is the length of session s_t, which may contain duplicated items. Global Graph: let G_G = {(v_{i-1}, transition, v_i) | v_{i-1}, v_i ∈ V} be the Global Graph (GG), where the item set V contains all distinct items appearing in S, and transition means a time sequence relation from item v_{i-1} to item v_i in any session of S. When processing the individual session s_t, we need to sample a Session Graph (SG) from the GG, G_{s_t} = (V_{s_t}, E_{s_t}), where V_{s_t} contains the unique items in s_t and edge set E_{s_t} contains an edge (v_{i-1}^t, v_i^t) ∈ E_{s_t} (2 ≤ i ≤ n) if there is a transition from item v_{i-1}^t to item v_i^t in s_t.” [Section 2. Notations and Problem Statement]; “First, a global knowledge graph GGK is constructed by a global graph GG and a knowledge graph GK, then GCKG learns item correlations from global knowledge graph by a knowledge-aware attention mechanism and encode them into item representations. Next, a GRU and an attention net is utilized to learn the session embedding (Sect. 3.1).” [Section 3. Method]; “Generating Session Embeddings. Although the item embeddings capture the global context information in all sessions and the item knowledge, it does not capture the session-specific context information.
Therefore, It is necessary to preserve the original session s_t’s information and capture the user’s current preferences by learning session-level embedding. Specifically, given a target session s_t = [v_1^t, v_2^t, …, v_n^t], session embedding learning involves two tasks: (1) Perform an operation called embedding lookup to extract the s_t-specific embedding matrix from the item knowledge embedding matrix I, I_s = [v_1, v_2, …, v_n], where I_s ∈ R^{n×Ld} and v_k ∈ R^{Ld} is the knowledge embedding of the k-th item in session s_t.” [Section 3.1 Global Knowledge Graph]), and selecting, by the device, as the one or more features, top N matching features from the default knowledge graph based on a hierarchy of features in the default knowledge graph (Zhang: “Problem Statement: The goal of our model is to take all session sequences S and knowledge graph as input, given a target session s_t = [v_1^t, v_2^t, …, v_n^t], and returns a list of top-N candidate items to be consumed as the next one v_{n+1}^t.” [Section 2. Notations and Problem Statement]; “Intuitively, the closer an item is to the preference of the current session, the more important it is to the recommendation. After obtaining the embedding of each session, we compute the score ŷ_{s_t,v} for each candidate item v_k ∈ V by concatenating its embedding v_k and session representation s_t^{current}, s_t^{influence}.” [Section 3.3 Making Recommendation and Model Training]).
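For orientation, the disputed fallback behavior — when incoming real-time data lacks user features, search a default knowledge graph and keep the top N matching features according to the graph's feature hierarchy — can be sketched in a few lines. This is a minimal illustration only; the graph structure, feature names, and "lower level = higher in hierarchy" ranking rule are all hypothetical and are not taken from the application or the cited references.

```python
# Hypothetical sketch of the claimed default-knowledge-graph fallback.
# Nothing here comes from the application or the cited art; all names
# and structures are invented for illustration.

def select_default_features(default_kg, realtime_features, n=3):
    """When real-time data lacks user features, match the remaining
    features against the default knowledge graph and keep the top-N
    matches ranked by hierarchy position (lower level = higher rank)."""
    matches = [
        (node, meta["level"])
        for node, meta in default_kg.items()
        if node in realtime_features
    ]
    matches.sort(key=lambda pair: pair[1])
    return [node for node, _ in matches[:n]]

# A toy default graph: feature name -> position in a feature hierarchy.
default_kg = {
    "device_type": {"level": 0},
    "region": {"level": 1},
    "time_of_day": {"level": 2},
    "session_length": {"level": 3},
}

# Real-time data with no user features present.
realtime = {"time_of_day", "region", "session_length"}
print(select_default_features(default_kg, realtime, n=2))
# -> ['region', 'time_of_day']
```

The point of contention in the rejection is whether Zhang's global graph, sampled per anonymous session, discloses this kind of hierarchy-ranked top-N selection; the sketch above only illustrates the claim language itself.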
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated wherein the knowledge graph comprises a default knowledge graph that does not include user segment nodes; when it is determined that the real-time data does not include user features or is missing particular user features: searching, by the device, the default knowledge graph to match features in the default knowledge graph with features in the real-time data, and selecting, by the device, as the one or more features, top N matching features from the default knowledge graph based on a hierarchy of features in the default knowledge graph, as taught by Zhang to the method of Martineau because both of these methods are directed towards making recommendations using knowledge graphs and real-time input data. In making this combination and implementing a default/global knowledge graph to determine matching features when the real-time input is lacking user features, it would allow the method of Martineau to “[predict] the next item, based on an anonymous user clicking sequences within one visit” while “utiliz[ing] global context information and item knowledge for alleviating the data sparsity issue and filtering noisy preference signals” (Zhang: [Section 1. Introduction]).

Martineau and Zhang still fail to explicitly disclose determining, by the device, whether the real-time data does not include user features or is missing particular user features.

In the same field of endeavor, Du teaches determining, by the device, whether the real-time data does not include user features or is missing particular user features (Du: “Problem Statement. Given a collaborative knowledge graph G that combines the user-item bipartite graph G_u and the knowledge graph G_k, we aim to predict the unknown probability (i.e., user preference) p_{ui} from user u to item i, where r_{u',i} ∉ R′.
Specifically, if u is a new user with only a small number of interactions, i.e., |r_{u',i} ∉ R′; i′ = i| is small, it is known as the user cold-start problem (UC).” [Section 4. Problem Formulation]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated determining, by the device, whether the real-time data does not include user features or is missing particular user features, as taught by Du to the method of Martineau and Zhang because both of these methods are directed towards accounting for a user cold-start scenario in knowledge graph-based recommendation. Zhang assumes that all its inputs lack user information. In making this combination and utilizing the presence of user features as a condition, it would allow Martineau and Zhang to account for user preference without relying on them to “effectively derive prior collaborative signals and knowledge associations within and across different user preference learning tasks to support generic and accurate recommendations” (Du: [Section 2.1 Knowledge Graph Based Recommendation]).

Regarding Claim 10, Martineau, Zhang and Du teach the method of Claim 1, further comprising: updating the knowledge graph at particular intervals (Martineau: “A mimic network builder function 216 can be used to duplicate the shared vector space and to insert new users, items, aspects, or other information into the shared vector space. This allows the new users, items, aspects, or other information to be added to the vector space more rapidly, rather than requiring the new information to be added as new nodes to the knowledge graph and then rebuilding the vector space (although this could be done periodically or at any other suitable times).” [0068]).

Regarding Claims 11, 19 and 20, they are device and non-transitory computer-readable memory device claims that correspond to Claims 1 and 10.
Therefore, they are rejected for the same reasons as Claims 1 and 10 above.

Claims 2-4, 12-14 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Martineau in view of Zhang and Du as applied to Claims 1 and 11 above, in further view of Li et al. (“Towards purchase prediction: A transaction-based setting and a graph-based method leveraging price information”, published 01/22/2021), hereinafter Li. Li was cited in the previous Office Action.

Regarding Claim 2, Martineau, Zhang and Du teach the method of Claim 1, wherein the prediction output classes include a plurality of products, the method further comprising: generating the knowledge graph for the plurality of products based on a plurality of product features and a plurality of user features (Martineau: “The one or more knowledge graphs generally include nodes that represent the items, the identified aspects of those items, and the users.” [0065]).

However, they fail to expressly disclose generating the knowledge graph for the plurality of products based on a plurality of time features.

In the same field of endeavor, Li teaches generating the knowledge graph for the plurality of products based on a plurality of time features (Li: “Each user or item is a vertex of the graph G, thus we have (n + m) nodes on the graph. All transaction records of users in U on items in I are denoted by T. Each transaction record t<u,i> stands for user u buying item i, additionally with a timestamp T_{t<u,i>} and a tuple with six elements encoding price and discount information” [Section 3. Problem statement and formulation]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated generating the knowledge graph for the plurality of products based on a plurality of time features, as taught by Li to the method of Martineau, Zhang and Du because both of these methods are directed towards making predictions/recommendations based on historical transaction data represented as a knowledge graph. In making this combination and including time features in the knowledge graph, it would allow the method of Martineau, Zhang and Du to “leverage time information to build [a] session-based sequence aware recommender system” (Li: [Section 2. Related Work]).

Regarding Claim 3, Martineau, Zhang, Du and Li teach the method of Claim 2, wherein generating the knowledge graph includes: obtaining training data that includes historical purchasing data associated with the plurality of products (Martineau: “the input data 202 can include information identifying different users associated with a particular company and different items that are liked, purchased, viewed, reviewed, or otherwise used by or associated with the users in some manner.” [0058]).

Regarding Claim 4, Martineau, Zhang, Du and Li teach the method of Claim 3, further comprising: generating a regression forecasting model using the obtained training data (Martineau: “The shared vector space representation function 210 represents any suitable algorithm that can generate vector spaces identifying users and items based on knowledge graphs. In some embodiments, the mimic network builder function 216 uses deep learning to train a neural network, and the neural network processes information associated with users or items (and the items' aspects) in order to update the mimicked vector space.” [0069]; “The embeddings extracted in this manner or any other suitable manner are used as inputs to a deep learning model.
One specific example network architecture could model each property input independently through a deep neural network. This may allow the network to learn a more fine-grained weighting of each input and thus improve the overall recommendation result. Another specific example network architecture could concatenate all property inputs into one larger input that is fed into a deep neural network. However designed, the deep neural network can be set up as a regression that models a user's rating on a particular item.” [0170]), wherein the regression forecasting model relates the plurality of product features, the plurality of user features, and the plurality of time features to product values associated with the plurality of products (Li: “we propose a two-step graph-based model, where the graph model is applied in the first step to learn representations of both users and items over click-through data, and the second step is a classifier incorporating the price information of each transaction record.” [Abstract]).

Regarding Claims 12-14 and 21, they are device and non-transitory computer-readable memory claims that correspond to Claims 2-4. Therefore, they are rejected for the same reasons as Claims 2-4 above.

Claims 5-6, 15-16 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Martineau in view of Zhang, Du and Li, as applied to Claims 4 and 14 above, in further view of Lundberg et al. (“A Unified Approach to Interpreting Model Predictions”, published 12/09/2017), hereinafter Lundberg. Lundberg was cited in the previous Office Action.

Regarding Claim 5, Martineau, Zhang, Du and Li teach the method of Claim 4, further comprising: ranking input features associated with the regression forecasting model (Martineau: “The aspect discovery function 204 can also process user reviews of items to identify which characteristics of the items tend to be more or less important to users.
This allows the aspect discovery function 204 to both identify the aspects of items and identify the relative importance or significances of those aspects.” [0060]; “Depending on the implementation, the aspects could be organized from top-to-bottom or bottom-to-top in order of importance, or other organizations (or random ordering) can be used.” [0147]; “The embeddings extracted in this manner or any other suitable manner are used as inputs to a deep learning model. One specific example network architecture could model each property input independently through a deep neural network. This may allow the network to learn a more fine-grained weighting of each input and thus improve the overall recommendation result. Another specific example network architecture could concatenate all property inputs into one larger input that is fed into a deep neural network. However designed, the deep neural network can be set up as a regression that models a user's rating on a particular item.” [0170]);

selecting a particular number of highest ranked input features (Martineau: “This could include, for example, the processor 120 of the electronic device executing the aspect linkage function 206 to select a specified number of highest-scoring aspect pairs and identifying the aspects in each pair as being related.” [0123]);

and generating the knowledge graph using the selected particular number of highest ranked input features (Martineau: “A graph representation function 208 uses (among other things) the aspects identified by the aspect discovery function 204 and the relationships identified by the aspect linkage function 206 to generate or update one or more knowledge graphs.
Knowledge graphs can be used to encode diverse knowledge about users, items, item aspects, item properties (which can be distinct from their aspects), and other information in order to make recommendations that are both accurate and potentially explainable.” [0064]; “The subjective and quantified aspect relationships can be expressed as weights that are attached to the edges linking aspect nodes in the knowledge graph to other nodes. For example, aspects can be related to items or users by both (i) user sentiments about the aspects and (ii) the importance of the aspects to the items or the users.” [0118]).

However, they fail to expressly disclose using a machine learning interpretability (MLI) model to rank input features associated with the regression forecasting model based on an importance of an output associated with the regression forecasting model.

In the same field of endeavor, Lundberg teaches using a machine learning interpretability (MLI) model to rank input features associated with the regression forecasting model based on an importance of an output associated with the regression forecasting model (Lundberg: “Shapley regression values are feature importances for linear models in the presence of multicollinearity. This method requires retraining the model on all feature subsets S ⊆ F, where F is the set of all features. It assigns an importance value to each feature that represents the effect on the model prediction of including that feature.” [Section 2.4 Classic Shapley Value Estimation]).
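The Shapley-value attribution Lundberg describes can be illustrated with a minimal exact computation: each feature's importance is its weighted average marginal contribution to the model output over all feature subsets. The value function and numbers below are a toy stand-in for "retrain the model on feature subset S and predict" (practical SHAP implementations approximate this sum rather than enumerating every subset).

```python
# Illustrative sketch of classic Shapley regression values as described
# by Lundberg: phi_i = sum over S ⊆ F\{i} of
#   |S|! (|F|-|S|-1)! / |F|!  *  (v(S ∪ {i}) - v(S)).
# The value function v() and its numbers are hypothetical.
from itertools import combinations
from math import factorial

def shapley_values(features, v):
    n = len(features)
    phi = {}
    for i in features:
        rest = [f for f in features if f != i]
        total = 0.0
        for k in range(n):
            for S in combinations(rest, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(frozenset(S) | {i}) - v(frozenset(S)))
        phi[i] = total
    return phi

# Toy value function over two features "a" and "b": v(S) is the model
# output when only the features in S are available.
V = {frozenset(): 0, frozenset({"a"}): 1,
     frozenset({"b"}): 2, frozenset({"a", "b"}): 5}

print(shapley_values(["a", "b"], V.__getitem__))
# -> {'a': 2.0, 'b': 3.0}
```

By construction the importances sum to v(F) - v(∅) = 5, which is the "efficiency" property that makes these values usable as per-feature importance scores for ranking.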
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated using a machine learning interpretability (MLI) model to rank input features associated with the regression forecasting model based on an importance of an output associated with the regression forecasting model, as taught by Lundberg to the method of Martineau, Zhang, Du and Li because both of these methods are directed towards ranking features based on importance for determining influence on the predictions made by a prediction model. In making this combination and using an MLI model to rank the input features, it would allow the method of Martineau, Zhang, Du and Li to make estimations that are “better aligned with human intuition” and that “more effectually discriminate among model output classes” (Lundberg: [Section 1. Introduction]).

Regarding Claim 22, it is a non-transitory computer-readable memory claim that corresponds to Claims 4-5. Therefore, it is rejected for the same reasons as Claims 4-5 above.

Regarding Claim 6, Martineau, Zhang, Du, Li and Lundberg teach the method of Claim 5, wherein the knowledge graph relates the input features to user features, and wherein an edge weight for an edge associated with an input feature is based on an importance score, for the input feature, determined by the MLI model (Martineau: “The subjective and quantified aspect relationships can be expressed as weights that are attached to the edges linking aspect nodes in the knowledge graph to other nodes. For example, aspects can be related to items or users by both (i) user sentiments about the aspects and (ii) the importance of the aspects to the items or the users.” [0118]; “The aspect discovery function 204 processes the received input data 202 in order to automatically identify different aspects of various items and to identify the significances of those aspects.
For example, the aspect discovery function 204 can process item descriptions (such as product specifications) to identify characteristics of the items. The aspect discovery function 204 can also process user reviews of items to identify which characteristics of the items tend to be more or less important to users. This allows the aspect discovery function 204 to both identify the aspects of items and identify the relative importance or significances of those aspects.” [0060]; “The aspect discovery function 204 represents any suitable data mining algorithm or other algorithm that can process input data to identify aspects of items and the significances of those aspects. One example implementation of the aspect discovery function 204 is described below, although other implementations of the aspect discovery function 204 can be used.” [0061]; Lundberg: “Shapley regression values are feature importances for linear models in the presence of multicollinearity. This method requires retraining the model on all feature subsets S ⊆ F, where F is the set of all features. It assigns an importance value to each feature that represents the effect on the model prediction of including that feature.” [Section 2.4 Classic Shapley Value Estimation]).

Regarding Claims 15 and 16, they are device claims that correspond to the method of Claims 5 and 6. Therefore, they are rejected for the same reasons as Claims 5 and 6 above.

Claims 7-8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Martineau in view of Zhang and Du, as applied to Claims 1 and 11, in further view of Cetintas et al. (US 11113745 B1, published 09/07/2021), hereinafter Cetintas. Cetintas was cited in the previous Office Action.
Regarding Claim 7, Martineau, Zhang and Du teach the method of Claim 1, wherein the knowledge graph relates one or more user features, one or more time features, and one or more product features to user segment nodes (Martineau: “The one or more knowledge graphs generally include nodes that represent the items, the identified aspects of those items, and the users. Edges are used to link associated nodes in the knowledge graph(s).” [0065]).

However, they fail to expressly disclose wherein a particular user segment node identifies a user type.

In the same field of endeavor, Cetintas teaches wherein a particular user segment node identifies a user type (Cetintas: “The user information comprises information about the user (e.g., age, geographic location, gender, etc.), which can be used to generate a multi-dimensional feature vector user representation for the user.” [Col. 3, Lines 63-67]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated wherein a particular user segment node identifies a user type, as taught by Cetintas to the method of Martineau because both of these methods are directed towards recommendation systems that use correlation graphs to map relationships between items and users. In making this combination and tracking the type of user within the user segment, it would allow the method of Martineau to better present relevant information to a user by predicting user behavior through their demographic characteristics (Cetintas: [Col. 14, Lines 47-53]).
Regarding Claim 8, Martineau, Zhang, Du and Cetintas teach the method of Claim 7, wherein the user type is defined by one or more of a purchasing habit, a geographic location, or at least one demographic factor (Cetintas: “The user information comprises information about the user (e.g., age, geographic location, gender, etc.), which can be used to generate a multi-dimensional feature vector user representation for the user.” [Col. 3, Lines 63-67]).

Regarding Claim 17, it is a device claim that corresponds to Claims 7 and 8. Therefore, it is rejected for the same reasons as Claims 7 and 8 above.

Response to Arguments

Examiner acknowledges the Applicant’s amendments to Claims 1, 11 and 20, as well as new Claims 21 and 22.

Applicant's arguments, filed 01/23/2026, traversing the rejection of Claims 1-8, 10-17 and 19-20 under 35 U.S.C. § 101 have been fully considered and are persuasive. The rejection has been withdrawn.

Applicant’s arguments, filed 01/23/2026, traversing the rejection of independent Claims 1, 11 and 20, amended to incorporate limitations of Claims 9 and 18, and dependent Claims 2-8, 10, and 12-18 under 35 U.S.C. §§ 102/103 have been fully considered and are found moot in light of the new grounds of rejection (see rejection above).

Applicant’s arguments, filed 01/23/2026, traversing the rejection of Claims 2 and 12 under 35 U.S.C. § 103 have been fully considered but are not persuasive. Applicant alleges, on Pages 19-21 of the Remarks, that the combination of prior art does not teach the limitations of Claim 2 because a timestamp in a transaction log, as described in Li, does not disclose, or render obvious, generating a knowledge graph for a plurality of products based on… a plurality of time features as recited in Claim 2.
Applicant further argues that, unlike the Applicant’s specification, in which time features are treated as feature nodes in the knowledge-graph structure used for feature selection, Li’s timestamp field in each transaction record is merely used to construct or generate a knowledge graph.

Examiner respectfully disagrees. As explained in MPEP § 2111.01(II), “Though understanding the claim language may be aided by explanations contained in the written description, it is important not to import into a claim limitations that are not part of the claim. For example, a particular embodiment appearing in the written description may not be read into a claim when the claim language is broader than the embodiment.” While the specification describes the explicit structure of the knowledge graph as representing each feature as a separate node in the graph, this limitation is not reflected in the claim language, which recites “generating the knowledge graph for the plurality of products based on a plurality of product features, a plurality of user features, and a plurality of time features”. As Li recites the timestamp as being used to generate the knowledge graph for the plurality of items bought by users in each transaction, Examiner asserts that the combination of prior art, including Li, teaches all the limitations of Claim 2 and corresponding Claim 12.

Applicant’s arguments, filed 01/23/2026, traversing the rejection of Claims 6 and 16 under 35 U.S.C. § 103 have been fully considered but are not persuasive.
Applicant alleges, on Pages 21-22 of the Remarks, that the combination of prior art does not teach the limitations of Claim 6 because Martineau’s “weights” are not disclosed as being “based on an importance score… determined by” an MLI model applied to a regression forecasting model; instead, the weights are described as encoding subjective and quantified aspect relationships (e.g., sentiment and aspect importance in the recommendation context), not MLI-derived feature-importance values tied to the output of a regression forecasting model. Applicant further argues that this deficiency is not remedied by Lundberg, as Lundberg does not teach or suggest using MLI-generated importance scores as an edge weight that links “input feature” nodes to “user feature” nodes in a knowledge graph, and that modifying Martineau’s edge weighting, which represents the relationship strength between “aspect” nodes and other nodes, to instead weight edges based on MLI importance scores for ranked input features of a regression forecasting model is not a mere substitution of one known scoring value for another.

Examiner respectfully disagrees. As disclosed by Lundberg, the MLI importance scores are used as an estimation of the importance of a feature to an output prediction of a model and are explicitly meant to align with human intuition (Lundberg: [Section 1. Introduction]). As disclosed by Martineau, “aspects” are treated as features (Martineau: [0059], [0087]) that can be linked to both users and items (Martineau: [0065]), are represented in the vector space as latent features (Martineau: [0068], [0070]), and are used by the recommendation engine, which is a regression model that uses aspect embeddings to make predictions of future values (Martineau: [0070], [0129]).
As such, the modification of Martineau’s edge weighting from the recited importance scores to the MLI importance scores is not the mere substitution of one known scoring value for another, but a substitution of one scoring method for another, where both methods determine the importance of a feature to a prediction made by a regression model. Examiner therefore asserts that the combination of prior art, including Martineau and Lundberg, teaches all the limitations of Claim 6 and corresponding Claim 16.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Wu et al. (“Session-Based Recommendation with Graph Neural Networks”) discusses a method for modeling session sequences as graph-structured data, from which a GNN can capture complex transitions of items to provide recommendations to anonymous users.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEGAN E HWANG whose telephone number is (703)756-1377.
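The technique disputed for Claims 6 and 16, using an MLI model's feature-importance scores for a regression forecasting model as knowledge-graph edge weights, can be sketched as follows. The sketch uses permutation importance as a generic MLI stand-in (not Lundberg's specific method), and the model, feature names, and node labels are invented for illustration only.

```python
import random

random.seed(0)

# Toy "regression forecasting model" (hypothetical): predicted demand
# depends strongly on price and only weakly on day-of-week.
def forecast(price, day_of_week):
    return 100.0 - 8.0 * price + 0.5 * day_of_week

data = [(random.uniform(1, 10), random.randrange(7)) for _ in range(200)]
baseline = [forecast(p, d) for p, d in data]

def permutation_importance(column):
    """MLI-style importance score: mean |prediction change| when one
    input feature is shuffled while the other is held fixed."""
    shuffled = [row[column] for row in data]
    random.shuffle(shuffled)
    perturbed = [
        forecast(s if column == 0 else p, s if column == 1 else d)
        for (p, d), s in zip(data, shuffled)
    ]
    return sum(abs(a - b) for a, b in zip(baseline, perturbed)) / len(data)

# Use the importance scores as edge weights linking input-feature nodes
# to the forecasted quantity's node in a knowledge graph.
edges = {
    ("feature:price", "node:demand"): permutation_importance(0),
    ("feature:day_of_week", "node:demand"): permutation_importance(1),
}

# Shuffling price moves the forecast far more than shuffling day-of-week,
# so the price edge carries the larger weight.
assert edges[("feature:price", "node:demand")] > edges[("feature:day_of_week", "node:demand")]
```

The point of the sketch is the mapping, not the scoring method: any per-feature importance estimate for the model's output (SHAP values included) could be slotted in as the edge weight.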
The examiner can normally be reached Monday-Thursday 10:00-7:30 ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Welch, can be reached at (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/M.E.H./Examiner, Art Unit 2143
/JENNIFER N WELCH/Supervisory Patent Examiner, Art Unit 2143

Prosecution Timeline

Mar 01, 2023
Application Filed
Oct 18, 2025
Non-Final Rejection — §103
Jan 23, 2026
Response Filed
Feb 25, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12456093
Corporate Hierarchy Tagging
2y 5m to grant
Granted Oct 28, 2025
Patent 12437514
VIDEO DOMAIN ADAPTATION VIA CONTRASTIVE LEARNING FOR DECISION MAKING
2y 5m to grant
Granted Oct 07, 2025
Patent 12437517
VIDEO DOMAIN ADAPTATION VIA CONTRASTIVE LEARNING FOR DECISION MAKING
2y 5m to grant
Granted Oct 07, 2025
Patent 12437518
VIDEO DOMAIN ADAPTATION VIA CONTRASTIVE LEARNING FOR DECISION MAKING
2y 5m to grant
Granted Oct 07, 2025
Patent 12437519
VIDEO DOMAIN ADAPTATION VIA CONTRASTIVE LEARNING FOR DECISION MAKING
2y 5m to grant
Granted Oct 07, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
47%
Grant Probability
99%
With Interview (+60.2%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 19 resolved cases by this examiner. Grant probability derived from career allow rate.
