Prosecution Insights
Last updated: April 19, 2026
Application No. 18/113,243

AI MODEL RECOMMENDATION BASED ON SYSTEM TASK ANALYSIS AND INTERACTION DATA

Non-Final OA — §103

Filed: Feb 23, 2023
Examiner: PENG, HUAWEN A
Art Unit: 2169
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% — above average (586 granted / 712 resolved; +27.3% vs TC avg)
Interview Lift: +20.1% across resolved cases with interview
Typical Timeline: 3y 3m average prosecution; 14 applications currently pending
Career History: 726 total applications across all art units

Statute-Specific Performance

§101: 15.6% (-24.4% vs TC avg)
§103: 42.9% (+2.9% vs TC avg)
§102: 24.6% (-15.4% vs TC avg)
§112: 6.4% (-33.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 712 resolved cases.
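As a quick consistency check on the rates above, each statute's allowance rate minus its stated delta should recover the Tech Center baseline. The sketch below assumes the deltas are simple percentage-point differences (an assumption; the dashboard does not state its formula):

```python
# Back out the implied Tech Center baseline from each statute's
# allowance rate and its stated delta vs. the TC average.
# Assumption: deltas are percentage-point differences, not ratios.
examiner_rate = {"101": 15.6, "103": 42.9, "102": 24.6, "112": 6.4}
delta_vs_tc = {"101": -24.4, "103": 2.9, "102": -15.4, "112": -33.6}

implied_tc_avg = {
    statute: round(examiner_rate[statute] - delta_vs_tc[statute], 1)
    for statute in examiner_rate
}
print(implied_tc_avg)  # every statute backs out to the same 40.0 baseline
```

Under that reading, all four statutes imply the same 40.0% baseline, which suggests the chart compares each statute against a single Tech Center estimate rather than per-statute averages.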

Office Action

§103
DETAILED ACTION

Claims 1-20 are presented for examination.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

6. Claims 1-12 and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Javari et al. (US 2024/0220810), hereinafter Javari, in view of NING et al. (US 2022/0122099), hereinafter NING.

In claim 1, Javari discloses “A method, comprising: generating, by a processor set, an interaction usage graph based on user interaction data on user interactions with an analytical system user interface ([0017] The user activity 114 may be user activity on the electronic user interface, such as user navigations to the documents, user selections of the subjects of the documents, sequences of user selections of the documents, etc. For example, in some embodiments, the documents 112 may be respective of products and services offered through a website (e.g., where each document is respective of a given product or service), and the user activity 114 may be user interactions with the documents themselves (e.g., user navigations to, clicks on, or examinations of the documents), and/or user interactions with the products and services that are the subjects of those documents (e.g., purchases, additions to cart, etc.) [0020] a graph builder module 110 that may receive the records of user activity 114 and build or otherwise generate, based on the records, a knowledge graph respective of the records. The knowledge graph may include a plurality of nodes, each of which nodes is representative of a user action or selection (e.g., viewing, adding to cart, or purchasing a given item), and connections between nodes. Each node-to-node connection may be representative of a session-based relationship between connected actions/selections, such as co-views, co-purchases, or co-view purchases, for example); generating, by the processor set, an interaction embedding model in a vector space based on the interaction usage graph ([0056] receiving a sequence of user actions and, at block 402, determining a context embedding vector according to the received sequence of user actions. The context embedding vector may be determined according to a self-attention layer of a machine learning model, for example, applied to embedding vectors respective of each of the user actions in the sequence [0058] querying a knowledge graph respective of possible user actions with the context embedding vector to obtain a knowledge-enhanced representation of the sequence. Querying the knowledge graph may include, for example, performing heterogeneous graph propagation using the context embedding vector. In some embodiments, querying the knowledge graph at block 404 includes the context embedding module 216 querying 220 the knowledge graph from graph module 206 and obtaining the neighborhoods)”.

Javari does not appear to explicitly disclose the following; however, NING discloses “determining, by the processor set and based on the interaction embedding model in the vector space, a similarity of a portion of the interaction embedding model that corresponds to a particular analytical task among the user interactions with the analytical system user interface with a particular machine learning model from a set of one or more machine learning models ([0127] The precursor discovery unit 912 further includes matrix generation unit 916 that is configured to generate matrices containing quantified values that represent the processed user interactions with respect to the current and historical items (e.g., current interaction matrix (C), history interaction matrix (H)). This may entail coordinating with the interaction processor 914 to receive the user interaction data and/or retrieving the interaction data 926 from the storage 924. The matrix generation unit 916 is also configured to generate matrices containing values representative of the similarities (e.g., similarity scores) between the current and historical items, such as similarity matrix (S). The matrix generation unit 916 may receive the similarity values from similarity determination logic 918, which is configured to determine similarities between the content items, for instance by comparing their content features and generating similarity scores. The matrix generation unit 916 may also be configured to generate prediction matrix (P) and weight matrix (W) based on the observed data and the proposed predictive model [0128] The precursor discovery unit 912 further includes predictive modelling logic 920 configured to build and train the proposed predictive model based on the observed data. The predictive modelling logic 920 may operate in conjunction with other components in the precursor discovery unit 912, such as the interaction processor 914, matrix generation logic 916, similarity determination logic 918, and precursor determination logic 922, to obtain data necessary to build and train the predictive model); and outputting, by the processor set, to the analytical system user interface, an indication of the particular machine learning model ([0129] The precursor discovery unit 912 also includes precursor determination logic 922 configured to use the predictive model to discover precursors for a new user interaction event (e.g., a particular user selecting a new item). The precursor determination logic 922 may also operate in conjunction with other components in the precursor discovery unit 912, such as the interaction processor 914, matrix generation logic 916, similarity determination logic 918, and predictive modelling logic 920. The results containing the identified precursors may be transmitted over the network 906 to the content server 910, application server 908, and/or client device 900, and provided to the user via an interface in the display 904)”.
Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Javari and NING; the suggestion/motivation for doing so would have been to provide methods for discovering precursors associated with a current user interaction event based on the enhanced importance matrix and the prior interactions of the user with the plurality of historical items (Abstract).

In claim 2, NING teaches The method of claim 1, further comprising: outputting, to the analytical system user interface, an indication of similarity of the particular machine learning model with the particular analytical task ([0128] The precursor discovery unit 912 further includes predictive modelling logic 920 configured to build and train the proposed predictive model based on the observed data. The predictive modelling logic 920 may operate in conjunction with other components in the precursor discovery unit 912, such as the interaction processor 914, matrix generation logic 916, similarity determination logic 918, and precursor determination logic 922, to obtain data necessary to build and train the predictive model).

In claim 3, Javari teaches The method of claim 1, further comprising: using the particular machine learning model to generate a result related to the particular analytical task; and outputting, to the analytical system user interface, the result related to the particular analytical task ([0022] The sequential recommendation system 104 may further be configured to use the trained machine learning model(s) to, given an input of a sequence of user actions, predict the most likely next user action (or multiple such actions). For example, the trained machine learning model may be applied in conjunction with a website to recommend a next document to a user based on that user's sequence of actions on the website.
In some embodiments, the trained machine learning model may receive a sequence of products and/or services that a user interacts with, such as by viewing, adding to cart, or purchasing, and may output to the user a predicted product or service, or the characteristics of a predicted product or service, based on that sequence).

In claim 4, Javari teaches The method of claim 1, further comprising: generating respective embedding models of the one or more machine learning models in the vector space ([0056] receiving a sequence of user actions and, at block 402, determining a context embedding vector according to the received sequence of user actions. The context embedding vector may be determined according to a self-attention layer of a machine learning model, for example, applied to embedding vectors respective of each of the user actions in the sequence).

In claim 5, NING teaches The method of claim 1, further comprising determining that the similarity of the portion of the interaction embedding model that corresponds to the particular analytical task with the particular machine learning model passes a selected threshold of similarity ([0113-0117] [0113] the similarities can be measured by comparing the content features of the new items with the content features of the historical items. Each entry in the matrix contains a value or score that represents the similarity between the new item and a historical item [0114] This matrix can be learned based on the proposed predictive model. In this matrix, each value or score represents how much one historical item affects or contributes to the current user interaction event (e.g., user A's current action of selecting the new item) collectively from all users [0117] a threshold function is applied to distinguish precursors from non-precursors. For example, if the value of the product is greater than 0.5, the corresponding historical item is identified as a precursor and the value of the product is included in the Precursors matrix. If the value of the product is less than or equal to 0.5, the corresponding historical item is identified as a non-precursor and the value of the product is represented as 0 (i.e., zero) in the Precursors matrix).

In claim 6, NING teaches The method of claim 5, wherein determining the similarity of the portion of the interaction embedding model that corresponds to the particular analytical task with the particular machine learning model comprises detecting a similarity measure of the portion of the interaction embedding model that corresponds to the particular analytical task with an embedding model corresponding to the particular machine learning model in the vector space ([0051] the specific vector encoding for a given content item can be determined using machine learning methods [0053] For a given user, a user profile vector (ref. 112) is defined, that is encoded in the same vector space as the content item vectors. That is, the user profile vector has the same dimensionality and defines values for the same set of entities as the content item vectors. However, the values defined by the user profile vector indicate the relevance of the entities to the given user [0054] To determine the (expected) relevance of a given content item to the user, the similarity of the content item's vector representation to the user profile vector is determined (ref. 114). In some implementations, this is performed by determining the inner product of the content item's vector and the user profile vector to arrive at a relevance score. In the illustrated implementation, the inner product of the user profile vector (ref. 112) and each of the content item A/B/C vectors yields the relevance scores for the content items A, B and C as shown at ref. 116 [0127] The precursor discovery unit 912 further includes matrix generation unit 916 that is configured to generate matrices containing quantified values that represent the processed user interactions with respect to the current and historical items (e.g., current interaction matrix (C), history interaction matrix (H)). This may entail coordinating with the interaction processor 914 to receive the user interaction data and/or retrieving the interaction data 926 from the storage 924. The matrix generation unit 916 is also configured to generate matrices containing values representative of the similarities (e.g., similarity scores) between the current and historical items, such as similarity matrix (S). The matrix generation unit 916 may receive the similarity values from similarity determination logic 918, which is configured to determine similarities between the content items, for instance by comparing their content features and generating similarity scores. The matrix generation unit 916 may also be configured to generate prediction matrix (P) and weight matrix (W) based on the observed data and the proposed predictive model [0128] The precursor discovery unit 912 further includes predictive modelling logic 920 configured to build and train the proposed predictive model based on the observed data. The predictive modelling logic 920 may operate in conjunction with other components in the precursor discovery unit 912, such as the interaction processor 914, matrix generation logic 916, similarity determination logic 918, and precursor determination logic 922, to obtain data necessary to build and train the predictive model).
In claim 7, NING teaches The method of claim 6, further comprising detecting that the similarity measure of the portion of the interaction embedding model that corresponds to the particular analytical task with an embedding model corresponding to the particular machine learning model in the vector space passes a selected threshold of cosine similarity ([0127] The precursor discovery unit 912 further includes matrix generation unit 916 that is configured to generate matrices containing quantified values that represent the processed user interactions with respect to the current and historical items (e.g., current interaction matrix (C), history interaction matrix (H)). This may entail coordinating with the interaction processor 914 to receive the user interaction data and/or retrieving the interaction data 926 from the storage 924. The matrix generation unit 916 is also configured to generate matrices containing values representative of the similarities (e.g., similarity scores) between the current and historical items, such as similarity matrix (S). The matrix generation unit 916 may receive the similarity values from similarity determination logic 918, which is configured to determine similarities between the content items, for instance by comparing their content features and generating similarity scores. The matrix generation unit 916 may also be configured to generate prediction matrix (P) and weight matrix (W) based on the observed data and the proposed predictive model [0128] The precursor discovery unit 912 further includes predictive modelling logic 920 configured to build and train the proposed predictive model based on the observed data. The predictive modelling logic 920 may operate in conjunction with other components in the precursor discovery unit 912, such as the interaction processor 914, matrix generation logic 916, similarity determination logic 918, and precursor determination logic 922, to obtain data necessary to build and train the predictive model).

In claim 8, NING teaches The method of claim 5, further comprising reducing the selected threshold of similarity based on updated similarity evaluation criteria learned from prior determinations of similarity of portions of the interaction embedding model with the one or more machine learning models ([0097] a prediction matrix (P), P_(T×U), is defined where each element, P_(i,u), is the estimated score from the model. The model aims to minimize the errors between the current interaction matrix (C), which represents the actual interactions observed, and the prediction matrix (P), which indicates estimated probabilities for users' actions on the new items).

In claim 9, Javari teaches The method of claim 1, further comprising: detecting the user interactions with the analytical system user interface; and generating user interaction logs based on the user interactions, wherein receiving the user interaction data comprises receiving the user interaction logs ([0029] the graph builder module 110 may include graph module 206 configured to construct the knowledge graph based on the edges extracted by the edges module 204 from sequences in historical sequences module 202. The knowledge graph may include a plurality of nodes, each of which nodes is representative of a user action or selection (e.g., viewing, adding to cart, or purchasing a given item), and connections between nodes. Each node-to-node connection may be representative of a session-based relationship between connected actions/selections, such as co-views, co-purchases, or co-view purchases, for example.
In some embodiments, the knowledge graph generated by the graph module 206 may be an aggregation of models. That is, for example, the knowledge graph may be an aggregation of a previous iteration graph and the output results of the edges extracted from sequences of users 208a, 208b by the edges module 204 and by graph builder module 110).

In claim 10, Javari teaches The method of claim 9, wherein detecting the user interactions with the analytical system user interface comprises detecting high-granularity interaction data comprising page views, mouse movements, mouseover actions, mouse fixation actions, icon selections, keyboard usage, keyboard shortcut selections, widget selections, interaction events with user interface elements, and timestamps of the interaction events ([0017] The user activity 114 may be user activity on the electronic user interface, such as user navigations to the documents, user selections of the subjects of the documents, sequences of user selections of the documents, etc. For example, in some embodiments, the documents 112 may be respective of products and services offered through a website (e.g., where each document is respective of a given product or service), and the user activity 114 may be user interactions with the documents themselves (e.g., user navigations to, clicks on, or examinations of the documents), and/or user interactions with the products and services that are the subjects of those documents (e.g., purchases, additions to cart, etc.) [0049] The sequence of user actions may be a user's interactions with the interface with which the training data used at block 302 is associated. For example, the user actions may be a sequence of documents that the user selects (e.g., clicks), navigates to, scrolls, or the contents (e.g., products and/or services) of which documents the user purchases, adds to cart, etc. within a given browsing session [0051] the machine learning model may output words (e.g., unique attributes) that describe a predicted next product or service. In another embodiment, the machine learning model may output a unique identifier respective of one or more predicted next documents).

In claim 11, Javari teaches The method of claim 1, wherein generating the interaction usage graph based on the user interaction data comprises generating a directed graph structure representing the user interactions and an order in which the user interactions occurred, based on the user interaction data ([0029] the graph builder module 110 may include graph module 206 configured to construct the knowledge graph based on the edges extracted by the edges module 204 from sequences in historical sequences module 202. The knowledge graph may include a plurality of nodes, each of which nodes is representative of a user action or selection (e.g., viewing, adding to cart, or purchasing a given item), and connections between nodes. Each node-to-node connection may be representative of a session-based relationship between connected actions/selections, such as co-views, co-purchases, or co-view purchases, for example. In some embodiments, the knowledge graph generated by the graph module 206 may be an aggregation of models. That is, for example, the knowledge graph may be an aggregation of a previous iteration graph and the output results of the edges extracted from sequences of users 208a, 208b by the edges module 204 and by graph builder module 110).

In claim 12, Javari teaches The method of claim 1, wherein generating the interaction embedding model in the vector space based on the interaction usage graph comprises processing the interaction usage graph with an algorithmic framework for representational machine learning on graphs ([0056] receiving a sequence of user actions and, at block 402, determining a context embedding vector according to the received sequence of user actions. The context embedding vector may be determined according to a self-attention layer of a machine learning model, for example, applied to embedding vectors respective of each of the user actions in the sequence [0058] at block 404, querying a knowledge graph respective of possible user actions with the context embedding vector to obtain a knowledge-enhanced representation of the sequence. Querying the knowledge graph may include, for example, performing heterogeneous graph propagation using the context embedding vector).

In claim 14, Javari teaches The method of claim 1, wherein the user interaction data comprises events, labels, targets, and identifiers, and the machine learning models comprise predictions and features ([0049] The sequence of user actions may be a user's interactions with the interface with which the training data used at block 302 is associated. For example, the user actions may be a sequence of documents that the user selects (e.g., clicks), navigates to, scrolls, or the contents (e.g., products and/or services) of which documents the user purchases, adds to cart, etc. within a given browsing session [0051] the machine learning model may output words (e.g., unique attributes) that describe a predicted next product or service. In another embodiment, the machine learning model may output a unique identifier respective of one or more predicted next documents [0052] the model outputs a unique identifier of a document as the predicted next user action, that document may be designated as the predicted next user action. In another example, in an embodiment in which the machine learning model outputs characteristics of a document, or of a product or service, block 310 may include determining the document, product, or service on the interface that is most similar to the characteristics output by the model. In a further example, where the model outputs embeddings, block 310 may include determining the document, product, or service having embeddings that are most similar to the embeddings output by the model [0053] outputting the predicted next user action(s) to the user in response to the received sequence of user events. For example, the predicted next document, or product or service that is the subject of the predicted next document, may be output to the user in the form of a page recommendation, product recommendation, service recommendation, etc., through the electronic interface. In some embodiments, block 312 may include displaying a link to the predicted next document in response to a user search. In some embodiments, block 312 may include displaying a link to the predicted next document in response to a user navigation. In some embodiments, block 312 may include displaying a link to the predicted next document in response to a user click).

Claims 15-17 are essentially the same as claims 1, 2-3 and 5-6 except that they recite the claimed invention as a computer program product, and are rejected for the same reasons as applied hereinabove. Claims 18-20 are essentially the same as claims 1, 2-3 and 5-6 except that they recite the claimed invention as a system, and are rejected for the same reasons as applied hereinabove.

7. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Javari et al. (US 2024/0220810), hereinafter Javari, in view of NING et al. (US 2022/0122099), hereinafter NING, and further in view of Shah et al. (US 2022/0327637), hereinafter Shah.
In claim 13, per the rejections of claims 1 and 12, Javari and NING do not appear to explicitly disclose the following; however, Shah discloses “The method of claim 12, wherein the algorithmic framework for representational machine learning on graphs comprises the node2vec framework ([0013] the neural network can be trained using a loss function based on random walks (e.g., node2vec, DeepWalk, struc2walk, etc.), graph factorization, or node proximity in a graph. During graph embedding, the graph analyzer can also be configured to tune how the neural network performs graph embedding. For example, when performing random walks within node2vec, the graph analyzer can apply differing numbers of random walks based on a degree centrality of a vertex in the graph. Thus, by using such techniques, the graph analyzer can develop encoding functions for both the aggregate of connected neighbors as well as for each vertex itself. The encoding functions (or machine learning models) can then be used to convert each vertex in the graph into a tensor in a vector space individually representing a position and/or a level of interaction between users in the social network)”. Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Javari, NING and Shah; the suggestion/motivation for doing so would have been to implement social distance quantification based on user interactions via graph embedding ([0006]).

Conclusion

8. The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure, is listed on the attached PTO-892 form.

Examiner's Note: Examiner has cited particular figures and paragraphs in the references as applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well.
It is respectfully requested that the applicant, in preparing the responses, fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUAWEN A PENG whose telephone number is (571) 270-5215. The examiner can normally be reached Mon thru Fri, 9 am to 5 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sherief Badawi, can be reached at 571-272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HUAWEN A PENG/
Primary Examiner, Art Unit 2169
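Claims 5-8 of the rejected application turn on an embedding-space similarity measure passing a selected threshold, with claim 7 reciting cosine similarity in particular and NING's cited paragraph [0117] applying a 0.5 cutoff. For readers less familiar with the mechanism, the sketch below illustrates that kind of threshold test; the vectors, names, and values are hypothetical, not taken from either reference:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical embeddings: a task vector drawn from the interaction
# embedding model, and candidate model embeddings in the same space.
task_embedding = [0.9, 0.1, 0.3]
model_embeddings = {
    "model_a": [0.8, 0.2, 0.4],   # points in nearly the same direction
    "model_b": [-0.5, 0.9, 0.0],  # points away from the task
}

THRESHOLD = 0.5  # mirrors the 0.5 cutoff described in NING [0117]
scores = {
    name: cosine_similarity(task_embedding, emb)
    for name, emb in model_embeddings.items()
}
selected = [name for name, score in scores.items() if score > THRESHOLD]
print(selected)  # only model_a passes the threshold
```

The design point the claims hinge on is that the task representation and the candidate models are embedded in the same vector space, so a single angular measure can rank and gate them.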

Prosecution Timeline

Feb 23, 2023 — Application Filed
Nov 26, 2025 — Non-Final Rejection (§103)
Feb 26, 2026 — Interview Requested
Mar 09, 2026 — Applicant Interview (Telephonic)
Mar 09, 2026 — Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602367 — DATA INTEGRITY CHECKS (2y 5m to grant; granted Apr 14, 2026)
Patent 12602625 — SYSTEMS AND METHODS FOR CREATING A RICH SOCIAL MEDIA PROFILE (2y 5m to grant; granted Apr 14, 2026)
Patent 12598135 — TECHNIQUES TO BALANCE LOG STRUCTURED MERGE TREES (2y 5m to grant; granted Apr 07, 2026)
Patent 12579160 — SYSTEMS, METHODS, AND APPARATUSES FOR GENERATING, EXTRACTING, CLASSIFYING, AND FORMATTING OBJECT METADATA USING NATURAL LANGUAGE PROCESSING IN AN ELECTRONIC NETWORK (2y 5m to grant; granted Mar 17, 2026)
Patent 12567274 — GEOGRAPHIC MANAGEMENT OF DOCUMENT CONTENT (2y 5m to grant; granted Mar 03, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 99% (+20.1%)
Median Time to Grant: 3y 3m
PTA Risk: Low

Based on 712 resolved cases by this examiner. Grant probability derived from career allow rate.
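The headline figures are reproducible from the stated case counts. A small sketch, assuming the +20.1% interview lift is applied as a relative multiplier to the base allow rate (an assumption; the dashboard does not state its formula):

```python
# Reproduce the dashboard's headline figures from the raw counts.
granted, resolved = 586, 712

allow_rate = granted / resolved      # career allow rate
with_interview = allow_rate * 1.201  # assumed relative +20.1% lift

print(f"{allow_rate:.0%}")       # 82%
print(f"{with_interview:.0%}")   # 99%
```

586 / 712 is 82.3%, displayed as 82%, and 82.3% × 1.201 rounds to the displayed 99%, consistent with the "With Interview" projection above.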