Prosecution Insights
Last updated: April 19, 2026
Application No. 18/592,044

TECHNIQUES TO PERSONALIZE CONTENT USING MACHINE LEARNING

Non-Final OA: §101, §102, §103
Filed
Feb 29, 2024
Examiner
SANTIAGO-MERCED, FRANCIS Z
Art Unit
3625
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Adobe Inc.
OA Round
1 (Non-Final)
29%
Grant Probability
At Risk
1-2
OA Rounds
3y 7m
To Grant
70%
With Interview

Examiner Intelligence

Grants only 29% of cases
29%
Career Allow Rate
37 granted / 126 resolved
-22.6% vs TC avg
Strong +41% interview lift
+41.1%
Interview Lift
Based on resolved cases with an interview
Typical timeline
3y 7m
Avg Prosecution
49 currently pending
Career history
175
Total Applications
across all art units

Statute-Specific Performance

§101
46.3%
+6.3% vs TC avg
§103
35.0%
-5.0% vs TC avg
§102
10.9%
-29.1% vs TC avg
§112
6.9%
-33.1% vs TC avg
Black line = Tech Center average estimate • Based on career data from 126 resolved cases

Office Action

§101 §102 §103
DETAILED ACTION

This is a Non-Final Office Action in response to the application filed 02/29/2024.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-20 are currently pending in the application and have been examined.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim(s) 1-20 is/are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-patentable subject matter. The claims are directed to an abstract idea without significantly more.

With respect to claims 1-20, the independent claims (claims 1, 8 and 14) are directed, in part, to a method, a system and a non-transitory computer-readable medium for personalizing content.

Step 1 – First, pursuant to Step 1 in the January 2019 Guidance, claims 1-7 are directed to a method comprising a series of steps, which falls under the statutory category of a process; claims 8-13 are directed to a system, which falls under the statutory category of a machine; and claims 14-20 are directed to a computer-readable medium, which falls under the statutory category of an article of manufacture. However, these claim elements are considered to be abstract ideas because they are directed to a mental process which includes observations or evaluations.
As per Step 2A - Prong 1 of the subject matter eligibility analysis, the claims are directed, in part, to receiving activity data associated with a user from a device; generating a touchpoint embedding and a decision embedding using a graph neural network (GNN) model based on the activity data, the GNN model trained using a knowledge graph; predicting a touchpoint using a first classifier based on the touchpoint embedding; predicting a decision stage using a second classifier based on the decision embedding; and generating personalized content for the touchpoint based on the decision stage using a large language model (LLM). If a claim limitation, under its broadest reasonable interpretation, covers an observation or evaluation, then it falls under the “mental process” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

As per Step 2A - Prong 2 of the subject matter eligibility analysis, this judicial exception is not integrated into a practical application. In particular, the independent claims recite additional elements: a device, a neural network model, a large language model, a system, memory, devices, a model trainer module, a training dataset, and a non-transitory computer-readable medium. The dependent claims recite the additional elements of an electronic display and a database. These additional elements in both steps are recited at a high level of generality (i.e., as a generic device performing a generic computer function of receiving and storing data) such that these elements amount to no more than mere instructions to apply the exception using a generic computer component. Examiner looks to Applicant’s specification, in at least figures 1 and 5 and related text and [0098-0102], to understand that the invention may be implemented in a generic environment: “The one or more servers 510 implements a content delivery apparatus 502.
In one embodiment, the content delivery apparatus 502 includes at least one processor; at least one memory including instructions executable by the at least one processor; and a machine learning model comprising parameters stored in the at least one memory, wherein the machine learning model comprises a GNN model 304. The servers 510 may include content delivery apparatus 502 implementing touchpoint content adaptation system 400 that is designed for performing targeted content delivery. In an example process, the content delivery apparatus 502 obtains activity data 508 from a user 104 via the device 436. The user 104 interacts with the content delivery apparatus 502 via a user interface of the content delivery apparatus 502. In some cases, portions of the user interface are displayed on a personal machine or device 436 of the user 104. The activity data 508 represents various actions, activities or behaviors of the user 104.”

“The content delivery apparatus 502 or components thereof are implemented on a server. A server provides one or more functions to users linked by way of one or more of the various networks. In some cases, the server includes a single microprocessor board, which includes a microprocessor responsible for controlling all aspects of the server. In some cases, a server uses microprocessor and protocols to exchange data with other devices/users on one or more of the networks via hypertext transfer protocol (HTTP), and simple mail transfer protocol (SMTP), although other protocols such as file transfer protocol (FTP), and simple network management protocol (SNMP) can also be used. In some cases, a server is configured to send and receive hypertext markup language (HTML) formatted files (e.g., for displaying web pages).
In various embodiments, a server comprises a general-purpose computing device, a personal computer, a laptop computer, a mainframe computer, a super computer, or any other suitable processing apparatus.”

Accordingly, these additional elements do not integrate the abstract idea into a practical application because they are mere instructions to implement the abstract idea on a computer.

As per Step 2B of the subject matter eligibility analysis, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements are mere instructions to apply the abstract idea on a computer. When considered individually, these claim elements only contribute generic recitations of technical elements to the claims. It is readily apparent, for example, that the claim is not directed to any specific improvements of these elements and the invention is not directed to a technical improvement. When the claims are considered individually and as a whole, the additional elements noted above appear to merely apply the abstract concept to a technical environment in a very general sense – i.e., a generic computer receives information from another generic computer, processes the information and then sends information back. In addition, when taken as an ordered combination, the ordered combination adds nothing that is not already present when the elements are taken individually. Their collective functions merely provide generic computer implementation. Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a practical application of the abstract idea or that amount to significantly more than the abstract idea itself. The most significant elements of the claims, that is, the elements that really outline the inventive elements of the claims, are set forth in the elements identified as an abstract idea.
In addition, the use of neural networks is well-understood, routine, and conventional in the art. See, e.g., Dailey et al., US Patent No. 6,917,952 (col. 10, lines 10-12), noting that “The preferred embodiment uses neural networks, and conventional methods of training them as are known in the art.” See also Hao et al., US 2014/0086495 (par. 73), noting “Methods for defining and training artificial neural network models are well-known to those skilled in the art, and any such method can be used in accordance with the present invention.” Accordingly, the additional element directed to a neural network fails to add significantly more to the claims. The fact that the generic computing devices are facilitating the abstract concept is not enough to confer statutory subject matter eligibility.

The dependent claims further refine the abstract idea. These claims do not provide a meaningful linking to the judicial exception. Rather, these claims offer further descriptive limitations of elements found in the independent claims and addressed above – such as by describing the nature and content of the data that is received/sent. While these descriptive elements may provide further helpful context for the claimed invention, these elements do not serve to confer subject matter eligibility to the invention since their individual and combined significance is still not significantly more than the abstract concepts at the core of the claimed invention.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-6, 14-19 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US Pub. No. 2025/0094708 (hereinafter, Cunningham).

Regarding claim 1, Cunningham discloses:

A method, comprising: receiving activity data associated with a user from a device; (Cunningham [0005] discloses the disclosed systems can receive input from a client device to select one or more content items and generate text representations of the content items[…])

generating a touchpoint embedding and a decision embedding using a graph neural network (GNN) model based on the activity data, the GNN model trained using a knowledge graph; (Cunningham [0032-0034] disclose the use of a knowledge graph with a graph neural network.)

predicting a touchpoint using a first classifier based on the touchpoint embedding; (Cunningham [0033] discloses predictions based on the use of data; [0034] discloses using machine learning to determine classifications; [0021] discloses the content item segmenting system can generate an output request embedding from the model output request.)

predicting a decision stage using a second classifier based on the decision embedding; (Cunningham [0033] discloses predictions based on the use of data; [0034] discloses using machine learning to determine classifications; [0021] discloses the content item segmenting system can generate an output request embedding from the model output request.)

and generating personalized content for the touchpoint based on the decision stage using a large language model (LLM).
(Cunningham [0021] discloses the content item segmenting system can select one or more content items for analysis by a large language model according to the model output request (e.g., based on the output request embedding).)

Regarding claims 2/15, Cunningham discloses:

The method of claim 1; The computer-readable storage medium of claim 14, comprising presenting the personalized content for the touchpoint on an electronic display of the device. (Cunningham [0043] discloses Based on instructions from the client application 112, the client device 110 can present or display information, including a user interface for presenting content items or model outputs from the content management system 104 or from other network locations.)

Regarding claims 3/16, Cunningham discloses:

The method of claim 1; The computer-readable storage medium of claim 14, comprising: retrieving content from a database based on the touchpoint and the decision stage; constructing a prompt for the LLM to regenerate the content; and generating the personalized content based on the regenerated content from the LLM. (Cunningham [0005] discloses The disclosed systems can utilize the large language model to generate a model output from a prompt that includes one or more selected text segments and the model output request. See also [0024]: In addition to providing the most relevant text segments and the model output request, the content item segmenting system can also provide a threshold number of additional text segments as part of the large language model prompt. For instance, by way of example, and not limitation, the content item segmenting system can send the first and second text segments beginning at the start point of the content item/s to provide context and to guide the large language model in its output generation.)
Regarding claims 4/17, Cunningham discloses:

The method of claim 1; The computer-readable storage medium of claim 14, wherein the activity data is graph-structured data for the knowledge graph, the graph-structured data comprising a user node, a touchpoint node, an event node, and a set of edges between the user node, the touchpoint node, and the event node. (Cunningham [0032] discloses including graph information from a knowledge graph; [0037] discloses a compatibility graph that refers to a data graph that defines or indicates relationships between content items and/or types of content items (e.g., using nodes and edges). In particular, a compatibility graph includes, but is not limited to, a set of nodes and edges that indicate conversion paths for converting a content item of one type to a content item of another type (e.g., using multiple conversion steps to traverse across nodes linking the types in the graph).)

Regarding claims 5/18, Cunningham discloses:

The method of claim 1; The computer-readable storage medium of claim 14, wherein the touchpoint embedding is a vector comprising values representing a relationship between a user node, a touchpoint node, and an edge between the user node and the touchpoint node. (Cunningham discloses vectors in at least [0022]; [0033]; [0060].)

Regarding claims 6/19, Cunningham discloses:

The method of claim 1; The computer-readable storage medium of claim 14, wherein the decision embedding is a vector comprising values representing a relationship between a user node, an event node, and an edge between the user node and the event node. (Cunningham discloses vectors in at least [0022]; [0033]; [0060].)
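The claim mappings above all trace back to the pipeline recited in independent claim 1: graph-structured activity data is embedded by a GNN, two classifiers predict a touchpoint and a decision stage from the two embeddings, and an LLM generates the personalized content. As a reading aid, that flow can be sketched in miniature. This is an illustrative toy, not Applicant's implementation: every name, label, dimension, and weight below is invented; a single mean-aggregation message-passing step stands in for the GNN, and the LLM step is stubbed out as prompt construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy knowledge-graph sample per claim 4/17: a user node, a touchpoint
# node, and an event node, with edges from the user to the other two.
# Node features and adjacency values are invented for illustration.
features = rng.normal(size=(3, 8))            # rows: [user, touchpoint, event]
adjacency = np.array([[0, 1, 1],
                      [1, 0, 0],
                      [1, 0, 0]], dtype=float)

def gnn_embed(features, adjacency):
    """One round of mean-aggregation message passing (a minimal GNN stand-in)."""
    deg = adjacency.sum(axis=1, keepdims=True)
    neighbor_mean = adjacency @ features / np.maximum(deg, 1.0)
    return np.tanh(features + neighbor_mean)

embeddings = gnn_embed(features, adjacency)
touchpoint_embedding = embeddings[1]   # reflects the user<->touchpoint relation
decision_embedding = embeddings[2]     # reflects the user<->event relation

# Two linear classifiers (weights are random placeholders, i.e. untrained).
TOUCHPOINTS = ["email", "push", "in_app"]
STAGES = ["awareness", "consideration", "purchase"]
w_touch = rng.normal(size=(8, len(TOUCHPOINTS)))
w_stage = rng.normal(size=(8, len(STAGES)))

touchpoint = TOUCHPOINTS[int(np.argmax(touchpoint_embedding @ w_touch))]
stage = STAGES[int(np.argmax(decision_embedding @ w_stage))]

# The LLM generation step, stubbed as prompt construction.
prompt = (f"Write a short {touchpoint} message for a user in the "
          f"{stage} stage of their decision journey.")
print(prompt)
```

The point of the sketch is the data flow, not the models: one shared graph embedding feeds two separate classification heads, whose outputs parameterize the downstream generation step.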
Regarding claim 14, Cunningham discloses:

A non-transitory computer-readable medium storing executable instructions, which when executed by one or more processing devices, cause the one or more processing devices to perform operations comprising: receiving activity data associated with a user from a device; (Cunningham [0005] discloses the disclosed systems can receive input from a client device to select one or more content items and generate text representations of the content items[…])

generating a touchpoint embedding and a decision embedding using a graph neural network (GNN) model based on the activity data, the GNN model trained using a knowledge graph; (Cunningham [0032-0034] disclose the use of a knowledge graph with a graph neural network.)

predicting a touchpoint using a first classifier based on the touchpoint embedding; (Cunningham [0033] discloses predictions based on the use of data; [0034] discloses using machine learning to determine classifications; [0021] discloses the content item segmenting system can generate an output request embedding from the model output request.)

predicting a decision stage using a second classifier based on the decision embedding; (Cunningham [0033] discloses predictions based on the use of data; [0034] discloses using machine learning to determine classifications; [0021] discloses the content item segmenting system can generate an output request embedding from the model output request.)

and generating personalized content for the touchpoint based on the decision stage using a large language model (LLM), the personalized content comprising a multimedia message in a natural language. (Cunningham [0021] discloses the content item segmenting system can select one or more content items for analysis by a large language model according to the model output request (e.g., based on the output request embedding).)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 7-13, 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Cunningham in view of US Pub. No. 2017/0308650 (hereinafter, Brill).
Regarding claims 7/20, although Cunningham discloses personalizing content using large language models, Cunningham does not specifically disclose updating a knowledge graph. However, Brill discloses the following limitations:

The method of claim 1; The computer-readable storage medium of claim 14, comprising: detecting a transaction associated with the user from the device; and updating the knowledge graph with graph-structured data based on the activity data associated with the user after the transaction. (Brill discloses updating a knowledge graph in at least [0061].)

It would have been obvious to one of ordinary skill in the art to combine the large language models of Cunningham with the intelligent system for integrating and presenting information of Brill in order to update electronic record systems (Brill abstract) because the references are analogous since they both fall within Applicant's field of endeavor and are reasonably pertinent to the problem with which Applicant is concerned.

Regarding claim 8, Cunningham discloses:

A system comprising: a memory component; and one or more processing devices coupled to the memory component, the one or more processing devices to perform operations comprising: accessing, by a model trainer module, a training dataset to train a graph neural network (GNN) model, (Cunningham [0032-0034] disclose the use of a knowledge graph with a graph neural network.) the training dataset comprising multiple datapoints, each datapoint comprising samples from a knowledge graph, each sample comprising a user node, a touchpoint node, an event node, and a set of edges between the user node, the touchpoint node, or the event node; (Cunningham [0032] discloses including graph information from a knowledge graph; [0037] discloses a compatibility graph that refers to a data graph that defines or indicates relationships between content items and/or types of content items (e.g., using nodes and edges).
In particular, a compatibility graph includes, but is not limited to, a set of nodes and edges that indicate conversion paths for converting a content item of one type to a content item of another type (e.g., using multiple conversion steps to traverse across nodes linking the types in the graph).)

generating, by the GNN model, a touchpoint embedding based on a first datapoint from the training dataset; (Cunningham [0032-0034] disclose the use of a knowledge graph with a graph neural network.)

generating, by the GNN model, a decision embedding based on a second datapoint from the training dataset; (Cunningham [0024]; [0032] disclose using a second segment to train the large language model; [0032-0034] disclose the use of a knowledge graph with a graph neural network.)

evaluating, by the model trainer module, the touchpoint embedding and the decision embedding using labels associated with the first datapoint and the second datapoint, respectively; (Cunningham [0035-0037] disclose extracting data and a set of nodes with different data.)

Although Cunningham discloses personalizing content using large language models, Cunningham does not specifically disclose updating data. However, Brill discloses the following limitations:

updating, by the model trainer module, parameters for the GNN model using a loss function and optimization algorithm based on evaluation results to train the GNN model. (Brill discloses updating data and a knowledge graph in at least [0061]; [0078].)

It would have been obvious to one of ordinary skill in the art to combine the large language models of Cunningham with the intelligent system for integrating and presenting information of Brill in order to update electronic record systems (Brill abstract) because the references are analogous since they both fall within Applicant's field of endeavor and are reasonably pertinent to the problem with which Applicant is concerned.
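Claim 8 recites a conventional supervised training loop: generate embeddings from datapoints, evaluate them against labels, and update the GNN parameters using a loss function and an optimization algorithm. That loop can be illustrated in miniature; everything here is invented for illustration, with a one-layer linear model standing in for the GNN, softmax cross-entropy as the loss function, and plain gradient descent as the optimizer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training dataset: each datapoint is a node-feature vector sampled
# from the knowledge graph, paired with a class label (values invented).
X = rng.normal(size=(32, 8))
y = rng.integers(0, 3, size=32)

W = rng.normal(scale=0.1, size=(8, 3))   # model parameters to be trained

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, y):
    return -np.log(probs[np.arange(len(y)), y]).mean()

lr = 0.5
losses = []
for _ in range(100):
    probs = softmax(X @ W)                   # forward pass: "embed" datapoints
    losses.append(cross_entropy(probs, y))   # evaluate against the labels
    onehot = np.eye(3)[y]
    grad = X.T @ (probs - onehot) / len(y)   # gradient of the loss w.r.t. W
    W -= lr * grad                           # optimizer step (gradient descent)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Real GNN training would backpropagate through the message-passing layers rather than a single linear map, but the structure of the loop, forward pass, loss, gradient, parameter update, is the same one the claim recites.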
Regarding claim 9, Cunningham discloses:

The system of claim 8, wherein the touchpoint embedding is a vector comprising values representing a relationship between a user node, a touchpoint node, and an edge between the user node and the touchpoint node. (Cunningham discloses vectors in at least [0022]; [0033]; [0060].)

Regarding claim 10, Cunningham discloses:

The system of claim 8, wherein the decision embedding is a vector comprising values representing a relationship between a user node, an event node, and an edge between the user node and the event node. (Cunningham discloses vectors in at least [0022]; [0033]; [0060].)

Regarding claim 11, although Cunningham discloses personalizing content using large language models, Cunningham does not specifically disclose testing. However, Brill discloses the following limitations:

The system of claim 8, the one or more processing devices to perform operations comprising evaluating, by a model evaluator module, the trained GNN model using a testing dataset comprising datapoints to test the GNN model. (Brill discloses testing in at least [0043]; [0061].)

It would have been obvious to one of ordinary skill in the art to combine the large language models of Cunningham with the intelligent system for integrating and presenting information of Brill in order to update electronic record systems (Brill abstract) because the references are analogous since they both fall within Applicant's field of endeavor and are reasonably pertinent to the problem with which Applicant is concerned.

Regarding claim 12, Cunningham discloses:

The system of claim 8, the one or more processing devices to perform operations comprising re-training, by the model trainer module, the trained GNN model using feedback information.
(Cunningham [0032-0034] disclose a “large language model” refers to a machine learning model trained to perform computer tasks to generate or identify content items in response to trigger events (e.g., user interactions, such as text queries and button selections). In particular, a large language model can be a neural network (e.g., a deep neural network) with many parameters trained on large quantities of data (e.g., unlabeled text) using a particular learning technique (e.g., self-supervised learning). For example, a large language model can include parameters trained to generate model outputs (e.g., content items, summaries, or query responses) and/or to identify content items based on various contextual data, including graph information from a knowledge graph and/or historical user account behavior. A neural network can include a deep neural network, a convolutional neural network, a transformer neural network, a recurrent neural network (e.g., an LSTM), a graph neural network, or a generative adversarial neural network. Upon training, such a neural network may become a large language model. See also [0081] model feedback.)

Regarding claim 13, although Cunningham discloses personalizing content using large language models, Cunningham does not specifically disclose encoding. However, Brill discloses the following limitations:

The system of claim 8, the one or more processing devices to perform operations comprising: detecting a transaction from activity data associated with a user from a device; encoding the knowledge graph with a new user node representing the user, a new event node representing the transaction, and a new edge between the new user node and the new event node; generating a new datapoint for the training dataset; and re-training the GNN model using the new datapoint.
(Brill [0060-0061] disclose encoded rules; information can include various facts 360, which may be stored as nodes connected by edges representing relationships between the nodes such that the nodes and edges together can encode the rules 350.)

It would have been obvious to one of ordinary skill in the art to combine the large language models of Cunningham with the intelligent system for integrating and presenting information of Brill in order to update electronic record systems (Brill abstract) because the references are analogous since they both fall within Applicant's field of endeavor and are reasonably pertinent to the problem with which Applicant is concerned.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANCIS Z SANTIAGO-MERCED whose telephone number is (571)270-5562. The examiner can normally be reached M-F 7am-4:30pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, BRIAN EPSTEIN, can be reached at 571-270-5389. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FRANCIS Z. SANTIAGO MERCED/
Examiner, Art Unit 3625

Prosecution Timeline

Feb 29, 2024
Application Filed
Jan 26, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547958
SWAPPING TASK ASSIGNMENTS TO DETERMINE TASK SELECTION
2y 5m to grant • Granted Feb 10, 2026
Patent 12524719
SYSTEMS AND METHODS FOR PREDICTING AND MANAGING TOOL ASSETS
2y 5m to grant • Granted Jan 13, 2026
Patent 12493845
SYSTEMS AND METHODS FOR MULTI-CHANNEL CUSTOMER COMMUNICATIONS CONTENT RECOMMENDER
2y 5m to grant • Granted Dec 09, 2025
Patent 12348826
HOTSPOT LIST DISPLAY METHOD, ELECTRONIC DEVICE, AND STORAGE MEDIUM
2y 5m to grant • Granted Jul 01, 2025
Patent 12271852
SYSTEMS AND METHODS FOR MULTI-CHANNEL CUSTOMER COMMUNICATIONS CONTENT RECOMMENDER
2y 5m to grant • Granted Apr 08, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
29%
Grant Probability
70%
With Interview (+41.1%)
3y 7m
Median Time to Grant
Low
PTA Risk
Based on 126 resolved cases by this examiner. Grant probability derived from career allow rate.
