Prosecution Insights
Last updated: April 19, 2026
Application No. 18/734,932

DATABASE SYSTEMS AND METHODS FOR PERSONALIZED AGENT AUTOMATIONS

Non-Final Office Action: §102, §103
Filed: Jun 05, 2024
Examiner: SERROU, ABDELALI
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: Salesforce Inc.
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74% (437 granted / 587 resolved) — above average, +12.4% vs TC avg
Interview Lift: +30.4% among resolved cases with interview — a strong lift
Typical Timeline: 3y 3m average prosecution; 23 currently pending
Career History: 610 total applications across all art units
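As a sanity check on the headline figure, the career allow rate is simply grants over resolved cases. A minimal sketch using the counts shown above (the dashboard's exact rounding rule is an assumption):

```python
# Career allow rate from the counts reported above: 437 granted of 587 resolved.
granted, resolved = 437, 587
allow_rate_pct = 100 * granted / resolved
print(round(allow_rate_pct))  # prints 74
```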

Statute-Specific Performance

§101: 19.7% (-20.3% vs TC avg)
§103: 42.4% (+2.4% vs TC avg)
§102: 17.5% (-22.5% vs TC avg)
§112: 8.8% (-31.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 587 resolved cases.

Office Action

§102, §103

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The filed information disclosure statement (IDS) is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 5-7, 10-12, 14-16, and 19-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Bendersky (US 2024/0403564).
As per claim 1, Bendersky teaches determining, at a database system, an action to be performed on behalf of a user of a client device coupled to the database system over a network ([0028], [0042], receiving a textual prompt from a user specifying a task for an LLM); identifying, at the database system, a relevant subset of data in a database of the database system associated with the user based on the action ([0030], user category is identified from among a plurality of possible user categories each associated with a respective fine-tuned user prompt embedding stored in the embedding data store of the embedding identifier); generating, at the database system, a personalized input prompt for an execution plan for the action using the relevant subset of data ([0029], determining, based on the set of user features associated with the user, a user prompt embedding for the user); providing the personalized input prompt to a service configurable to generate a personalized conversational response comprising a sequence of steps for the execution plan ([0029], the LLM 240 receives the user prompt embedding and conditions the textual prompt on the user prompt embedding for the user to generate the personalized response to the textual prompt); receiving, at the database system, the personalized conversational response comprising textual content indicative of the sequence of steps of the execution plan from the service ([0029], the LLM 240 receives the user prompt embedding 212 and conditions the textual prompt 202 on the user prompt embedding 212 for the user 102 to generate the personalized response 252 to the textual prompt 202. The user prompt embedding 212 may include a respective soft prompt configured to guide the LLM 240 to provide personalized responses. The user prompt embedding 212 conditions the textual prompt 202 for the LLM 240 to generate personalized/tailored results for a dad under the age of 40 in the personalized response); automatically executing, by the database system, the steps of the execution plan in accordance with the sequence using the service to perform the action with respect to a data record in the database at the database system ([0028]-[0030], executing the user's prompt in accordance with stored instructions to perform operations such as responding to a user prompt like "I want to buy a pair of shoes"); and automatically providing, by the database system, a response to the client device indicative of the action with respect to the data record at the database system (Figs. 1-2, [0029]-[0033], providing the generated personalized response 252 to the textual prompt 202).

As per claim 2, Bendersky teaches identifying the relevant subset of data comprises identifying a prior execution plan associated with the user based on a relationship between the action and a prior action associated with the prior execution plan; and generating the personalized input prompt comprises grounding an input prompt for the execution plan for the action using the prior execution plan ([0012], [0038], wherein previous tasks or queries input to the LLM, which may include at least one of a recent activity history including previous queries during a dialog session and/or site visits by the user 102, recent documents from a private corpus of the user, recent user history information associated with the textual prompt, or personalized results associated with the textual prompt, are used to generate the personalized response).
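For orientation only (this sketch is not part of the office action, the application, or either cited reference), the claim 1 element mapping above describes a prompt-personalization pipeline: determine an action, pull the relevant subset of user data, ground a personalized prompt, ask an LLM service for a step sequence, then execute the steps in order. Every name below is a hypothetical stand-in:

```python
# Hypothetical sketch of the claim 1 flow. All function names, the database
# shape, and the fake LLM plan are illustrative stand-ins, not real APIs.
def determine_action(prompt_text):
    # "determining ... an action to be performed on behalf of a user"
    return prompt_text.lower().replace(" ", "_")

def relevant_subset(database, user, action):
    # "identifying ... a relevant subset of data ... based on the action"
    return [rec for rec in database.get(user, []) if action in rec.get("tags", [])]

def build_prompt(action, subset):
    # "generating ... a personalized input prompt ... using the relevant subset"
    return f"action={action}; context={subset}"

def fake_llm_service(prompt):
    # stand-in for the external service; returns an ordered execution plan
    return ["look_up_record", "update_record", "notify_user"]

def handle_request(database, user, prompt_text):
    action = determine_action(prompt_text)
    subset = relevant_subset(database, user, action)
    plan = fake_llm_service(build_prompt(action, subset))
    executed = list(plan)  # execute each step in the returned sequence
    return {"action": action, "executed": executed}
```

For example, `handle_request({"alice": [{"id": 7, "tags": ["update_contact"]}]}, "alice", "update contact")` returns the normalized action name and the three executed plan steps.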
As per claim 3, Bendersky teaches validating that the sequence of steps of the execution plan aligns with user data associated with the user in the database prior to automatically executing the steps of the execution plan ([0029], aligning the user prompt embedding with the set of user features associated with the user in order to generate the personalized response).

As per claim 5, Bendersky teaches generating an input prompt for execution information for a respective step of the execution plan; providing the input prompt to the service; receiving, from the service, an executable response to the input prompt for performing the respective step of the execution plan; and executing, at the database system, the executable response to invoke an auxiliary service for performing the respective step of the execution plan ([0028]-[0030], the language model system executes the LLM that receives, as input, the textual prompt and generates, as output, a personalized response to the textual prompt. The language model system receives the set of user features associated with the user as input and determines, as output, a user prompt embedding for the user. Thereafter, the LLM receives the user prompt embedding and conditions the textual prompt on the user prompt embedding for the user to generate the personalized response to the textual prompt).

As per claim 6, Bendersky teaches obtaining a subset of user profile data relevant to the respective step of the execution plan; and grounding the input prompt using the subset of user profile data prior to providing the input prompt to the service (Fig. 2 and [0012], [0029]-[0038], wherein user history information associated with the textual prompt is used to ground the prompt prior to providing the input prompt to the service).

As per claim 7, Bendersky teaches validating that the executable response aligns with user data associated with the user in the database prior to executing the executable response ([0029]).
As per claims 10-12 and 14-16, Bendersky teaches a computer readable medium ([0045]). The remaining steps are rejected for the same rationale as applied to the method steps of rejected claims 1-3 and 5-7.

As per claim 19, system claim 19 and method claim 1 are related as apparatus and the method of using same, with each claimed element's function corresponding to the claimed method step. Accordingly, claim 19 is similarly rejected under the same rationale as applied above with respect to method claim 1. Furthermore, Shek teaches one or more processors; and memory storing thereon instructions, as claimed ([0042]).

As per claim 20, Bendersky teaches wherein the service comprises a chatbot service at an external system coupled to the database system over the network, wherein the chatbot service is configured to generate the personalized conversational response using at least one of a large language model (LLM) or a generative pre-trained transformer (GPT) model (Figs. 1A-1C, [0003], [0023]-[0024]).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4, 8-9, 13, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Bendersky (US 2024/0403564) in view of Shek (US 2021/0165967).

As per claim 4, Bendersky teaches using user profile data associated with the user and user feedback to adjust the sequence of steps for the execution plan, wherein automatically executing the steps of the execution plan comprises automatically executing the adjusted sequence of steps of the execution plan in accordance with the adjusted sequence ([0012], [0038], wherein previous tasks or queries input to the LLM 240, which may include at least one of a recent activity history including previous queries during a dialog session and/or site visits by the user 102, recent documents from a private corpus of the user, recent user history information associated with the textual prompt, or personalized results associated with the textual prompt, are used to generate the personalized response).
Bendersky may not explicitly disclose augmenting the personalized input prompt using user profile data associated with the user in response to misalignment between the sequence of steps of the execution plan and the user profile data, resulting in an augmented personalized input prompt; and providing the augmented personalized input prompt to the service configurable to generate a second personalized conversational response comprising an adjusted sequence of steps for the execution plan, wherein automatically executing the steps of the execution plan comprises automatically executing the adjusted sequence of steps of the execution plan in accordance with the adjusted sequence. Shek, in the same field of endeavor, teaches these features ([0026]-[0034]). Therefore, it would have been obvious at the time the application was filed to use the above features of Shek with the system of Bendersky, in order to maintain accuracy, relevance, and performance as real-world data changes.

As per claims 8-9, Bendersky teaches validating that the executable response aligns with user data associated with the user in the database prior to executing the executable response ([0029]).
Furthermore, Shek, in the same field of endeavor, teaches determining, for every step, relevancy scores for each entity in the set of entities and the set of one or more relationships, wherein the initial set of relevancy scores are based at least on a domain for the topic of the chatbot conversation ([0003], [0012]). Therefore, it would have been obvious at the time the application was filed to use the above features of Shek with the system of Bendersky, in order to perform validating that the auxiliary service associated with the executable response aligns with user data associated with the user in the database prior to executing the executable response; and, for each respective step of the execution plan, validating that the respective step aligns with user data associated with the user in the database prior to executing it, as claimed. This would enhance accuracy and ensure reliable performance of conversational systems.

As per claims 13, 17, and 18, Bendersky teaches a computer readable medium ([0045]). The remaining steps are rejected for the same rationale as applied to the method steps of rejected claims 4, 8, and 9.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDELALI SERROU, whose telephone number is (571) 272-7638. The examiner can normally be reached M-F, 9 AM - 5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre-Louis Desir, can be reached at 571-272-7799.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/ABDELALI SERROU/
Primary Examiner, Art Unit 2659

Prosecution Timeline

Jun 05, 2024 — Application Filed
Jan 08, 2026 — Non-Final Rejection (§102, §103)
Mar 25, 2026 — Interview Requested
Apr 09, 2026 — Examiner Interview Summary
Apr 09, 2026 — Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602544 — INFORMATION PROCESSING APPARATUS, OPERATION METHOD, AND RECORDING MEDIUM — granted Apr 14, 2026 (2y 5m to grant)
Patent 12596875 — TECHNIQUES FOR ADAPTIVE LARGE LANGUAGE MODEL USAGE — granted Apr 07, 2026 (2y 5m to grant)
Patent 12597417 — EXPORTING MODULAR ENCODER FEATURES FOR STREAMING AND DELIBERATION ASR — granted Apr 07, 2026 (2y 5m to grant)
Patent 12596889 — GENERATION OF NATURAL LANGUAGE (NL) BASED SUMMARIES USING A LARGE LANGUAGE MODEL (LLM) AND SUBSEQUENT MODIFICATION THEREOF FOR ATTRIBUTION — granted Apr 07, 2026 (2y 5m to grant)
Patent 12591603 — AUTOMATED KEY-VALUE EXTRACTION USING NATURAL LANGUAGE INTENTS — granted Mar 31, 2026 (2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 99% (+30.4%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 587 resolved cases by this examiner. Grant probability derived from career allow rate.
