Prosecution Insights
Last updated: April 19, 2026
Application No. 18/442,776

DATABASE SYSTEMS AND AUTOMATED CONVERSATIONAL INTERACTION METHODS USING BOUNDARY COALESCING CHUNKS

Non-Final OA — §102, §103
Filed
Feb 15, 2024
Examiner
THOMAS-HOMESCU, ANNE L
Art Unit
2656
Tech Center
2600 — Communications
Assignee
Salesforce Inc.
OA Round
1 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% — above average (+14.7% vs TC avg); 276 granted / 360 resolved
Interview Lift: +36.7% in resolved cases with interview
Typical Timeline: 2y 8m average prosecution; 34 applications currently pending
Career History: 394 total applications across all art units

Statute-Specific Performance

§101: 16.7% (-23.3% vs TC avg)
§103: 50.7% (+10.7% vs TC avg)
§102: 19.9% (-20.1% vs TC avg)
§112: 7.5% (-32.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 360 resolved cases

Office Action

§102 §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 15 February 2024 and 07 March 2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, 6, 8, 10, 11, 15, 17, and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 20230086653, hereinafter referred to as Zykh et al.

Regarding claim 1, Zykh et al. discloses a method comprising: dividing text data into a plurality of primary chunks based at least in part on one or more input criteria associated with a service (Zykh et al., fig. 3 – “Greeting Segment” + “Good Service” Response Segment. Here, “text data” = greeting segment + intent category. And, “…received review content into categories based on previously defined groups of intent. For example, such intent categories may include but not limited to: ‘Good Service’, ‘Slow Service’, ‘Employee Misconduct’, ‘Theft Or Fraud’, etc.,” Zykh et al., para [0035].), wherein the plurality of primary chunks are ordered in accordance with the text data (Zykh et al., fig. 3(301) – sentence segment. Here, the sentence segments of “Greeting Segment” and “Good Service” (a specific intent category) are interpreted as primary chunks.); generating one or more secondary chunks by merging a first subset of the text data of a preceding primary chunk and a second subset of the text data of a following primary chunk of a respective pair of adjacent primary chunks of the plurality of primary chunks (“Once the criteria at 508 is met, at 510, the response computing device 112 is configured to select based on the primary intent a sentence segment from a set of sentence segments. An example of this is shown at FIG. 3, where a greeting segment may be selected first and then a more specific intent driven sentence segment is selected in the response segment 303 and combined with one or more randomly selected possible inserts 302 for gaps in the response segment 303. In this way, by the sentence segment having gaps in a sentence and combining it with randomly selected inserts for the gaps to customize the sentence segment, a custom response may be generated,” Zykh et al., para [0067]. Here, the “customized sentence segment” is interpreted as a secondary chunk.); when a semantic similarity between a conversational input to a user interface and a respective secondary chunk of the one or more secondary chunks is greater than a threshold, inputting the respective secondary chunk to the service, wherein the service generates response data based at least in part on a subset of the text data associated with the respective secondary chunk (“In at least some aspects, at step 512, the response computing device 112 is further configured after generating the response at step 510 to automatically respond (e.g. communicate a response and instruct the display thereof on the user device 102). Notably, the response computing device 112 is configured to automatically respond to the harvested content 117 with the automated response 125 having the customized sentence segment (e.g. as illustrated in the response 121) when the confidence score (e.g. score information 132) exceeds a second threshold score (e.g. the second threshold 136),” Zykh et al., para [0068]. The customized sentence segment (i.e., secondary chunk) is submitted to the response generating service when the confidence score (i.e., semantic similarity) exceeds a threshold (i.e., second threshold score).); and providing a response to the conversational input at the user interface based at least in part on the response data generated by the service (“FIG. 1B is a diagram illustrating an example of an output display screen, or portions thereof, on a user interface of the response computing device of FIG. 1A for processing harvested content including review content and generating a response, in accordance with an embodiment,” Zykh et al., para [0009].).

As to claim 10, CRM claim 10 and method claim 1 are related as a method and a CRM of using the same, with each claimed element’s function corresponding to a method step. Accordingly, claim 10 is similarly rejected under the same rationale as applied above with respect to the method claim. Further, Zykh et al., fig. 2, shows storage (CRM), processor(s), and instructions.

As to claim 19, system claim 19 and method claim 1 are related as a method and a system of using the same, with each claimed element’s function corresponding to a method step. Accordingly, claim 19 is similarly rejected under the same rationale as applied above with respect to the method claim.

Regarding claim 2, Zykh et al. discloses the method of claim 1, wherein: the plurality of primary chunks are distinct (Zykh et al., fig. 3 – “Greeting Segment” + “Good Service” Response Segment. This example shows that the primary chunks “Greeting Segment” and “Good Service” are distinct.); and generating the one or more secondary chunks comprises generating the one or more secondary chunks that overlap at least a last portion of the preceding primary chunk and an initial portion of the following primary chunk of the respective pair of adjacent primary chunks of the plurality of primary chunks (Zykh et al., para [0067]. Here, the “customized sentence segment” is interpreted as a secondary chunk, wherein the customized sentence segment (secondary chunk) comprises a first sentence segment (a primary chunk) followed by an adjacent sentence segment (another primary chunk).).

As to claim 11, CRM claim 11 and method claim 2 are related as a method and a CRM of using the same, with each claimed element’s function corresponding to a method step. Accordingly, claim 11 is similarly rejected under the same rationale as applied above with respect to the method claim. Further, Zykh et al., fig. 2, shows storage (CRM), processor(s), and instructions.

Regarding claim 6, Zykh et al. discloses the method of claim 1, wherein inputting the respective secondary chunk to the service comprises: generating a grounded input prompt for the service based at least in part on the subset of text data associated with the respective secondary chunk and the conversational user input, wherein the subset of text data comprises the respective first subset of the text data of the preceding primary chunk and the respective second subset of the text data of the following primary chunk of the respective pair of adjacent primary chunks of the plurality of primary chunks corresponding to the respective secondary chunk (Zykh et al., para [0067]. Here, the secondary chunks (i.e., customized sentence segments) are provided to the response service. See also fig. 5(510)(512). The examiner further notes that an explicit definition of “grounded input prompt” is not provided in the claim language. The examiner interprets a grounded input prompt as one based on additional contextual information (i.e., the context of the harvested content).); and providing the grounded input prompt to the service, wherein the service generates the response data based on the grounded input prompt (Zykh et al., para [0067]. Here, the secondary chunks (i.e., customized sentence segments) are provided to the response service. See also fig. 5(510)(512).).

As to claim 15, CRM claim 15 and method claim 6 are related as a method and a CRM of using the same, with each claimed element’s function corresponding to a method step. Accordingly, claim 15 is similarly rejected under the same rationale as applied above with respect to the method claim. Further, Zykh et al., fig. 2, shows storage (CRM), processor(s), and instructions.

Regarding claim 8, Zykh et al. discloses the method of claim 1, wherein generating the one or more secondary chunks comprises merging the first subset of the text data of the preceding primary chunk and the second subset of the text data of the following primary chunk of the respective pair of adjacent primary chunks of the plurality of primary chunks corresponding to the respective secondary chunk using natural language processing (NLP) to delimit the respective secondary chunk (“Referring to FIGS. 1A and 1B, the response computing device 112 is configured to determine (e.g. via a language understanding model 258 shown in FIG. 2) a primary intent of the received review from the harvested content 117,” Zykh et al., para [0026]. NLP (via a language understanding model) is performed to glean the primary intent of the user’s review. This harvested content is then used to select a “greeting” sentence (first primary chunk) and an “intent” sentence (second primary chunk) that together form the delimited secondary chunk.).

As to claim 17, CRM claim 17 and method claim 8 are related as a method and a CRM of using the same, with each claimed element’s function corresponding to a method step.
Accordingly, claim 17 is similarly rejected under the same rationale as applied above with respect to the method claim. Further, Zykh et al., fig. 2, shows storage (CRM), processor(s), and instructions.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3-5 and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over US 20230086653, hereinafter referred to as Zykh et al., in view of US 20240419745, hereinafter referred to as Brooks et al.

Regarding claim 3, Zykh et al. discloses the method of claim 1, but does not disclose further comprising: determining a numerical representation of the conversational input to the user interface; and selecting the respective secondary chunk of the one or more secondary chunks when a second numerical representation of the respective secondary chunk is closest to the numerical representation of the conversational input relative to numerical representations of the plurality of primary chunks.

Brooks et al. is cited to disclose determining a numerical representation of the conversational input to the user interface (“More specifically, and as previously detailed, the depersonalization model may first encode the custom content into a vector space while maintaining semantic equivalence of the input prior to the decoder reconstructing the content in a semantically equivalent but depersonalized style,” Brooks et al., para [0047]. The vector is a numerical representation. And, “Such pipeline applications may also leverage hierarchical embodiments of this system to personalize data pipelines operating over multiple content types, and may include use within chatbots, commentary (e.g., sports, entertainment), audiobooks, lectures, etc.,” Brooks et al., para [0053]. This excerpt shows that the conversational input may be through a user interface, such as a chatbot.); and selecting the respective secondary chunk of the one or more secondary chunks when a second numerical representation of the respective secondary chunk is closest to the numerical representation of the conversational input relative to numerical representations of the plurality of primary chunks (Brooks et al., para [0047]. The “semantic equivalence” represents a closeness between the input and the vector representation.).

Brooks et al. benefits Zykh et al. by removing personalization from input content, thereby retaining semantic meaning while genericizing content, which may then be captured and translated to other content of the same content type for purposes of training the content generation model (Brooks et al., para [0047] and [0049]). Therefore, it would have been obvious to one skilled in the art to combine the teachings of Zykh et al. with those of Brooks et al. to facilitate automated response generation to a broad audience of online reviewers as taught by Zykh et al.
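Claim 3's selection step — picking the secondary chunk whose numerical representation is closest to that of the conversational input — can be sketched as follows. This is a hedged illustration, not code from the application or the cited references: the bag-of-words `embed` function stands in for whatever vector encoder (e.g., Brooks et al.'s depersonalization encoder) the system actually uses, and all names are hypothetical.

```python
# Illustrative sketch of claim 3's embedding-similarity selection.
# embed() is a toy bag-of-words stand-in for a real vector encoder.
import math
from collections import Counter
from typing import List

def embed(text: str) -> Counter:
    """Toy 'numerical representation': word-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_chunk(user_input: str, chunks: List[str]) -> str:
    """Return the chunk whose representation is closest to the input's."""
    q = embed(user_input)
    return max(chunks, key=lambda c: cosine(q, embed(c)))
```

In a production system the count vectors would be replaced by dense embeddings, but the nearest-chunk selection logic is the same.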
As to claim 12, CRM claim 12 and method claim 3 are related as a method and a CRM of using the same, with each claimed element’s function corresponding to a method step. Accordingly, claim 12 is similarly rejected under the same rationale as applied above with respect to the method claim. Further, Zykh et al., fig. 2, shows storage (CRM), processor(s), and instructions.

Regarding claim 4, Zykh et al., as modified by Brooks et al., discloses the method of claim 3, further comprising removing personal identifying information from the conversational input prior to determining the numerical representation of the conversational input (Brooks et al., para [0047].).

As to claim 13, CRM claim 13 and method claim 4 are related as a method and a CRM of using the same, with each claimed element’s function corresponding to a method step. Accordingly, claim 13 is similarly rejected under the same rationale as applied above with respect to the method claim. Further, Zykh et al., fig. 2, shows storage (CRM), processor(s), and instructions.

Regarding claim 5, Zykh et al., as modified by Brooks et al., discloses the method of claim 4, further comprising supplementing the response data with the personal identifying information removed from the conversational input to obtain the response prior to providing the response to the conversational input at the user interface (Brooks et al., para [0047].).

As to claim 14, CRM claim 14 and method claim 5 are related as a method and a CRM of using the same, with each claimed element’s function corresponding to a method step. Accordingly, claim 14 is similarly rejected under the same rationale as applied above with respect to the method claim. Further, Zykh et al., fig. 2, shows storage (CRM), processor(s), and instructions.

Claims 7, 9, 16, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US 20230086653, hereinafter referred to as Zykh et al., in view of US 20250173330, hereinafter referred to as Durg et al.

Regarding claim 7, Zykh et al. discloses the method of claim 6, wherein: the response data comprises a conversational response responsive to the conversational user input (Zykh et al., fig. 5(512). And, “FIG. 1B is a diagram illustrating an example of an output display screen, or portions thereof, on a user interface of the response computing device of FIG. 1A for processing harvested content including review content and generating a response, in accordance with an embodiment,” Zykh et al., para [0009].). Zykh et al., though, does not disclose that the service comprises a large language model-based (LLM-based) service.

Durg et al. is cited to disclose that the service comprises a large language model-based (LLM-based) service (“If a structured query cannot be generated by the prompt, the prompt containing the instructions is processed through an augmented retrieval process wherein similar documents are retrieved through the knowledge bases and those documents which satisfy a similarity threshold may be processed through the currently-selected LLM for the answer. The answer thus generated can be provided to the user via an output screen of the chatbot interface. Feedback is also collected from the users for the answers provided, and based on the feedback the LLM used for responding to user queries can be switched to a different LLM,” Durg et al., para [0017].).

Durg et al. benefits Zykh et al. by incorporating LLMs to generate answers to user input, thereby enabling versatile, context-aware response generation. Therefore, it would have been obvious to one skilled in the art to combine the teachings of Zykh et al. with those of Durg et al. to improve the automated response generation of Zykh et al.

As to claim 16, CRM claim 16 and method claim 7 are related as a method and a CRM of using the same, with each claimed element’s function corresponding to a method step. Accordingly, claim 16 is similarly rejected under the same rationale as applied above with respect to the method claim. Further, Zykh et al., fig. 2, shows storage (CRM), processor(s), and instructions.

Regarding claim 9, Zykh et al. discloses the method of claim 1, but not wherein: the service comprises a large language model-based (LLM-based) chatbot service; inputting the respective secondary chunk to the service comprises providing a grounded input prompt to the LLM-based chatbot service comprising the subset of the text data associated with the respective secondary chunk and the conversational user input; the LLM-based chatbot service generates a conversational response to the conversational user input using the subset of the text data associated with the respective secondary chunk; and providing the response to the conversational input at the user interface comprises updating the user interface to provide a graphical representation of the conversational response responsive to the conversational user input.

Durg et al. is cited to disclose the service comprises a large language model-based (LLM-based) chatbot service (Durg et al., para [0017].); inputting the respective secondary chunk to the service comprises providing a grounded input prompt to the LLM-based chatbot service comprising the subset of the text data associated with the respective secondary chunk and the conversational user input (“New topics can be identified through LLM prompting with few-shot prompting or classical topic modeling techniques. Few-shot prompting can be used as a technique to enable in-context learning where demonstrations are provided in the prompt to steer the model to better performance. The demonstrations serve as conditioning for subsequent examples where the model is to generate a response. New intents can be added to address prominent gaps to improve the accuracies of the LLMs,” Durg et al., para [0034]. The examiner further notes that an explicit definition of “grounded input prompt” is not provided in the claim language. The examiner interprets a grounded input prompt as one based on additional contextual information.); the LLM-based chatbot service generates a conversational response to the conversational user input using the subset of the text data associated with the respective secondary chunk (Durg et al., para [0034].); and providing the response to the conversational input at the user interface comprises updating the user interface to provide a graphical representation of the conversational response responsive to the conversational user input (Durg et al., para [0034].).

Durg et al. benefits Zykh et al. by incorporating LLMs to generate answers to user input, thereby enabling versatile, context-aware response generation. Therefore, it would have been obvious to one skilled in the art to combine the teachings of Zykh et al. with those of Durg et al. to improve the automated response generation of Zykh et al.

As to claim 18, CRM claim 18 and method claim 9 are related as a method and a CRM of using the same, with each claimed element’s function corresponding to a method step. Accordingly, claim 18 is similarly rejected under the same rationale as applied above with respect to the method claim. Further, Zykh et al., fig. 2, shows storage (CRM), processor(s), and instructions.

As to claim 20, system claim 20 and method claim 9 are related as a method and a system of using the same, with each claimed element’s function corresponding to a method step. Accordingly, claim 20 is similarly rejected under the same rationale as applied above with respect to the method claim.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See attached PTO-892. In particular, the examiner notes that Grimshaw et al., para [0030], describes a message generator which combines at least a portion of a selected message and a request phrase to form a prompt for requesting one or more draft messages.
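The boundary-coalescing chunking recited in claim 1 — ordered primary chunks, with secondary chunks formed by merging the tail of one primary chunk with the head of the next — can be sketched as below. This is a minimal illustration under assumed parameters (word-count chunk sizes, fixed tail/head widths); the application itself defines the chunking by "input criteria associated with a service," and all function names here are hypothetical.

```python
# Hypothetical sketch of claim 1's chunking scheme; sizes and names are illustrative.
from typing import List

def primary_chunks(text: str, max_words: int = 50) -> List[str]:
    """Divide text data into ordered primary chunks of at most max_words words."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def secondary_chunks(chunks: List[str], tail_words: int = 10, head_words: int = 10) -> List[str]:
    """For each adjacent pair of primary chunks, merge the tail of the preceding
    chunk with the head of the following chunk into a boundary-spanning secondary chunk."""
    merged = []
    for prev, nxt in zip(chunks, chunks[1:]):
        tail = " ".join(prev.split()[-tail_words:])
        head = " ".join(nxt.split()[:head_words])
        merged.append(tail + " " + head)
    return merged
```

The point of the overlap is that content straddling a primary-chunk boundary — which neither primary chunk captures whole — survives intact in some secondary chunk.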
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANNE L THOMAS-HOMESCU, whose telephone number is (571) 272-0899. The examiner can normally be reached Mon-Fri, 8-6.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh M Mehta, can be reached at 571-272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANNE L THOMAS-HOMESCU/
Primary Examiner, Art Unit 2656
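Claims 6 and 9, discussed in the office action above, recite a "grounded input prompt" that combines the retrieved chunk text with the conversational user input before invoking the (LLM-based) service. A minimal sketch of such prompt construction, with an assumed template that is not from the application or the cited art:

```python
# Hedged sketch of a "grounded input prompt" (claims 6 and 9).
# The template wording is an illustrative assumption.
def grounded_prompt(chunk_text: str, user_input: str) -> str:
    """Combine retrieved chunk text (context) with the user's input."""
    return (
        "Answer using only the context below.\n"
        f"Context: {chunk_text}\n"
        f"User: {user_input}\n"
        "Assistant:"
    )
```

The resulting string would be passed to the response-generating service, which answers from the supplied context rather than from the model's parametric knowledge alone.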

Prosecution Timeline

Feb 15, 2024
Application Filed
Feb 02, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592241
METHOD AND APPARATUS FOR ENCODING AND DECODING AUDIO SIGNAL USING COMPLEX POLAR QUANTIZER
2y 5m to grant Granted Mar 31, 2026
Patent 12591741
VIOLATION PREDICTION APPARATUS, VIOLATION PREDICTION METHOD AND PROGRAM
2y 5m to grant Granted Mar 31, 2026
Patent 12573369
METHOD FOR CONTROLLING UTTERANCE DEVICE, SERVER, UTTERANCE DEVICE, AND PROGRAM
2y 5m to grant Granted Mar 10, 2026
Patent 12561684
Evaluating User Status Via Natural Language Processing and Machine Learning
2y 5m to grant Granted Feb 24, 2026
Patent 12554926
METHOD, DEVICE, COMPUTER EQUIPMENT AND STORAGE MEDIUM FOR DETERMINING TEXT BLOCKS OF PDF FILE
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 77%
With Interview: 99% (+36.7%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 360 resolved cases by this examiner. Grant probability derived from career allow rate.
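The headline grant probability follows directly from the career figures stated above: 276 grants out of 360 resolved cases.

```python
# Grant probability as stated on this page: career allow rate from 276/360.
granted, resolved = 276, 360
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 76.7%, shown rounded as 77%
```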
