Prosecution Insights
Last updated: April 19, 2026
Application No. 18/506,422

METHOD AND SYSTEM FOR PROVIDING CUSTOMER-SPECIFIC INFORMATION

Final Rejection — §101, §103
Filed
Nov 10, 2023
Examiner
PRATT, EHRIN LARMONT
Art Unit
3629
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
State Farm Mutual Automobile Insurance Company
OA Round
2 (Final)
Grant Probability: 15% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 9m
Grant Probability with Interview: 28%

Examiner Intelligence

Career Allow Rate: 15% — grants only 15% of cases (52 granted / 338 resolved; -36.6% vs TC avg)
Interview Lift: +13.1% (moderate lift, among resolved cases with interview)
Avg Prosecution: 4y 9m (typical timeline)
Total Applications: 379 across all art units (41 currently pending)

Statute-Specific Performance

§101: 37.1% (-2.9% vs TC avg)
§103: 35.5% (-4.5% vs TC avg)
§102: 12.5% (-27.5% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 338 resolved cases

Office Action

§101 §103
DETAILED ACTION

This communication is a Final Office Action on the merits in response to communications received on 12/02/2025. Therefore, claims 1-20 are pending and have been addressed below. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

1. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

2. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Under Step 1 of the two-part analysis from Alice Corp, claim 1 recites a process (i.e., a series of acts or steps), claim 14 recites a machine (i.e., a thing, consisting of parts, or of certain devices and combination of devices), and claim 20 recites a manufacture (i.e., an article that is given a new form, quality, property, or combination through man-made or artificial means). Thus, each of the claims falls within one of the four statutory categories.

3. Under Step 2A – Prong One of the two-part analysis from Alice Corp, the claimed invention recites an abstract idea.
Claims 1, 14, and 20 recite: “receiving…a first prompt associated with a customer;”, “creating…a vector associated with the first prompt;”, “comparing…the vector associated with the first prompt with vectors associated with a plurality of documents associated with the customer to identify one or more candidate documents from the plurality of documents;”, “creating…a second prompt based upon the first prompt and the one or more candidate documents;”, “inputting…the second prompt…to obtain a response to the second prompt;” and “presenting…the response.” Under the broadest reasonable interpretation, the limitations recite processes for receiving a query from a customer and responding with relevant documents that correspond to the query, which encompasses concepts such as a commercial interaction (i.e., marketing or sales activities, business relations) and mental processes (i.e., observations, evaluations, opinions, judgment) that fall within the certain methods of organizing human activity and mental processes groupings of abstract ideas. See MPEP 2106.04. The Applicant’s Specification at [0003] states: Chatbots utilizing generative pre-trained transformer (GPT) models (such as ChatGPT®) are powerful tools that may generate realistic and engaging responses to user inputs. A GPT-based chatbot, however, may not always provide factual or accurate information in its responses, especially when the conversation involves specific or specialized knowledge. The chatbot may make up facts, misinterpret information, or confuse different domains or entities, i.e., have the so-called chatbot "hallucinations." Accordingly, conventional methods or systems of utilizing chatbots to provide information may fail to provide accurate information associated with a specific customer as the chatbots may confuse information of the specific customer with other customers and/or make other mistakes.
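The sequence the independent claims recite (embed the first prompt, compare it against vectors for the customer's documents, assemble a second prompt from the candidates, and hand that second prompt to a chatbot) is retrieval-augmented prompting. A minimal sketch follows; the bag-of-words embedding and the stand-in chatbot are invented for illustration and are not the applicant's actual implementation.

```python
import math
import re

def embed(text):
    # Toy bag-of-words vector; a stand-in for whatever encoder a real
    # system would use (the claims do not specify one).
    vec = {}
    for word in re.findall(r"[a-z0-9]+", text.lower()):
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a, b):
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer(first_prompt, documents, chatbot, top_k=2):
    # Create a vector associated with the first prompt.
    q = embed(first_prompt)
    # Compare it with vectors for the customer's documents to pick candidates.
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    candidates = ranked[:top_k]
    # Create a second prompt based upon the first prompt and the candidates.
    second_prompt = ("Answer using only this context:\n"
                     + "\n".join(candidates)
                     + "\nQuestion: " + first_prompt)
    # Input the second prompt into the chatbot and return its response.
    return chatbot(second_prompt)

docs = ["claim form describing an auto accident on May 3",
        "medical record describing a neck injury",
        "unrelated grocery receipt"]
reply = answer("what injury is in my claim?", docs, chatbot=lambda p: p.upper())
```

Echoing the prompt back in upper case stands in for a model call; the point is only that the candidate documents, not the whole corpus, reach the chatbot as ground-truth context.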
Consistent with the disclosure, the “receiving” step allows a customer to ask a question, i.e., related to insurance processing, in order to obtain a response with relevant/helpful documents that correspond to the question, which are activities that pertain to commercial interactions, i.e., sales/marketing activities and/or business relations. Also, the series of steps of “creating” and “comparing” are mental processes for evaluating the information from the customer’s prompt against information from similar documents for providing a response for the customer’s question, which are acts that may be performed in the human mind with or without pen and paper. Accordingly, the claim recites an abstract idea.

4. Under Step 2A – Prong Two of the two-part analysis from Alice Corp, this judicial exception is not integrated into a practical application because the additional elements of: “a computer-implemented method”, “by one or more processors”, “from a user device”, “by the one or more processors”, “into a chatbot”, “via the user device”, “a computing system comprising:”, “one or more processors”, “a non-transitory memory storing one or more instructions, the instructions, when executed by the one or more processors, cause the one or more processors to:”, “a computer readable storage medium storing non-transitory computer readable instructions for providing customer-specific information, wherein the non-transitory computer readable instructions, when executed on one or more processors, cause the one or more processors to:” – see claims 1, 14, 20 – are recited at a high level of generality in light of the specification [Fig. 1, ¶ 0024, 0028, 0031, 0048-0049, 0108-0109]. For example, the processor 120 may include one or more suitable processors (e.g., central processing units (CPUs) and/or graphics processing units (GPUs)).
The user device 102 may be any suitable device, including one or more computers, mobile devices, wearables, smart watches, smart contact lenses, smart glasses, augmented reality glasses, virtual reality headsets, mixed or extended reality glasses or headsets, and/or other electronic or electrical component. In some embodiments, the voice bots or chatbots 150 discussed herein may be configured to utilize AI and/or ML techniques. For instance, the voice bot or chatbot 150 may be a ChatGPT bot, an InstructGPT bot, a Codex bot, or a Google Bard bot. The voice bot or chatbot 150 may employ supervised or unsupervised ML techniques, which may be followed by, and/or used in conjunction with, reinforced or reinforcement learning techniques. The voice bot or chatbot 150 may employ the techniques utilized for ChatGPT, ChatGPT bot, InstructGPT bot, Codex bot, or Google Bard bot. Thus, because the specification describes the additional elements in general terms without describing the particulars, the additional elements may be broadly but reasonably construed as reciting generic computer components performing conventional computer functions in light of the applicant’s specification. Therefore, the additional elements recited in the claim add the words “apply it” with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer processor as a tool to perform the abstract idea as discussed in MPEP 2106.05 (f). The other additional element of “for providing customer-specific information, comprising:” is an attempt to limit the claimed invention to a particular technological environment or field of use, as discussed in MPEP 2106.05 (h).
Thus, the additional claim elements are not indicative of integration into a practical application, because the claims do not involve improvements to the functioning of a computer, or to any other technology or technical field (MPEP 2106.05(a)), the claims do not apply or use the abstract idea to effect a particular treatment or prophylaxis for a disease or medical condition (Vanda Memo), the claims do not apply the abstract idea with, or by use of, a particular machine (MPEP 2106.05(b)), the claims do not effect a transformation or reduction of a particular article to a different state or thing (MPEP 2106.05(c)), and the claims do not apply or use the abstract idea in some other meaningful way beyond generally linking the use of the abstract idea to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception (MPEP 2106.05(e) and Vanda Memo). Therefore, the claims do not, for example, purport to improve the functioning of a computer. Nor do they effect an improvement in any other technology or technical field. Accordingly, the additional elements do not impose any meaningful limits on practicing the abstract idea and the claims are directed to an abstract idea. 5. 
The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration of the abstract idea into a practical application, the additional element(s) of: “a computer-implemented method”, “by one or more processors”, “from a user device”, “, by the one or more processors”, “into a chatbot”, “via the user device”, “a computing system comprising:”, “one or more processors”, “a non-transitory memory storing one or more instructions, the instructions, when executed by the one or more processors, cause the one or more processors to:”, “a computer readable storage medium storing non-transitory computer readable instructions for providing customer-specific information, wherein the non-transitory computer readable instructions, when executed on one or more processors, cause the one or more processors to:” – see claims 1, 14, 20 – at best amount to nothing more than mere instructions in which to apply the judicial exception and cannot provide an inventive concept at Step 2B.

6. Claims 2-13 and 15-19 are dependent on claims 1 and 14. Claims 2 and 15 recite “wherein prior to comparing the vector associated with the first prompt with the vectors associated with the plurality of documents, the instructions, when executed by the one or more processors, cause the one or more processors to: detect a stimulus associated with an insurance claim process; and retrieve the plurality of documents from a record of a customer database associated with the customer.” which further narrows how the abstract idea may be performed, but does not make the claim any less abstract. For example, collecting data and recognizing certain data within the collected data set is a mental process.
Claim 3 recites “wherein the stimulus is the customer originating a conversation with an enterprise representative associated with the insurance claim process, the customer updating an insurance claim associated with the insurance claim process, an update to documents associated with the insurance claim process, and/or detecting new documents associated with the insurance claim process.” which further narrows the data recited in the abstract idea, but does not make the claim any less abstract. Claims 4 and 16 recite “wherein comparing the vector associated with the first prompt and the vectors associated with the plurality of documents to identify the one or more candidate documents comprises: determining, by the one or more processors, similarity values between the vector associated with the first prompt and the vectors associated with the plurality of documents; and selecting, by the one or more processors, the one or more candidate documents based upon the corresponding similarity values.” which further narrows how the abstract idea may be performed, but does not make the claim any less abstract.
Claims 5 and 17 recite “receiving, by the one or more processors from the user device, a third prompt associated with the customer; creating, by the one or more processors, a vector associated with the third prompt; determining, by the one or more processors, similarity values between the vector associated with the third prompt and the vectors associated with the plurality of documents; determining, by the one or more processors, that none of the plurality of documents meets a similarity threshold; performing, by the one or more processors, at least one of the following: (i) identifying a second one or more candidate documents from the plurality of documents based upon the corresponding similarity values; and/or (ii) presenting a warning to the user via the user device.” which further narrows how the abstract idea may be performed, but does not make the claim any less abstract. Claims 6 and 18 recite “wherein the vectors associated with the plurality of documents are maintained in a vector database, wherein ingesting a particular document into the vector database comprises: separating, by the one or more processors, the particular document into a set of chunks; creating, by the one or more processors, a respective vector corresponding to the chunks in the set of chunks; and storing, by the one or more processors, vectors corresponding to the set of chunks in the vector database” which further narrows how the abstract idea may be performed, but does not make the claim any less abstract.
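Claims 6 and 18 describe ingesting a document into the vector database by chunking. A short sketch of that ingestion step follows; the fixed-size chunking rule and the dictionaries standing in for the vector and relational stores are invented for illustration, not taken from the claims.

```python
def embed(text):
    # Toy bag-of-words vector; placeholder for an unspecified encoder.
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def ingest(document, doc_id, vector_db, relations, chunk_words=8):
    # Separate the particular document into a set of chunks
    # (a fixed word count here; the claim dictates no chunking strategy).
    words = document.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    for n, chunk in enumerate(chunks):
        chunk_id = f"{doc_id}:{n}"
        # Create and store a respective vector per chunk in the vector database.
        vector_db[chunk_id] = embed(chunk)
        # Maintain the chunk-to-document relationship (claim 8's
        # relational data table, modeled here as a plain mapping).
        relations[chunk_id] = doc_id
    return chunks

vector_db, relations = {}, {}
ingest("The insured reported a rear-end collision on May 3 and "
       "submitted a repair estimate for the bumper the following week.",
       "doc-17", vector_db, relations)
```

Claim 7's comparison step would then score the prompt vector against each chunk vector rather than a single whole-document vector.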
The additional elements of the “one or more processors” and “vector database” organize and store data according to their ordinary operating capacity and do not alter the analysis. Claim 7 recites “wherein comparing the vector associated with the first prompt with the vectors associated with the particular document includes: comparing, by the one or more processors, the vector associated with the first prompt with vectors associated with the respective vectors corresponding to the chunks in the set of chunks.” which further narrows how the abstract idea may be performed, but does not make the claim any less abstract. Claim 8 recites “further comprising: maintaining, by the one or more processors, a relationship between the set of chunks and the particular document in a relational data table.” which adds insignificant extra-solution activity, i.e., data storage, to the judicial exception. Also, see electronic recordkeeping, Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 225, 110 USPQ2d 1984 (2014) (creating and maintaining "shadow accounts"); Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (updating an activity log). Claim 9 recites “wherein the plurality of documents includes: insurance claim forms, medical records, bills, and/or police reports associated with the customer.” which further describes the data or information recited in the abstract idea, but does not make the claim any less abstract. Claim 10 recites “wherein the first prompt includes an inquiry regarding: an accident history of the customer, a description of an injury of the customer, a causation of an injury of the customer, a health condition of the customer prior to an injury, a general medical history of the customer, a medical history of the customer associated with an injury, a description of a property damage, a causation of a property damage, a repair and/or replacement history associated with a property damage, a description of a property loss, a causation of a property loss, a replacement history associated with a property loss, and/or a monetary amount requested by the customer.” which further describes the data or information recited in the abstract idea, but does not make the claim any less abstract. Claims 11 and 19 recite “wherein creating the vector associated with the first prompt includes: splitting, by the one or more processors, the first prompt into semantic clusters; encoding, by the one or more processors, the semantic clusters as a set of vectors, wherein a similarity between the vectors associated with the semantic clusters depends on a relevance between the semantic clusters corresponding to the vectors; and calculating, by the one or more processors, a feature vector based upon the set of vectors associated with the semantic clusters, the feature vector being the vector associated with the first prompt.” which further narrows how the abstract idea may be performed, but does not make the claim any less abstract. Claim 12 recites “wherein encoding the semantic clusters comprises: encoding, by the one or more processors and via a machine learning (ML) model comprising a plurality of parameters, the set of vectors, wherein the plurality of parameters are iteratively updated during training of the ML model.” which further narrows how the abstract idea may be performed, but does not make the claim any less abstract.
The additional elements of “by the one or more processors” and “via a machine learning (ML) model” as recited are merely being used in their ordinary capacity and aid in performing the abstract idea. Claim 13 recites “wherein the chatbot implements a trained model, wherein training the model includes: creating a first set of vectors associated with first training data; training the model in a first stage using the first set of vectors; creating a second set of vectors associated with second training data, wherein the second training data include prompts associated with questions and documents, and responses associated with the prompts; and training the model in a second stage using the second set of vectors.” which narrows how the abstract idea may be performed. The additional elements of “the chatbot”, “a trained model” and “training the model” add the words “apply it” (or an equivalent) or mere instructions to implement the abstract idea on a computer, as discussed in MPEP 2106.05(f). The steps for training are simply being used to update the data necessary for carrying out the judicial exception. Accordingly, each of the dependent claims was considered individually and in combination with the judicial exception, and none of the limitations recited in the dependent claims adds features that integrate the judicial exception into a practical application or provide an inventive concept.

Response to Arguments

Applicant's arguments filed 12/02/2025 have been fully considered but they are not persuasive.

With Respect to Rejections Under 35 USC 101

Applicant argues “Claims 1-20 Even assuming arguendo that claim 1 recites a judicial exception, claim 1 is not directed to an abstract idea under Prong 2, at least because claim 1 as a whole integrates the judicial exception into a practical application of the alleged judicial exception by providing an improvement to the function of a computer.
More particularly, claim 1 improves the quality of response generated by a chatbot by improving the quality of the input to the actual chatbot. As a result, the chatbot is less likely to hallucinate an answer, thereby improving the performance of the chatbot and the systems that interface with the chatbot. Applicant respectfully submits that the specification provides sufficient details regarding such improvements. As discussed in the specification: Chatbots utilizing generative pre-trained transformer (GPT) models (such as ChatGPT®) are powerful tools that may generate realistic and engaging responses to user inputs. A GPT-based chatbot, however, may not always provide factual or accurate information in its responses, especially when the conversation involves specific or specialized knowledge. The chatbot may make up facts, misinterpret information, or confuse different domains or entities, i.e., have the so-called chatbot "hallucinations." Accordingly, conventional methods or systems of utilizing chatbots to provide information may fail to provide accurate information associated with a specific customer as the chatbots may confuse information of the specific customer with other customers and/or make other mistakes. Specification, para. [0003] (emphasis added). To mitigate this technical problem, a computer system of the application may retrieve documents based upon the prompt from the user. The retrieved documents may provide ground-truth information that may be used to respond to the prompt. The computer system may then generate a second prompt including the ground-truth information of the documents and input the second prompt into a chatbot to cause it to generate a response to the second prompt. With the ground-truth information input to the chatbot as part of the prompt, the chatbot may be less prone to hallucinations. That is, the chatbot is more likely to base the answer upon the specific data provided as part of the prompt itself. 
Accordingly, Applicant respectfully submits this process generates improved chatbot prompts. Applicant further respectfully submits that this process of generating improved chatbot prompts is "a particular solution to a problem or a particular way to achieve a desired outcome." MPEP §2106.05(a). Moreover, claim 1 reflects the improvements described by the specification, and integrates any allegedly-recited abstract ideas into a practical application thereof in accordance with MPEP § 2106.04(d)(1).” The Examiner respectfully disagrees. Contrary to the remarks, the claimed invention remains ineligible under Step 2A Prong Two of the two-part analysis. Here, the Applicant argues the claimed invention improves the quality of response generated by a chatbot by improving the quality of input to the actual chatbot; however, these benefits are not technical improvements to chatbot functionality. Instead, they are benefits that flow from performing an abstract idea in conjunction with a generic chatbot. The Specification [¶ 0003] makes it clear that chatbots for generating realistic and engaging responses to user inputs were commonly used at the time of invention. An improvement to the information input into a chatbot and/or chatbot prompts is not equivalent to a technical improvement in the chatbot’s functionality. See MPEP 2106.05(a). For these reasons, the rejections under 101 are being maintained.
Applicant further argues “In particular, claim 1 recites, in part: creating, by the one or more processors, a vector associated with the first prompt; comparing, by the one or more processors, the vector associated with the first prompt with vectors associated with a plurality of documents associated with the customer to identify one or more candidate documents from the plurality of documents; creating, by the one or more processors, a second prompt based upon the first prompt and the one or more candidate documents; inputting, by the one or more processors, the second prompt into a chatbot to obtain a response to the second prompt. Accordingly, Applicant respectfully submits that claim 1 is patent eligible at least under Step 2A, Prong 2.” “Independent claims 14 and 20 recite limitations similar to those recited by claim 1, and therefore are patent eligible for at least reasons similar to those discussed above. Dependent claims 2-13 and 15-19 incorporate by reference each and every element recited by their respective independent claim, and therefore are patent eligible for at least reasons similar to those discussed above. Thus, Applicant respectfully requests withdrawal and reconsideration of the rejections of claims 1-20 under 35 U.S.C. § 101.” The Examiner respectfully disagrees. Contrary to the remarks, the claimed invention remains ineligible under Step 2A Prong Two of the two-part analysis. A claim is not patent eligible merely because it applies an abstract idea in a narrow way; the claim's focus must be something other than the abstract idea itself. The additional elements of “one or more processors” and “a chatbot” recite the words “apply it” (or an equivalent) with the judicial exception or merely include instructions to implement an abstract idea on a computer. At least [Fig. 1, ¶ 0024, 0028, 0031, 0048-0049, 0108-0109] of the Applicant’s Specification provides a general explanation regarding chatbot technology.
At best, the “one or more processors” and “chatbot” operate in their ordinary or normal capacity (i.e., receive prompt, generate response) to aid in performing the abstract idea. Merely adding a generic computer, generic computer components, or a programmed computer to perform generic computer functions does not automatically overcome an eligibility rejection. Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 223-24, 110 USPQ2d 1976, 1983-84 (2014). See In re Alappat, 33 F.3d 1526, 1545, 31 USPQ2d 1545, 1558 (Fed. Cir. 1994); In re Bilski, 545 F.3d 943, 88 USPQ2d 1385 (Fed. Cir. 2008). The specificity of the techniques recited in claim 1 is insufficient to establish patent eligibility. Claims 14 and 20 recite subject matter substantially similar to claim 1 and are being held ineligible for the same rationale. The rejection presents findings for each of the dependent claims 2-13 and 15-19, explaining why the additional limitations recited by these claims do not impart subject matter eligibility. For these reasons the rejections under 101 are being maintained. Applicant further argues “Further, the application provides a particular way to generate a vector associated with the first prompt. As discussed in the specification: Creating the vector associated with the first prompt may include: (1) splitting texts of the first prompt into semantic clusters, and (2) determining a feature vector of the first prompt based upon the semantic clusters. A semantic cluster may be one or more words, a portion of a word, and/or a character. ... When the semantic cluster is [a] phrase comprising more than one words, the server 404 may first split the sentence into words, and then cluster related words together. ... [T]he server 404 may determine the feature vector by (1) encoding the semantic clusters as a set of vectors, and (2) determining a feature vector based upon the set of vectors associated with the semantic clusters. ...
In some instances, a distance between the vectors reflects a semantic similarity between the corresponding semantic clusters, i.e., a smaller distance between two vectors corresponds to a greater similarity in semantic meanings between two corresponding semantic clusters. The distance between vectors may be a cosine distance, a Euclidean distance, or any other appropriate distance for vectors.” “Various techniques may be used to determine a feature vector for a set of vectors. For example,... the server 404 may combine the set of vectors into a matrix, calculate an eigenvector of the resulting matrix, and use the eigenvector as the feature vector. In yet another example, the server 404 may use a trained machine learning model (such as Recurrent Neural Networks (RNN), Bidirectional Encoder Representations from Transformer (BERT), etc.) to determine a feature vector for the set of semantic clusters. Specification, paras. [0085]-[0090] (emphasis added).” Accordingly, the feature vector of the first prompt may be indicative of the semantic meaning of the first prompt. The distance between the feature vector of the first prompt and the feature vector of a document may be indicative of the semantic similarity between the first prompt and the document. Using feature vectors generated in the manner described above, a computer system of the application may retrieve documents that are semantically similar to the first prompt. Advantageously, documents retrieved based upon semantic similarities may be more relevant to the prompt, as compared to documents retrieved using some conventional manners, such as retrieving documents based upon keyword matching. Documents of higher relevance may lead to responses of higher quality. The application therefore provides further improvement to the technologies of retrieving documents based upon prompts. 
Applicant respectfully submits that this improvement in document retrieval is achieved by a particular way of generating feature vectors of prompts and/or documents. See MPEP §2106.05(a) ("An important consideration in determining whether a claim improves technology is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome, as opposed to merely claiming the idea of a solution or outcome.").” The Examiner respectfully disagrees. Contrary to the remarks, the claimed invention remains ineligible under Step 2A Prong Two of the two-part analysis. Here, the Applicant relies upon features for creating a vector associated with a first prompt (such as splitting texts, encoding semantic clusters, determining a feature vector) that are not reflected in claim 1. Adding these features to the claim merely narrows how the abstract idea may be performed but does not make the claimed invention any less abstract. Next, the Applicant’s Specification (¶ 0085-0090) cited above also makes clear that the claimed invention employs any suitable machine learning techniques. Thus, the machine learning technology described in the disclosure is generic as demonstrated from the Specification and cannot be relied upon to show an improvement to machine learning technology. Lastly, the remarks discuss advantages for using the feature vector of the first prompt and the feature vector of a document for retrieving relevant documents. These advantages for document retrieval cited by Applicant do not lead towards eligibility. It has been clear since Alice that a claimed invention’s use of the ineligible concept to which it is directed cannot integrate the judicial exception into a practical application or supply the inventive concept. For these reasons, the rejections under 101 are being maintained.
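The specification passage quoted in this exchange names cosine and Euclidean distance between vectors and several ways to collapse a set of cluster vectors into one feature vector. The arithmetic can be shown concretely; the two-dimensional vectors below are invented for illustration, and the element-wise mean is only one of the reductions the specification mentions alongside eigenvector-based and model-based options.

```python
import math

def cosine_distance(a, b):
    # Smaller cosine distance = greater semantic similarity,
    # per the quoted specification text.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

def euclidean_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical cluster vectors: "neck" and "whiplash" should sit close
# together and "receipt" far away, if the encoder captures semantics.
neck, whiplash, receipt = [0.9, 0.1], [0.8, 0.2], [0.1, 0.9]

assert cosine_distance(neck, whiplash) < cosine_distance(neck, receipt)
assert euclidean_distance(neck, whiplash) < euclidean_distance(neck, receipt)

def feature_vector(vectors):
    # Element-wise mean of the cluster vectors, used here as the single
    # feature vector for the prompt.
    return [sum(col) / len(vectors) for col in zip(*vectors)]

fv = feature_vector([neck, whiplash])  # approximately [0.85, 0.15]
```

Either distance gives the same ranking here; a real system would pick one metric and a matching index structure in the vector database.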
Applicant argues “Claim 11 reflects the improvements described by the specification, and integrates any allegedly-recited abstract ideas into a practical application thereof in accordance with MPEP § 2106.04(d)(1). In particular, claim 11 recites, in part: creating the vector associated with the first prompt includes: splitting, by the one or more processors, the first prompt into semantic clusters; encoding, by the one or more processors, the semantic clusters as a set of vectors, wherein a similarity between the vectors associated with the semantic clusters depends on a relevance between the semantic clusters corresponding to the vectors; and calculating, by the one or more processors, a feature vector based upon the set of vectors associated with the semantic clusters, the feature vector being the vector associated with the first prompt. Accordingly, Applicant respectfully submits that claim 11 is patent eligible at least under Step 2A, Prong 2 at least for the additional reasons discussed above.” The Examiner respectfully disagrees. Contrary to the remarks, the claimed invention remains ineligible under Step 2A Prong Two. Here, the Applicant cites MPEP § 2106.04(d)(1) and restates the limitations recited in claim 11. Other than restating the steps in the claim, Applicant does not explain how these limitations impart subject matter eligibility. When viewed separately, these limitations further describe how a vector associated with the first prompt is created. As a whole, adding these limitations to claim 1 merely narrows how the abstract idea may be carried out, but does not alter the analysis. For these reasons, the rejections under 101 are being maintained.

With Respect to Rejections Under 35 USC 103

Applicant’s arguments, see pgs. 1-2, filed 12/02/2025, with respect to claims 1-20 have been fully considered and are persuasive.
The rejection under 35 USC 103 over Mahmound (US 2022/0156298 A1) in view of Raval Contractor (US 2021/0232613 A1) of 09/03/2025 has been withdrawn.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Sengupta (US 2024/0403566 A1) – ([¶ 0006] In some aspects, the techniques described herein relate to a method including: receiving data characterizing a first prompt from a user interface; generating data characterizing a second prompt, where the second prompt is configured to generate a response from an artificial intelligence model that has a greater relevancy than a response from the artificial intelligence model generated by providing the first prompt to the artificial intelligence model; receiving data characterizing a response to the second prompt by providing the data characterizing the second prompt to an artificial intelligence based model; and providing the response to the second prompt in the user interface.) K. Yager, "Domain-Specific chatbots for science using embeddings", June 2023, pgs. 1-12, downloaded from https://pubs.rsc.org/en/content/articlepdf/2023/dd/d3dd00112a, DOI:10.1039/D3DD00112A

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EHRIN PRATT, whose telephone number is (571) 270-3184. The examiner can normally be reached 8-5 EST, Monday-Friday.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lynda Jasmin, can be reached at 571-272-6782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EHRIN L PRATT/
Examiner, Art Unit 3629

/JESSICA LEMIEUX/
Supervisory Patent Examiner, Art Unit 3626

Prosecution Timeline

Nov 10, 2023
Application Filed
Aug 28, 2025
Non-Final Rejection — §101, §103
Nov 20, 2025
Applicant Interview (Telephonic)
Nov 20, 2025
Examiner Interview Summary
Dec 02, 2025
Response Filed
Feb 24, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12524786
METHODS AND SYSTEMS FOR DETERMINING GUEST SATISFACTION INCLUDING GUEST SLEEP QUALITY IN HOTELS
Granted Jan 13, 2026 (2y 5m to grant)

Patent 12175549
RECOMMENDATION ENGINE FOR TESTING CONDITIONS BASED ON EVALUATION OF TEST ENTITY SCORES
Granted Dec 24, 2024 (2y 5m to grant)

Patent 12079894
GUEST QUARTERS COORDINATION DURING MUSTER
Granted Sep 03, 2024 (2y 5m to grant)

Patent 12057143
SYSTEM AND METHODS FOR PROVIDING USER GENERATED VIDEO REVIEWS
Granted Aug 06, 2024 (2y 5m to grant)

Patent 11941642
QUEUE MANAGEMENT SYSTEM UTILIZING VIRTUAL SERVICE PROVIDERS
Granted Mar 26, 2024 (2y 5m to grant)
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 15%
With Interview: 28% (+13.1% lift)
Median Time to Grant: 4y 9m
PTA Risk: Moderate

Based on 338 resolved cases by this examiner. Grant probability derived from career allow rate.
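The with-interview figure appears to be the baseline grant probability plus the observed interview lift, rounded to a whole percentage. A minimal check of that arithmetic (the additive model is an assumption about how the dashboard combines the two numbers):

```python
baseline = 0.15         # examiner's career allow rate (grant probability)
interview_lift = 0.131  # lift observed in resolved cases with an interview

with_interview = baseline + interview_lift
print(round(with_interview * 100))  # -> 28
```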
