Prosecution Insights
Last updated: April 19, 2026
Application No. 18/807,547

SYSTEMS AND METHODS FOR RESPONDING TO USER INPUTS

Non-Final OA (§101, §103)
Filed
Aug 16, 2024
Examiner
SERROU, ABDELALI
Art Unit
2659
Tech Center
2600 — Communications
Assignee
Docsplain AI Doctor Inc.
OA Round
1 (Non-Final)
74%
Grant Probability
Favorable
1-2
OA Rounds
3y 3m
To Grant
99%
With Interview

Examiner Intelligence

Grants 74% — above average
74%
Career Allow Rate
437 granted / 587 resolved
+12.4% vs TC avg
Strong +30% interview lift
+30.4%
Interview Lift
resolved cases with interview
Typical timeline
3y 3m
Avg Prosecution
23 currently pending
Career history
610
Total Applications
across all art units

Statute-Specific Performance

§101
19.7%
-20.3% vs TC avg
§103
42.4%
+2.4% vs TC avg
§102
17.5%
-22.5% vs TC avg
§112
8.8%
-31.2% vs TC avg
Black line = Tech Center average estimate • Based on career data from 587 resolved cases
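Each "vs TC avg" delta is the examiner's allowance rate minus the Tech Center average, so the implied baseline can be recovered from any row. A quick sanity check over the table values (the dictionary below simply restates the figures shown above):

```python
# Each "vs TC avg" delta is the examiner's rate minus the Tech Center
# average, so rate - delta recovers the implied TC baseline per statute.
stats = {
    "§101": (19.7, -20.3),
    "§103": (42.4, +2.4),
    "§102": (17.5, -22.5),
    "§112": (8.8, -31.2),
}
for statute, (rate, delta) in stats.items():
    print(statute, "implied TC avg:", round(rate - delta, 1))
```

Every statute maps back to the same 40.0% baseline, consistent with the single black Tech Center average line described in the chart footnote.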

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

2. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Step 1: Is the claimed invention to a process, machine, manufacture, or composition of matter? The claimed invention, at independent claims 1 and 14, is directed to a method (process), system (machine), and computer readable medium (manufacture) for receiving, by a processor, a user input including a category selection and contextual data; providing, by the processor, an input prompt to a large language model (LLM) based on the user input, the input prompt including a source identifier and one or more instructions; receiving, by the processor, an LLM output generated in response to the input prompt, the LLM output including data limited to sources identified by the source identifier; and providing, by the processor, a user output based on the LLM output.

Step 2A, prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Under the 35 U.S.C. 101 guidelines and the broadest reasonable interpretation of the claims, the claimed steps fall within the "Mental Processes" grouping of abstract ideas because they cover concepts performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III.
The step of receiving a user input including a category selection and contextual data may be practically performed by a human receiving a query including a category selection and contextual data. The steps of providing an input prompt to a large language model (LLM), receiving an LLM output, and providing a user output based on the LLM output are mere data gathering and output recited at a high level of generality, and thus are insignificant extra-solution activity. See MPEP 2106.05(g). The claims do not provide any details about how the LLM operates, how the input prompt is processed, or how the output response is generated. Therefore, the claimed steps fall within the mental process grouping of abstract ideas.

Step 2A, prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? The additional elements of "receiving by a processor", "generating a LLM output", and "providing by a processor" are mere data gathering and manipulation recited at a high level of generality, and thus are insignificant extra-solution activity. The processor is recited at a high level of generality, and it amounts to no more than mere instructions to apply the exception using a generic computer. See MPEP 2106.05(f). Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application, and the claims are directed to the judicial exception.

Step 2B: Does the claim recite additional elements that amount to significantly more than the abstract idea?
As to whether the claims as a whole amount to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim (Step 2B): as explained above in Step 2A, Prong 2, the "processor" is recited at a high level of generality, and even when considered in combination, these additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, and therefore do not provide an inventive concept. Accordingly, the claims are ineligible.

Dependent claims 2-13 and 15-26 further refer to the claimed source identifier, LLM output, instructions, and user input/query, which encompass a mental process that is practically performed in the human mind and insignificant extra-solution activity, as explained above in Step 2A, Prong 1. Claims 12 and 25 recite that the LLM is based on an OpenAI GPT model. However, the mere nominal recitation of a generic network appliance does not take the claim limitations out of the mental processes grouping. Accordingly, claims 1-26 are directed to an abstract idea and are not patent eligible.

Claim Rejections - 35 USC § 103

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 9, 13-17, 22, and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (US 20250006196) in view of Perez (US 20220292157).

As per claim 1, Wang teaches receiving, by a processor, a user input including a category selection and contextual data ([0026], receiving a user input "please turn on the kitchen lights every morning at 8 am". The user input represents instructions for one or more actions (e.g., API definitions) related to turning on the kitchen lights every morning); providing, by the processor, an input prompt to a large language model (LLM) based on the user input, the input prompt including a source identifier and one or more instructions ([0026], [0030], generating and providing an input prompt to the LLM.
The prompt includes the user input data and instructions to determine the one or more portions of data (or types of data) relevant to the processing of the user input); receiving, by the processor, an LLM output generated in response to the input prompt, the LLM output including data limited to sources identified by the source identifier ([0030], the LLM may receive and process the prompt and generate model output data representing the one or more portions of data (or types of data); and [0031], wherein the action plan execution component may process the prompt generation action plan data to execute the one or more instructions to retrieve/receive data corresponding to the user input, which may be used to generate the language model prompt); and providing, by the processor, a user output based on the LLM output ([0035], providing an output responsive to the user input).

Wang may not explicitly disclose the input prompt including a source identifier. Perez, in the same field of endeavor, teaches a request prompt including a source identifier ([0051]). Therefore, it would have been obvious at the time the application was filed to use the above feature of Perez with the system of Wang, in order to provide an input prompt including a source identifier and one or more instructions, as claimed. This would focus on information from a particular domain or trusted source.

As per claim 2, Wang may not explicitly disclose wherein the source identifier includes a list of trusted websites corresponding to the category selection. Perez, in the same field of endeavor, teaches a request prompt including a source identifier ([0051]). Therefore, it would have been obvious at the time the application was filed to use the above feature of Perez with the system of Wang, in order to provide an input prompt including a source identifier and one or more instructions, as claimed. This would focus on information from a particular domain or trusted source.
As per claim 3, Wang teaches wherein the LLM output is generated based on natural language processing of the user input ([0023], where the language model 160 is an LLM, the input to the LLM may be in the form of a prompt).

As per claim 4, Wang teaches wherein the one or more instructions includes an assigned engagement role to the LLM ([0026], wherein the instructions assign a role of turning on the kitchen lights).

As per claim 9, Wang teaches wherein the user output includes one or more follow-up questions ([0060], output model output data "anything else I can help you with").

As per claim 13, Wang teaches wherein the user output is stored in a memory and the method further comprises: providing, by the processor, a second input prompt to the LLM based on additional contextual data and the stored user output ([0057], for a subsequent iteration, generating a subsequent prompt).

As per claims 14-17, 22, and 26, system claims 14-17, 22, and 26 and method claims 1-4, 9, and 13 are related as an apparatus and the method of using the same, with each claimed element's function corresponding to the claimed method step. Accordingly, claims 14-17, 22, and 26 are similarly rejected under the same rationale as applied above with respect to method claims 1-4, 9, and 13. Furthermore, Wang teaches one or more processors; and a non-transitory computer-readable medium having stored thereon instructions, as claimed ([0113], [0123]).

Claims 5, 11, 12, 18, 24, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Perez, and further in view of Marwah (US 2025/0036878).

As per claims 5 and 18, Wang in view of Perez may not explicitly disclose wherein the user input includes a medical inquiry. Marwah, in the same field of endeavor, teaches wherein the user input includes a medical inquiry ([0081]).
Therefore, it would have been obvious at the time the application was filed to use the medical inquiry feature of Marwah with the system of Wang in view of Perez, in order to receive a user input including a medical inquiry, as claimed. This would provide a complete and consistent medical record.

As per claims 11 and 24, Wang in view of Perez may not explicitly disclose wherein the LLM output includes a list of sources used to generate the LLM output. Marwah, in the same field of endeavor, teaches wherein the LLM output includes a list of sources used to generate the LLM output ([0081], generating an answer to the medical question and including at least one source used to generate the answer). Therefore, it would have been obvious at the time the application was filed to use the above feature of Marwah with the system of Wang in view of Perez, in order to output a list of sources used to generate the LLM output, as claimed. This would provide a way to verify accuracy (Marwah, [0081]).

As per claims 12 and 25, Wang in view of Perez may not explicitly disclose wherein the LLM is based on an OpenAI® GPT model. Marwah, in the same field of endeavor, teaches wherein the LLM is based on an OpenAI® GPT model ([0100]). Therefore, it would have been obvious at the time the application was filed to use the above feature of Marwah with the system of Wang in view of Perez, in order to improve communication quality and accessibility.

Claims 6-8, 10, 19-21, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Perez and Marwah, and further in view of Koh (US 20190392926).

As per claims 6 and 19, Wang in view of Perez and Marwah may not explicitly disclose wherein the category selection includes a medical specialty selection. Koh, in the same field of endeavor, teaches a system for providing and organizing medical information, wherein a medical specialty selection is selected ([0087]).
Therefore, it would have been obvious at the time the application was filed to use Koh's above features with the system of Wang in view of Perez and Marwah, in order to improve healthcare systems.

As per claims 7 and 20, Wang in view of Perez and Marwah may not explicitly disclose wherein the contextual data includes patient triage data. Koh, in the same field of endeavor, teaches a system for providing and organizing medical information, wherein the contextual data includes patient triage data ([0071], wherein the user provides contextual data clarifying whether the needed drug dosage is for an adult patient or a pediatric patient). Therefore, it would have been obvious at the time the application was filed to use Koh's above features with the system of Wang in view of Perez and Marwah, in order to enhance patient safety.

As per claims 8 and 21, Wang in view of Perez and Marwah may not explicitly disclose wherein the contextual data further includes patient medical history data. Koh, in the same field of endeavor, teaches a system for providing and organizing medical information, wherein contextual data includes patient medical history data ([0092], wherein contextual data relates to previous MRI images, a medical information chart…). Therefore, it would have been obvious at the time the application was filed to use Koh's above features with the system of Wang in view of Perez and Marwah, in order to improve patient care and enable better clinical decision-making.

As per claims 10 and 23, Wang in view of Perez may not explicitly disclose wherein the LLM output includes a first response portion and a second response portion, wherein: the first response portion is related to the user input and is tailored to a specialized audience; and the second response portion is a simplified version of the first response portion and is tailored to a general audience.
Koh, in the same field of endeavor, teaches a system for providing and organizing medical information, wherein a language model output includes a first response portion including more specialized medical information and a second response portion including a simplified version of the first response portion (Fig. 8A, [0086]-[0087]). Therefore, it would have been obvious at the time the application was filed to use Koh's above features with the system of Wang in view of Perez, in order to support quick decision making and at the same time provide deeper context for those who need detailed information.

Conclusion

4. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDELALI SERROU, whose telephone number is (571)272-7638. The examiner can normally be reached M-F 9 AM - 5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre-Louis Desir, can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ABDELALI SERROU/
Primary Examiner, Art Unit 2659
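For orientation, the four steps recited in independent claim 1 (receive input with a category selection, build a prompt carrying a source identifier and instructions, receive a source-limited LLM output, return a user output) can be sketched as a minimal pipeline. Everything here is illustrative: the names, the `TRUSTED_SOURCES` mapping, and the `call_llm` stub are assumptions for exposition, not the applicant's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class UserInput:
    category: str          # the claimed "category selection"
    contextual_data: str   # e.g., triage or history details


# Hypothetical source identifier: claim 2 describes it as a list of
# trusted websites corresponding to the category selection.
TRUSTED_SOURCES = {
    "cardiology": ["example-cardio.org", "example-heart.org"],
}


def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM request."""
    return f"[response generated from prompt of {len(prompt)} chars]"


def respond(user_input: UserInput) -> str:
    # Step 1: receive the user input (category selection + contextual data).
    sources = TRUSTED_SOURCES.get(user_input.category, [])
    # Step 2: build the input prompt with a source identifier and instructions.
    prompt = (
        f"Use only these sources: {', '.join(sources)}.\n"
        f"Category: {user_input.category}\n"
        f"Context: {user_input.contextual_data}\n"
        "Answer the user's question, citing the sources used."
    )
    # Step 3: receive the LLM output generated in response to the prompt.
    llm_output = call_llm(prompt)
    # Step 4: provide a user output based on the LLM output.
    return llm_output
```

Note how the §101 rejection tracks this shape: the only computer elements are the generic processor and the stubbed LLM call, which is why the examiner characterizes the steps as data gathering and output at a high level of generality.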

Prosecution Timeline

Aug 16, 2024
Application Filed
Mar 19, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602544
INFORMATION PROCESSING APPARATUS, OPERATION METHOD, AND RECORDING MEDIUM
2y 5m to grant Granted Apr 14, 2026
Patent 12596875
TECHNIQUES FOR ADAPTIVE LARGE LANGUAGE MODEL USAGE
2y 5m to grant Granted Apr 07, 2026
Patent 12597417
EXPORTING MODULAR ENCODER FEATURES FOR STREAMING AND DELIBERATION ASR
2y 5m to grant Granted Apr 07, 2026
Patent 12596889
GENERATION OF NATURAL LANGUAGE (NL) BASED SUMMARIES USING A LARGE LANGUAGE MODEL (LLM) AND SUBSEQUENT MODIFICATION THEREOF FOR ATTRIBUTION
2y 5m to grant Granted Apr 07, 2026
Patent 12591603
AUTOMATED KEY-VALUE EXTRACTION USING NATURAL LANGUAGE INTENTS
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
74%
Grant Probability
99%
With Interview (+30.4%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 587 resolved cases by this examiner. Grant probability derived from career allow rate.
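The headline 74% figure follows directly from the career allow rate cited in the note above (437 granted of 587 resolved); a quick check, assuming the summary simply rounds to a whole percent:

```python
# Career figures from the examiner card: 437 granted of 587 resolved.
granted, resolved = 437, 587
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # ~74.4%, shown as 74% in the summary
```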
