Prosecution Insights
Last updated: April 19, 2026
Application No. 18/767,899

END-TO-END AUTOMATED LARGE LANGUAGE MODEL EVALUATION AND DEPLOYMENT

Non-Final OA: §101, §103
Filed: Jul 09, 2024
Examiner: PATEL, SHREYANS A
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: Intuit Inc.
OA Round: 1 (Non-Final)
Grant Probability: 89% (Favorable)
OA Rounds: 1-2
To Grant: 2y 3m
With Interview: 96%

Examiner Intelligence

Grants 89% — above average
Career Allow Rate: 89% (359 granted / 403 resolved; +27.1% vs TC avg)
Interview Lift: +7.4% (moderate; measured over resolved cases with interview)
Avg Prosecution: 2y 3m (typical timeline; 46 currently pending)
Total Applications: 449 (career history, across all art units)
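The card's headline percentages follow directly from the raw counts it lists; a minimal check in Python (illustrative only — variable names are ours, not the tool's):

```python
# Reproducing the dashboard's headline numbers from the raw counts it
# reports: 359 granted of 403 resolved, plus a +7.4-point interview lift.

granted = 359          # "359 granted"
resolved = 403         # "403 resolved"
interview_lift = 7.4   # "+7.4% Interview Lift", in percentage points

career_allow_rate = 100 * granted / resolved          # ~89.1%
with_interview = career_allow_rate + interview_lift   # ~96.5%

print(f"Career allow rate: {career_allow_rate:.0f}%")  # 89%
print(f"With interview: {with_interview:.0f}%")        # 96%
```

Both rounded figures match the 89% and 96% shown on the card.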

Statute-Specific Performance

§101: 21.3% (-18.7% vs TC avg)
§103: 36.0% (-4.0% vs TC avg)
§102: 22.6% (-17.4% vs TC avg)
§112: 8.8% (-31.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 403 resolved cases
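As a quick consistency check, each statute's rate plus its stated "vs TC avg" gap recovers the same baseline, suggesting a single ~40% Tech Center average estimate behind all four deltas (an illustrative calculation on the figures above; the data layout is ours):

```python
# Consistency check on the statute-specific figures shown above: each rate
# plus its "vs TC avg" gap should recover the same Tech Center baseline.
rates = {
    "§101": (21.3, 18.7),  # (rate %, gap below TC avg in points)
    "§103": (36.0, 4.0),
    "§102": (22.6, 17.4),
    "§112": (8.8, 31.2),
}

for statute, (rate, gap) in rates.items():
    print(f"{statute}: implied TC avg = {rate + gap:.1f}%")  # 40.0% in each case
```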

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101. Claims 1, 9 and 17 are directed to an abstract idea. At their core, the claims are essentially information processing and evaluation: generating prompts, obtaining textual responses, determining context, and then making a “validity verdict” and selectively presenting or withholding that response. Mental processes include concepts performed in the human mind such as observation, evaluation, judgment, and opinion; furthermore, the claims recite generic computer components. The claims as written do not clearly integrate that abstract idea into a practical application because the additional elements are stated at a high level of generality (processor(s), UI, first LLM and second LLM, generic prompts, and a generic computing system state). The claims do not recite a particular machine implementation or a specific improvement to computer functionality. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the claims are (i) mere instructions to implement the idea on a computer, and/or (ii) recitations of generic computer structure that serve to perform generic computer functions that are well-understood, routine, and conventional activities previously known to the pertinent industry.
Viewed as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself. Therefore, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. There is further no improvement to the computing device.

Dependent claims 2-8, 10-16 and 18-20 further recite an abstract idea performable by a human and do not amount to significantly more than the abstract idea, as they do not provide steps other than what is conventionally known in natural language processing. Claims 2, 10 and 18: a mental process/information organization step implemented on a generic computer. Claims 3 and 11: an abstract evaluation/judgment concept performed using generic computing components. Claims 4 and 12: a mental process implemented on generic hardware. Claims 5 and 13: do not integrate the abstract idea into a practical application. Claims 6, 14 and 19: a generic computer implementation of the abstract idea without improving computer functionality. Claims 7, 15 and 20: an abstract evaluation/judgment step performed by a generic computer. Claims 8 and 16: limiting the computing system to a tax calculation engine merely applies the abstract idea to a particular field (tax processing) without adding a technological improvement or inventive concept.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7, 9-15 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gray et al. (US 2024/0220735) in view of Inan et al. (“Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations”; Dec. 7, 2023).

Claims 1, 9 and 17: Gray teaches a method comprising:

receiving, by at least one processor, a user query entered through a user interface (UI) ([0034] client device has a user input engine configured to detect user input using user interface input devices, and instances of a query formulated based on typed user input);

generating, by the at least one processor, a first prompt including at least the user query ([0007] [0012] a prompt of “In the context of <query>, summarize” can be processed);

inputting, by the at least one processor, the first prompt to a first large language model (LLM) and receiving a first response from the first LLM ([0007] the cited prompt can be processed using the LLM to generate the NL based summary);

determining, by the at least one processor, a context of a processing state of a computing system corresponding to a state of the UI at a time the user query was entered ([0036] context engine configured to determine a context including the current state of a query session and recent queries; interaction via the client device, a location of the client device, profile data of a profile of the user of the client device); and
generating, by the at least one processor, an answer to the user query and sending the answer to the UI, wherein the answer includes the first response for a valid verdict or omits the first response for an invalid verdict ([0017] [0035] rendering engine provides the NL summary for audible and/or visual presentation; determining whether to and/or how to render NL based summaries based on the confidence measures).

The difference between the prior art and the claimed invention is that Gray does not explicitly teach generating, by the at least one processor, a second prompt including at least the context and the first response; inputting, by the at least one processor, the second prompt to a second LLM different from the first LLM and receiving a second response from the second LLM; and determining, by the at least one processor, a validity verdict of the first response using the second response.

Inan teaches generating, by the at least one processor, a second prompt including at least the context and the first response ([3.1] [pg. 3] the type of classification classifying the agent messages (dubbed responses); the conversation contains turns taken by users and agents (response classification uses the conversation context and the agent response));

inputting, by the at least one processor, the second prompt to a second LLM different from the first LLM and receiving a second response from the second LLM ([Abstract] [3.1] [pgs. 1 & 3] Llama Guard, a Llama2-7b model, classifying the responses generated by LLMs; the output format includes “safe” and “unsafe”); and

determining, by the at least one processor, a validity verdict of the first response using the second response ([3.1] [Abstract] [pgs. 1 & 3] the output should be safe or unsafe; generating binary decision scores).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Gray with the teachings of Inan by modifying the generative summaries for search results as taught by Gray to include generating, by the at least one processor, a second prompt including at least the context and the first response; inputting, by the at least one processor, the second prompt to a second LLM different from the first LLM and receiving a second response from the second LLM; and determining, by the at least one processor, a validity verdict of the first response using the second response, as taught by Inan, for the benefit of encouraging researchers to further develop and adapt them to meet the evolving needs of the community for AI safety (Inan [Abstract]).

Claims 2, 10 and 18: Gray further teaches the method of claim 1, wherein the first prompt further includes at least one instruction for responding to the user query, the context, or a combination thereof ([0007] [0012-0014] a prompt of “answer [query]” can be processed using the LLM in generating the NL based summary; a prompt of “In the context of <query>, summarize <Content A>, <Content B>, <Content C>, and <Content D>” can be processed using the LLM to generate the NL based summary).

Claims 3 and 11: Inan further teaches the method of claim 1, wherein the second prompt further includes at least one evaluation step, at least one inaccuracy criterion, or a combination thereof ([3.1] each task takes a set of guidelines as input, which consist of numbered categories of violation, as well as plain text descriptions as to what is safe and unsafe within that category).
Claims 4 and 12: Gray further teaches the method of claim 1, wherein determining the context comprises: determining the processing state of the computing system ([0036] current state of a query session); determining at least one data entry applicable to the processing state ([0036] profile data, and/or a current location); and defining the context as data describing at least a portion of the processing state and the at least one data entry ([0036] determine a current context based on a current state of a query session (e.g., considering one or more recent queries of the query session), profile data, and/or a current location of the client device 110).

Claims 5 and 13: Gray further teaches the method of claim 1, further comprising: determining, by the at least one processor, the processing state of the computing system by obtaining data from the computing system ([0036] a current state of a query session (location/profile data of the client device 110 (a processor and memory))); wherein the computing system is separate from, and in communication with, at least one device comprising the at least one processor ([Fig. 1] [0038] one or more of the software applications can be hosted remotely (e.g., by one or more servers) and can be accessible by the client device 110 over one or more of the networks 199; Client Device 110; NL based Response System 120; Search System(s) 160).

Claims 6, 14 and 19: Gray further teaches the method of claim 1, wherein: each of the first LLM and the second LLM is separate from, and in communication with, at least one device comprising the at least one processor ([Fig. 1] [0031] implemented remotely from the client device 110 (processor and memory)); and the first LLM utilizes a first model algorithm to generate the first response ([0044-0045] LLM response generation engine using the LLM to generate an NL based summary).
Inan further teaches the second LLM utilizes a second model algorithm to generate the second response ([Abstract] Llama Guard, a Llama2-7b model, instruction-tuned).

Claims 7, 15 and 20: Inan further teaches the method of claim 1, wherein the validity verdict indicates at least one inaccuracy criterion met by the first response ([3.1] if unsafe, the output lists the taxonomy categories violated).

Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Gray et al. (US 2024/0220735) in view of Inan et al. (“Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations”; Dec. 7, 2023), and further in view of Christian et al. (US 2023/0214892).

Claims 8 and 16: Gray and Inan teach all the limitations of claim 1. The difference between the prior art and the claimed invention is that neither Gray nor Inan explicitly teaches wherein the computing system comprises a tax calculation engine (TKE), and the processing state includes at least one of information received by the TKE from the UI, information received by the TKE from at least one additional source, a calculation performed by the TKE, tax data identified by the TKE as being relevant to the user, or a combination thereof.

Christian teaches wherein the computing system comprises a tax calculation engine (TKE), and the processing state includes at least one of information received by the TKE from the UI, information received by the TKE from at least one additional source, a calculation performed by the TKE, tax data identified by the TKE as being relevant to the user, or a combination thereof ([Figs. 9A-9B] [0034-0038] edge version of a tax calculation engine; receiving a tax calculation request from a client application; global tax rules database (tax rate and rule data); configured to calculate tax burden; identify a subset of the tax rate and rule data 28A applicable to each of the subset of products in each of the subset of geographic regions 54).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Gray and Inan with the teachings of Christian by modifying the generative summaries for search results as taught by Gray to include wherein the computing system comprises a tax calculation engine (TKE), and the processing state includes at least one of information received by the TKE from the UI, information received by the TKE from at least one additional source, a calculation performed by the TKE, tax data identified by the TKE as being relevant to the user, or a combination thereof, as taught by Christian, for the benefit of calculating taxes applicable to transactions for goods and services at locations around the world (Christian [0002]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Gardener et al. (US 12/008,332): A method of generating summaries of content items using one or more large language models (LLMs) is disclosed. A first content item is identified. The first content item includes a set of sub-content items. A level of abstraction is determined for the content item. A prompt is automatically engineered for providing to the one or more LLMs. The prompt includes a reference to the first content item and the level of abstraction for the first content item. A response to the prompt is received from the LLM. The response includes a second content item. The second content item includes a representation of the first content item that is generated by the LLM. The representation omits or simplifies one or more of the set of sub-content items based on the level of abstraction. The representation is used to control an output that is communicated to a target device.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHREYANS A PATEL whose telephone number is (571)270-0689.
The examiner can normally be reached Monday-Friday 8am-5pm PST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre Desir, can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

SHREYANS A. PATEL
Primary Examiner
Art Unit 2653

/SHREYANS A PATEL/Examiner, Art Unit 2659

Prosecution Timeline

Jul 09, 2024
Application Filed
Feb 12, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586597
ENHANCED AUDIO FILE GENERATOR
2y 5m to grant • Granted Mar 24, 2026
Patent 12586561
TEXT-TO-SPEECH SYNTHESIS METHOD AND SYSTEM, A METHOD OF TRAINING A TEXT-TO-SPEECH SYNTHESIS SYSTEM, AND A METHOD OF CALCULATING AN EXPRESSIVITY SCORE
2y 5m to grant • Granted Mar 24, 2026
Patent 12548549
ON-DEVICE PERSONALIZATION OF SPEECH SYNTHESIS FOR TRAINING OF SPEECH RECOGNITION MODEL(S)
2y 5m to grant • Granted Feb 10, 2026
Patent 12548583
ACOUSTIC CONTROL APPARATUS, STORAGE MEDIUM AND ACCOUSTIC CONTROL METHOD
2y 5m to grant • Granted Feb 10, 2026
Patent 12536988
SPEECH SYNTHESIS METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
2y 5m to grant • Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview (+7.4%): 96%
Median Time to Grant: 2y 3m
PTA Risk: Low
Based on 403 resolved cases by this examiner. Grant probability derived from career allow rate.
