Prosecution Insights
Last updated: April 19, 2026
Application No. 18/638,961

AUTOMATICALLY GENERATING CONTEXT-BASED DYNAMIC OUTPUTS USING ARTIFICIAL INTELLIGENCE TECHNIQUES

Non-Final OA §103
Filed
Apr 18, 2024
Examiner
NGUYEN, CAM LINH T
Art Unit
2161
Tech Center
2100 — Computer Architecture & Software
Assignee
DELL PRODUCTS, L.P.
OA Round
3 (Non-Final)
Grant Probability: 84% (Favorable)
OA Rounds: 3-4
To Grant: 2y 11m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 84% — above average (651 granted / 778 resolved; +28.7% vs TC avg)
Interview Lift: +13.4% (moderate lift, among resolved cases with interview)
Avg Prosecution: 2y 11m typical timeline (11 currently pending)
Total Applications: 789 across all art units

Statute-Specific Performance

§101: 20.1% (-19.9% vs TC avg)
§103: 34.1% (-5.9% vs TC avg)
§102: 23.5% (-16.5% vs TC avg)
§112: 5.1% (-34.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 778 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/25/2025 has been entered. Claims 6-8, 15, and 20 have been cancelled. Claims 1-5, 9-14, 16-19, and 21-25 are currently pending.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-5, 9-14, 16-19, and 21-25 are rejected under 35 U.S.C. 103 as being unpatentable over Mukherjee et al. (U.S. 2024/0354436 A1) in view of Das et al. (U.S. 2007/0220036 A1), further in view of Nesargi et al. (U.S. 2023/0216956 A1), all previously provided.

As per claims 1, 11, and 16, Mukherjee discloses a method and system comprising:

"obtaining at least one query from at least one user device using at least one user interface": See Fig. 4, step 402, Para. 0124 of Mukherjee, wherein "the document search system 102 may receive, from a user via a user interface, a first user input including a natural language query. For example, the user interface module 104 may receive the first user input for the LLM 130 from the user 150".

"classifying one or more intentions associated with the at least one query by processing at least a portion of the at least one query using one or more artificial intelligence (Para. 0058) techniques": See Fig. 4, steps 404-410, Para. 0066, 0125, 0128-0130 of Mukherjee, wherein the query is parsed into vectors and the context is also obtained from the user. The context also conveys the meaning or intent of the user, since it contains information about the user.
("A Context can include, for example, any information associated with user inputs, prompts, responses, and/or the like, that are generated and/or communicated to/from the user, the document search system, the LLM, and/or any other device or system"; "the document search system 102 may generate a context associated with the first user input received at block 402 ... to include any information associated with the user 150, a user session, or some other characteristics. For example, context may include all or part of a conversation history from one or more sessions with the user 150").

Mukherjee does not clearly disclose "identifying at least one query-related template, from one or more template databases, corresponding to at least one of the one or more classified intentions; accessing one or more data sources identified in the at least one query-related template and executing one or more placeholder queries, associated with the at least one query-related template and the one or more data sources, to fetch data from at least one template-designated portion of the one or more data sources".

However, Das discloses a method and system for troubleshooting to diagnose computer problems, including the teaching of:

"Obtaining at least one query from at least one user device": See Fig. 3, step 200 of Das (user reports incident).

"identifying at least one query-related template, from one or more template databases, corresponding to at least one of the one or more classified intentions": See abstract, Para. 0022, 0029, 0041 of Das, wherein a template is selected based on the user query (intent): ["A manifest template is chosen from a set of manifest templates that best fits the problem trying to be diagnosed. ... The manifest template is customized for the particular incident to create a manifest"; "The system will include a set of preexisting manifest templates. One of these manifest templates will be used to create a manifest in step 204"].
"accessing one or more data sources identified in the at least one query-related template and executing one or more placeholder queries, associated with the at least one query-related template and the one or more data sources, to fetch data from at least one template-designated portion of the one or more data sources": See Para. 0029, 0041, 0052, 0055 of Das, wherein data will be collected from sources (local or remote computer) using the template: ["The manifest, and tools identified by the manifest, will be downloaded to user client device 2. Data collected by the tools downloaded from tool server 34 and data collection according to a manifest will be uploaded to collection server 40 from user client device 2"; "The manifest will identify existing data to collect and tools to run to generate additional data"; "After running the tools in steps 436 and/or 438, any data on the local machine specified by the manifest (data files, data values, registry values, etc.) is obtained"].

"identifying one or more data sources related to one or more of the at least one query and the one or more classified intentions by processing the at least a portion of the at least one query using the one or more artificial intelligence techniques": See Fig. 4, step 406, Para. 0066, 0126 of Mukherjee, wherein the system identifies related documents ("the document search module 106 may execute, using the query vector generated at block 404, a similarity search in a document search model to identify one or more similar document portions").

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Das to the invention of Mukherjee, since both inventions were available and the combination would provide the user with more desirable results and reduce the time spent searching for information.
"dynamically (Para. 0010: large amounts of data are automatically and dynamically calculated interactively in response to user inputs) generating at least one context-based version (prompt) of the at least one query by integrating into at least a portion of the at least one query, content from the at least one query-related template and at least a portion of the data fetched from the at least one template-designated portion of the one or more data sources": See Fig. 4, step 412, Para. 0045, 0091, 0130 of Mukherjee, in combination with Das's teaching (using the template to obtain additional data), wherein a first prompt is generated that includes the user input (query), document portions (sources), and the context (intention of the user) ("the prompt generation module 114 may generate a first prompt for the LLM 130, where the first prompt may include the first user input and/or the similar document portions. Additionally and/or optionally, the prompt generation module 114 may generate the first prompt for the LLM 130 further based on the context associated with the user query or the user 150 that may be optionally generated at block 408").

In case the Applicant maintains that the "prompt" and the "context" in Mukherjee are not equivalent to the "context-based version" and "intention", the Examiner provides another example. Nesargi, in the same field of endeavor, discloses a method and system for facilitating communication between a user and a service provider (see abstract of Nesargi), including the teaching of:

Receiving a search query: See Fig. 3, step 301, Para. 0057 of Nesargi.

Determining the intention of the user: See Para. 0040, 0051, 0052 of Nesargi, wherein "when a caller dials into IVR system 113, IVR system 113 may determine identification and intention"; "training module 207 may include an artificial intelligence (AI) engine configured to use natural language processing to conduct automated conversations with the users.
For example, training module 207 may automatically process responses and may determine the intentions of the callers based on the response".

Dynamically generating at least one context-based version (a dynamic customized script) of the at least one query by integrating at least a portion of the one or more classified intentions and data associated with at least a portion of the one or more identified data sources into at least a portion of the at least one query: See Fig. 3, step 309, Para. 0058, 0060 of Nesargi, wherein "integration platform 109 may generate a presentation of the dynamic customized script in a user interface of a device associated with the agent ... a dynamic customized script is a script that changes in real-time based, at least in part, on contextual information of the user, user interaction with IVR system 113, or a combination thereof".

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Nesargi (if not already taught by Mukherjee) to the invention of Mukherjee/Das, since both inventions were available and the combination would provide the user with more desirable results and reduce the time spent searching for information.

"performing one or more automated actions based at least in part on the at least one dynamically generated context-based version of the at least one query": See Fig. 4, Para. 0049, 0141 of Mukherjee ("the system may use the feedback to fine-tune the performance of the LLM, such as by adjusting or modifying one or more weights associated with the LLM, or trigger training and/or re-training of the LLM"; "the document search system 102 may generate training data and/or updated prompt based at least on the user feedback ... may generate training data and/or updated prompt to the LLM 130 for the LLM 130 to provide an updated output that may fulfill expectation of the user 150").
"wherein performing one or more automated actions comprises modifying one or more portions of the at least one query-related template identified from the one or more template databases based at least in part on at least a portion of the at least one dynamically generated context-based version of the at least one query": See Fig. 3-4, Para. 0044, 0047 of Das, wherein a template can be created, edited, or modified by the system based on the user context (the specific problem/incident reported in Fig. 3): ["A support professional, with the proper access, will be able to create, edit and otherwise use manifest templates and data collection items using Manifest Manager 32"; "support professional will customize a manifest template by removing data collection items or adding data collection items using a graphical user interface (GUI). In step 256, a manifest is generated and stored based on the customization of the manifest template performed in step 254"].

"wherein the method is performed by at least one processing device comprising a processor coupled to a memory": See Fig. 1A-1B of Mukherjee, wherein a computer is provided to perform the functions.

As per claims 2, 12, and 17, "wherein performing one or more automated actions comprises automatically generating at least one response to the at least one dynamically generated context-based version of the at least one query by processing the at least one dynamically generated context-based version of the at least one query using the one or more artificial intelligence techniques, and outputting the at least one response to the at least one user device via the at least one user interface": See Fig. 4, Para.
0049, 0141 of Mukherjee ("the system may use the feedback to fine-tune the performance of the LLM, such as by adjusting or modifying one or more weights associated with the LLM, or trigger training and/or re-training of the LLM"; "the document search system 102 may generate training data and/or updated prompt based at least on the user feedback ... may generate training data and/or updated prompt to the LLM 130 for the LLM 130 to provide an updated output that may fulfill expectation of the user 150").

As per claims 3, 13, and 18, "wherein performing one or more automated actions comprises generating at least one template based at least in part on the at least one dynamically generated context-based version of the at least one query": See Fig. 3-4, Para. 0044, 0047 of Das, wherein "A support professional, with the proper access, will be able to create, edit and otherwise use manifest templates and data collection items using Manifest Manager 32"; "support professional will customize a manifest template by removing data collection items or adding data collection items using a graphical user interface (GUI). In step 256, a manifest is generated and stored based on the customization of the manifest template performed in step 254".

As per claims 4, 14, and 19, "wherein classifying one or more intentions associated with the at least one query comprises processing the at least a portion of the at least one query using one or more large language models (LLMs)": See Para. 0032-0033 of Mukherjee, wherein "the system can enable natural language searching and response, utilizing one or more LLMs, with references to a large set of documents, without being constrained by a size limit on prompts for the LLMs".
As per claim 5, "wherein classifying one or more intentions associated with the at least one query comprises processing the at least a portion of the at least one query using one or more of at least one generative pretrained transformer (GPT) model and one or more bidirectional encoder representations from transformers (BERT) models": See Para. 0038, 0055, 0063 of Mukherjee, wherein "the system may employ a language model such as a LLM (e.g., GPT-2) to vectorize the user query and portions of the set of documents permissioned to the user"; "Examples of models, language models, and/or LLMs that may be used in various implementations of the present disclosure include, for example, Bidirectional Encoder Representations from Transformers (BERT), ..., Generative Pre-trained Transformer 2 (GPT-2), Generative Pre-trained Transformer 3 (GPT-3), Generative Pre-trained Transformer 4 (GPT-4)".

As per claim 9, "wherein performing one or more automated actions comprises automatically training at least a portion of the one or more artificial intelligence techniques using feedback related to the at least one dynamically generated context-based version of the at least one query": See Fig. 4, Para. 0049, 0141 of Mukherjee ("the system may use the feedback to fine-tune the performance of the LLM, such as by adjusting or modifying one or more weights associated with the LLM, or trigger training and/or re-training of the LLM"; "the document search system 102 may generate training data and/or updated prompt based at least on the user feedback ... may generate training data and/or updated prompt to the LLM 130 for the LLM 130 to provide an updated output that may fulfill expectation of the user 150").

As per claim 10, "wherein obtaining at least one query from at least one user device comprises obtaining at least one query from at least one user device using at least one chatbot interface": See Fig.
7-10 of Mukherjee, wherein a chatbot interface is provided.

As per claim 21, "wherein classifying one or more intentions associated with the at least one query comprises processing the at least a portion of the at least one query using one or more of at least one GPT model and one or more BERT models": See Para. 0038, 0055, 0063 of Mukherjee, wherein GPT and BERT models are used.

As per claim 22, "wherein performing one or more automated actions comprises automatically training at least a portion of the one or more artificial intelligence techniques using feedback related to the at least one dynamically generated context-based version of the at least one query": See Fig. 4, Para. 0049, 0141 of Mukherjee ("the system may use the feedback to fine-tune the performance of the LLM, such as by adjusting or modifying one or more weights associated with the LLM, or trigger training and/or re-training of the LLM"; "the document search system 102 may generate training data and/or updated prompt based at least on the user feedback ... may generate training data and/or updated prompt to the LLM 130 for the LLM 130 to provide an updated output that may fulfill expectation of the user 150").

As per claim 23, "wherein obtaining at least one query from at least one user device comprises obtaining at least one query from at least one user device using at least one chatbot interface": See Fig. 7-8 of Mukherjee, wherein a chat interface is provided.

As per claim 24, "The non-transitory processor-readable storage medium, wherein performing one or more automated actions comprises automatically training at least a portion of the one or more artificial intelligence techniques using feedback related to the at least one dynamically generated context-based version of the at least one query": See Fig. 4, Para.
0049, 0141 of Mukherjee ("the system may use the feedback to fine-tune the performance of the LLM, such as by adjusting or modifying one or more weights associated with the LLM, or trigger training and/or re-training of the LLM"; "the document search system 102 may generate training data and/or updated prompt based at least on the user feedback ... may generate training data and/or updated prompt to the LLM 130 for the LLM 130 to provide an updated output that may fulfill expectation of the user 150").

As per claim 25, "The non-transitory processor-readable storage medium, wherein obtaining at least one query from at least one user device comprises obtaining at least one query from at least one user device using at least one chatbot interface": See Fig. 7-8 of Mukherjee, wherein a chat interface is provided.

Response to Arguments

Applicant's arguments filed 11/25/2025 have been fully considered but they are not persuasive.

Applicant argues the cited arts fail to disclose "accessing one or more data sources identified in the at least one query-related template and executing one or more placeholder queries, associated with the at least one query-related template and the one or more data sources, to fetch data from at least one template-designated portion of the one or more data sources". The Examiner respectfully disagrees. As discussed above, Das teaches in Fig. 3 and Fig. 7, Para. 0029, 0041, 0052, 0055, wherein data will be collected from sources (either local or remote computer) using the template from the administrator: ["The manifest, and tools identified by the manifest, will be downloaded to user client device 2.
Data collected by the tools downloaded from tool server 34 and data collection according to a manifest will be uploaded to collection server 40 from user client device 2"; "The manifest will identify existing data to collect and tools to run to generate additional data"; "After running the tools in steps 436 and/or 438, any data on the local machine specified by the manifest (data files, data values, registry values, etc.) is obtained"].

Applicant argues the cited arts fail to disclose "performing one or more automated actions based at least in part on the at least one dynamically generated context-based version of the at least one query comprises modifying one or more portions of the at least one query-related template identified from the one or more template databases based at least in part on at least a portion of the at least one dynamically generated context-based version of the at least one query". The Examiner respectfully disagrees. In Fig. 3-4, Para. 0044, 0047, Das teaches wherein a template can be created and edited (modified) by the system based on the user context (the user continues to upload the specific problem/incident reported in Fig. 3): ["A support professional, with the proper access, will be able to create, edit and otherwise use manifest templates and data collection items using Manifest Manager 32"; "support professional will customize a manifest template by removing data collection items or adding data collection items using a graphical user interface (GUI). In step 256, a manifest is generated and stored based on the customization of the manifest template performed in step 254"].

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAM LINH T NGUYEN, whose telephone number is (571) 272-4024. The examiner can normally be reached M-F, 7:00 am - 3:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Apu Mofiz, can be reached at 571-272-4024. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CAM LINH T NGUYEN/
Primary Examiner, Art Unit 2161
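The claim-1 flow that the rejection maps against Mukherjee, Das, and Nesargi is a multi-step pipeline: obtain a query, classify its intent, identify an intent-matched template, execute the template's placeholder queries against its designated data sources, and integrate the fetched data into a context-based version of the query. The following minimal sketch illustrates that flow only; every name, the toy intent classifier, and the in-memory template store are hypothetical (the application and office action disclose no code):

```python
# Hypothetical sketch of the claimed pipeline: query -> intent classification
# -> template lookup -> placeholder-query execution -> context-based query.
# All data structures and function names are illustrative, not from the filing.

TEMPLATE_DB = {
    # intent -> template identifying data sources and placeholder queries
    "troubleshoot": {
        "sources": {"kb": {"reset steps": "power-cycle the device"}},
        "placeholder_queries": [("kb", "reset steps")],
        "preamble": "Using the troubleshooting guide:",
    },
}

def classify_intent(query: str) -> str:
    # Stand-in for the claimed AI-based intent classifier (e.g., an LLM).
    return "troubleshoot" if "fix" in query.lower() else "lookup"

def build_context_based_query(query: str) -> str:
    intent = classify_intent(query)           # classify intentions
    template = TEMPLATE_DB[intent]            # identify query-related template
    # Execute placeholder queries against template-designated source portions
    fetched = [template["sources"][src][key]
               for src, key in template["placeholder_queries"]]
    # Integrate template content and fetched data into the query
    return " ".join([template["preamble"], *fetched, "User asked:", query])

print(build_context_based_query("How do I fix my laptop?"))
# -> Using the troubleshooting guide: power-cycle the device User asked: How do I fix my laptop?
```

In this toy mapping, the template's "placeholder_queries" play the role Das's manifest plays (identifying which data to collect from which source), and the assembled string plays the role of Mukherjee's prompt.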

Prosecution Timeline

Apr 18, 2024
Application Filed
Apr 26, 2025
Non-Final Rejection — §103
Jul 19, 2025
Interview Requested
Jul 30, 2025
Examiner Interview Summary
Jul 30, 2025
Applicant Interview (Telephonic)
Jul 31, 2025
Response Filed
Sep 25, 2025
Final Rejection — §103
Nov 11, 2025
Interview Requested
Nov 20, 2025
Applicant Interview (Telephonic)
Nov 20, 2025
Examiner Interview Summary
Nov 25, 2025
Response after Non-Final Action
Dec 16, 2025
Request for Continued Examination
Dec 31, 2025
Response after Non-Final Action
Mar 03, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596707
STRUCTURED QUERY LANGUAGE GENERATION USING LARGE LANGUAGE MODELS
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12585641
GENERATIVE ARTIFICIAL INTELLIGENCE BASED CONVERSION OF NATURAL LANGUAGE REQUESTS TO DATA WAREHOUSE QUERY INSTRUCTION SETS
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12585626
SYSTEM AND METHOD FOR ENRICHING AND NORMALIZING DATA
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12561297
RULE REMEDIATION ACTIONS
Granted Feb 24, 2026 • 2y 5m to grant
Patent 12530375
AUTOMATIC ANALYZER OF MULTIDIMENSIONAL CYTOMETRY DATA
Granted Jan 20, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 84%
With Interview: 97% (+13.4%)
Median Time to Grant: 2y 11m
PTA Risk: High
Based on 778 resolved cases by this examiner. Grant probability derived from career allow rate.
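The projection figures follow directly from the examiner's career numbers stated above: the grant probability is the career allow rate (651 granted of 778 resolved), and the with-interview figure adds the +13.4 percentage-point interview lift. A quick check of that arithmetic:

```python
# Derive the dashboard's projection figures from the examiner's career record.
granted, resolved = 651, 778           # career record from the examiner stats
allow_rate = 100 * granted / resolved  # career allow rate, in percent
interview_lift = 13.4                  # percentage-point lift with interview

print(round(allow_rate))                   # 84
print(round(allow_rate + interview_lift))  # 97
```

So 651/778 ≈ 83.7% rounds to the 84% grant probability shown, and adding the 13.4-point lift yields the 97% with-interview figure.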
