Prosecution Insights
Last updated: April 19, 2026
Application No. 18/332,549

SYSTEMS AND METHODS FOR DYNAMIC LARGE LANGUAGE MODEL PROMPT GENERATION

Final Rejection — §102, §103
Filed
Jun 09, 2023
Examiner
CHOWDHURY, ZIAUL A.
Art Unit
2192
Tech Center
2100 — Computer Architecture & Software
Assignee
Shopify Inc.
OA Round
4 (Final)
Grant Probability
87% (Favorable)
OA Rounds
5-6
To Grant
3y 1m
With Interview
99%

Examiner Intelligence

Career Allow Rate
87% (473 granted / 544 resolved; +31.9% vs TC avg — above average)
Interview Lift
+36.8% (grant rate among resolved cases with vs. without an interview)
Avg Prosecution
3y 1m (typical timeline; 15 currently pending)
Total Applications
559 (across all art units)
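The card figures above can be reproduced from the raw counts (473 granted of 544 resolved). A quick sanity check in Python; note the Tech Center baseline is inferred here from the stated +31.9% delta rather than given directly:

```python
# Reproduce the examiner stat-card figures from the raw counts above.
granted, resolved = 473, 544

allow_rate = granted / resolved * 100            # career allow rate
print(f"Career allow rate: {allow_rate:.1f}%")   # 86.9%, shown rounded as 87%

# The card reports +31.9% vs the Tech Center average, which implies
# a TC 2100 baseline allow rate of roughly:
implied_tc_avg = allow_rate - 31.9
print(f"Implied TC average: {implied_tc_avg:.1f}%")  # ~55%
```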

Statute-Specific Performance

§101
14.7% (-25.3% vs TC avg)
§103
49.4% (+9.4% vs TC avg)
§102
19.9% (-20.1% vs TC avg)
§112
6.6% (-33.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 544 resolved cases.

Office Action

§102, §103
Detailed Action

1. The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. Applicant's amended claims dated January 20th, 2026, responding to the November 5th, 2025 Office Action, are addressed in the rejections below.

Status of Claims

2. Claims 21, 33 and 40 have been amended. Claims 21-40 and 42 are pending in the application, of which claims 21, 33 and 40 are in independent form. These claims (21-40 and 42) are subject to the following rejection(s) and/or objection(s) indicated under the sections and subsections of No. 3 below.

Response to the Amendments

3. (A) Regarding the art rejection: In regards to claims 21-40 and 42, Applicant's arguments are not persuasive; further, Applicant's amendment necessitated the new grounds of rejection presented in the following art rejection.

(B) Finality: Applicant's arguments filed January 20th, 2026 have been fully considered but they are not persuasive. Further, Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Claim Rejections – 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

4. Claims 21, 33 and 40 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gajek et al. (US Patent No. 11861320 B1, hereinafter Gajek).

Per Claim 21: Gajek discloses: A computer-implemented method (At least see Col. 5:49-50 - FIG. 1 illustrates a document reduction and analysis overview method 100) comprising: obtaining a prompt template for generating a prompt to a large language model (At least see Col. 11:36-38 - a prompt template may include a set of instructions for causing a large language model to generate a correspondence document), the prompt template being associated with a first location within a document having a plurality of sections, each section being associated with a respective location within the document (At least see Col. 30:30-33 - one or more identifiers that uniquely identify individual text portions and/or groups of text portions stored in a document repository or other location accessible to the text generation interface system), the prompt template including instructions for text generation based on the first location within the document and including one or more parameters (At least see Col. 11:34-38 - a portion of input text may be added to a prompt template at an appropriate location. As one example, a prompt template may include a set of instructions for causing a large language model to generate a correspondence document); determining, based on the document, a first value to be used for a first one of the one or more parameters (At least see Col. 16:20-21 - number of text chunks may be strictly capped at the input value); and generating the prompt to cause the large language model to generate text for a first section associated with the first location within the document using the prompt template (At least see Col. 6:27-31 - first subset of the text portions to the text generation modeling system in one or more prompts. The prompts may instruct the text generation modeling system to identify which, if any, of the text portions are relevant to determining an answer to the query), the first value, and a second value that is used for a second one of the one or more parameters (At least see Col. 6:24-31 - second subset of the text portions may be identified by providing some or all of the first subset of the text portions to the text generation modeling system in one or more prompts. The prompts may instruct the text generation modeling system to identify which, if any, of the text portions are relevant to determining an answer to the query).

Per Claim 33: Gajek discloses: A non-transitory computer readable storage medium storing executable instructions (At least see Col. 16:37-38 - one or more non-transitory computer readable media), execution of which by a processor causes the processor to: obtain a prompt template for generating a prompt to a large language model (At least see Col. 11:36-38 - a prompt template may include a set of instructions for causing a large language model to generate a correspondence document), the prompt template being associated with a first location within a document having a plurality of sections, each section being associated with a respective location within the document (At least see Col. 30:30-33 - one or more identifiers that uniquely identify individual text portions and/or groups of text portions stored in a document repository or other location accessible to the text generation interface system), the prompt template including instructions for text generation based on the first location within the document and including one or more parameters (At least see Col. 11:34-38 - a portion of input text may be added to a prompt template at an appropriate location. As one example, a prompt template may include a set of instructions for causing a large language model to generate a correspondence document); determine, based on the document, a first value to be used for a first one of the one or more parameters (At least see Col. 16:20-21 - number of text chunks may be strictly capped at the input value); and generate the prompt to cause the large language model to generate text for a first section associated with the first location within the document using the prompt template (At least see Col. 6:27-31 - first subset of the text portions to the text generation modeling system in one or more prompts. The prompts may instruct the text generation modeling system to identify which, if any, of the text portions are relevant to determining an answer to the query), the first value, and a second value that is used for a second one of the one or more parameters (At least see Col. 6:24-31 - second subset of the text portions may be identified by providing some or all of the first subset of the text portions to the text generation modeling system in one or more prompts. The prompts may instruct the text generation modeling system to identify which, if any, of the text portions are relevant to determining an answer to the query).

Per Claim 40: Gajek discloses: A system (At least see Col. 6:44 - FIG. 2 illustrates a text generation system 200) comprising: at least one hardware processor (At least see Col. 39:2-3 - a system uses a processor in a variety of contexts but can use multiple processors); and at least one non-transitory memory storing instructions, which, when executed by the at least one hardware processor (At least see Col. 16:37-38 - one or more non-transitory computer readable media), cause the system to: obtain a prompt template for generating a prompt to a large language model (At least see Col. 11:36-38 - a prompt template may include a set of instructions for causing a large language model to generate a correspondence document), the prompt template being associated with a first location within a document having a plurality of sections, each section being associated with a respective location within the document (At least see Col. 30:30-33 - one or more identifiers that uniquely identify individual text portions and/or groups of text portions stored in a document repository or other location accessible to the text generation interface system), the prompt template including instructions for text generation based on the first location within the document and including one or more parameters (At least see Col. 11:34-38 - a portion of input text may be added to a prompt template at an appropriate location. As one example, a prompt template may include a set of instructions for causing a large language model to generate a correspondence document); determine, based on the document, a first value to be used for a first one of the one or more parameters (At least see Col. 16:20-21 - number of text chunks may be strictly capped at the input value); and generate the prompt to cause the large language model to generate text for a first section associated with the first location within the document using the prompt template (At least see Col. 6:27-31 - first subset of the text portions to the text generation modeling system in one or more prompts. The prompts may instruct the text generation modeling system to identify which, if any, of the text portions are relevant to determining an answer to the query), the first value, and a second value that is used for a second one of the one or more parameters (At least see Col. 6:24-31 - second subset of the text portions may be identified by providing some or all of the first subset of the text portions to the text generation modeling system in one or more prompts. The prompts may instruct the text generation modeling system to identify which, if any, of the text portions are relevant to determining an answer to the query).

Claim Rejections – 35 USC § 103

5. Claims 22-26, 28-39 and 42 are rejected under 35 U.S.C. 103 as being unpatentable over Gajek et al. (US Patent No. 11861320 B1, hereinafter Gajek) in view of Heller et al. (US Patent Application Publication No. 2024/0273309 A, hereinafter Heller).

Per Claim 22: Gajek sufficiently discloses the method, system, and computer program product as set forth above, but Gajek does not explicitly disclose: selecting, based on the document, multiple candidate values for the second value; providing the candidate values for display; and in response to receiving a selection from among the candidate values, identifying the selected candidate value as the second value.
However, Heller discloses: selecting, based on the document, multiple candidate values for the second value (At least see ¶[0025] - Large language models often receive as input a portion of input text and generate in response a portion of output text); providing the candidate values for display to a user (At least see ¶[0196] - Presenting the summary as output may involve, for instance, presenting the summary in a user interface, outputting the summary via a chat interface); and in response to receiving a selection from among the candidate values, identifying the selected candidate value as the second value (At least see ¶[0071] - client machine may select a particular text generation flow from a list; also see ¶[0090] - A portion of the text is selected at 506. In some embodiments, as discussed herein, text may be pre-divided into text portions).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Heller into Gajek's invention because Heller's teaching provides techniques for generation of novel text via a large language model that include a text generation interface serving as an interface between client machines and a text generation system that is suitable for implementing a large language model that cooperatively maintains continual interactions; as such, the text generation interface system then provides output text to the one or more client machines based on the interaction with the large language model (please see ¶[0020]).

Per Claim 23: Heller also discloses: generating a prediction of the second value based on the document (At least see ¶[0048] - text generation model 276 may be trained to predict successive words in a sentence); determining a confidence score for the prediction of the second value is less than a threshold (At least see ¶[0384] - assign the interrogatory an integer score between 0 and 9, with 0 meaning that there is no text relevant to the interrogatory and 9 meaning that the extracted text is exceptionally relevant); and providing a request to provide input for the second value to a user in response to determining the confidence score is less than the threshold (At least see ¶[0121] - text generation modeling system 270 may be configured such that the entire state of the text generation model needs to fit in a prompt smaller than a designated threshold).

Per Claim 24: Heller also discloses: providing the generated text for display to a user (At least see ¶[0265] - Presenting the consolidated timeline may involve, for instance, displaying the timeline in a user interface, including the timeline in a chat message); and in response to receiving a request from the user to regenerate the text: providing information indicative of the first value for display to the user (At least see ¶[0196] - Presenting the summary as output may involve, for instance, presenting the summary in a user interface, outputting the summary via a chat interface); and in response to receiving, from the user, a third value to replace the determined first value, modifying the prompt to the large language model based on the third value and the prompt template (At least see ¶[0076] - a prompt template may include a set of instructions for causing a large language model to generate a correspondence document. The prompt template may be modified to determine a prompt by adding a portion of input text that characterizes the nature of the correspondence document to be generated).

Per Claim 25: Heller also discloses: receiving an input to add the section to the document, wherein the prompt to the large language model is generated in response to receiving the input (At least see ¶[0076] - prompt template may be modified to determine a prompt by adding a portion of input text that characterizes the nature of the correspondence document to be generated. The added input text may identify information such as the correspondence recipient); generating the first section in the document; and adding the generated text to the generated section of the document (At least see ¶[0104] - text portions may be processed via the method 500 shown in FIG. 5 to ensure that each text portion is smaller than the maximum chunk size. However, a text chunk may already include one or more text portions added to the text chunk in a previous iteration).

Per Claim 26: Heller also discloses: recommending a section type for the section to be added to the document, wherein the recommended section type is one of multiple section types available for adding to the document and wherein each of the multiple types of sections is associated with a different location within the document (At least see ¶[0123] - chat prompt at 806 may involve selecting a chat prompt template configured to instruct the text generation modeling system 270 to suggest one or more skills. The text generation modeling system 270 may indicate the recommended skill or skills via natural language text and/or via one or more skill codes. Then, parsing the chat message at 818 may involve searching the chat response message 816 for the natural language text and/or the one or more skill codes); wherein obtaining the prompt template comprises obtaining the prompt template associated with the first location which is associated with the recommended section type of the section (At least see ¶[0123] - selecting a chat prompt template configured to instruct the text generation modeling system 270 to suggest one or more skills. The text generation modeling system 270 may indicate the recommended skill or skills via natural language text and/or via one or more skill codes. Then, parsing the chat message at 818 may involve searching the chat response message).

Per Claim 28: Heller also discloses: wherein receiving the input to add the first section to the document comprises receiving the first location for the section on the document (At least see ¶[0123] - selecting a chat prompt template configured to instruct the text generation modeling system 270 to suggest one or more skills. The text generation modeling system 270 may indicate the recommended skill or skills via natural language text and/or via one or more skill codes. Then, parsing the chat message at 818 may involve searching the chat response message), and wherein recommending the first section type of the section to be added to the document comprises: selecting the recommended section type based on the first location for the first section (At least see ¶[0076] - a portion of input text may be added to a prompt template at an appropriate location).

Per Claim 29: Heller also discloses: receiving an input to move the first section from the first location to a second location within the document (At least see ¶[0076] - a prompt may be determined by supplementing and/or modifying a prompt template based on the input text. For instance, a portion of input text may be added to a prompt template at an appropriate location); and retrieving a different prompt template that is associated with the second location (At least see ¶[0142] - chat prompt at 806 may involve selecting a chat prompt template configured to instruct the text generation modeling system 270 to revise correspondence).

Per Claim 30: Heller also discloses: wherein the first one of the one or more parameters specifies a tone for the text (At least see ¶[0141] - a message tone for generating the correspondence text), and wherein determining the first value based on the document comprises: sending text retrieved from other sections of the plurality of sections to the large language model for analysis of a tone of the other sections (At least see ¶[0141] - a message tone for generating the correspondence text. Then, the chat response message received at 816 may include novel text for including in the correspondence. The novel text may be parsed and incorporated into a correspondence letter, which may be included with the chat output message sent at 822 and presented to the user); and selecting a tone as the first value based on the analysis of the tone of the other sections performed by the large language model (At least see ¶[0141] - parser may perform operations such as formatting the novel text in a letter format).

Per Claim 31: Heller also discloses: identifying a product category of a product to be described in the first section; and selecting a tone as the first value based on the product category (At least see ¶[0141] - request may also include information such as the recipient of the correspondence, the source of the correspondence, and the content to be included in the correspondence. The content of the correspondence may include, for instance, one or more topics to discuss. The request may also include metadata information such as a message tone for generating the correspondence text).
Per Claim 32: Heller also discloses: wherein determining the first value comprises retrieving the first value from another section of the plurality of sections (At least see ¶[0073] - a search of a database, set of documents, or other data source may be executed based at least in part on one or more search parameters determined based on a request received … the request may identify one or more search terms and a set of documents to be searched using the one or more search terms).

Per Claim 34: Heller also discloses: select, based on the document, multiple candidate values for the second value (At least see ¶[0025] - Large language models often receive as input a portion of input text and generate in response a portion of output text); send the candidate values for display to a user (At least see ¶[0196] - Presenting the summary as output may involve, for instance, presenting the summary in a user interface, outputting the summary via a chat interface); and in response to receiving a selection from among the candidate values, identify the selected candidate value as the second value (At least see ¶[0071] - client machine may select a particular text generation flow from a list; also see ¶[0090] - A portion of the text is selected at 506. In some embodiments, as discussed herein, text may be pre-divided into text portions).

Per Claim 35: Heller also discloses: generate a prediction of the second value based on the document (At least see ¶[0048] - text generation model 276 may be trained to predict successive words in a sentence); determine a confidence score for the prediction of the second value is less than a threshold (At least see ¶[0384] - assign the interrogatory an integer score between 0 and 9, with 0 meaning that there is no text relevant to the interrogatory and 9 meaning that the extracted text is exceptionally relevant); and provide a request to provide input for the second value to a user in response to determining the confidence score is less than the threshold (At least see ¶[0121] - text generation modeling system 270 may be configured such that the entire state of the text generation model needs to fit in a prompt smaller than a designated threshold).

Per Claim 36: Heller also discloses: receive an input to add the section to the document, wherein the prompt to the large language model is generated in response to receiving the input (At least see ¶[0076] - prompt template may be modified to determine a prompt by adding a portion of input text that characterizes the nature of the correspondence document to be generated. The added input text may identify information such as the correspondence recipient); generate the first section in the document; and add the generated text to the generated first section of the document (At least see ¶[0104] - text portions may be processed via the method 500 shown in FIG. 5 to ensure that each text portion is smaller than the maximum chunk size. However, a text chunk may already include one or more text portions added to the text chunk in a previous iteration).

Per Claim 37: Heller also discloses: recommend a section type for the first section to be added to the document, wherein the recommended section type is one of multiple section types available for adding to the document and wherein each of the multiple types of sections is associated with a different location within the document (At least see ¶[0123] - chat prompt at 806 may involve selecting a chat prompt template configured to instruct the text generation modeling system 270 to suggest one or more skills. The text generation modeling system 270 may indicate the recommended skill or skills via natural language text and/or via one or more skill codes. Then, parsing the chat message at 818 may involve searching the chat response message 816 for the natural language text and/or the one or more skill codes); wherein obtaining the prompt template comprises obtaining the prompt template associated with the first location which is associated with the recommended section type of the first section (At least see ¶[0123] - selecting a chat prompt template configured to instruct the text generation modeling system 270 to suggest one or more skills. The text generation modeling system 270 may indicate the recommended skill or skills via natural language text and/or via one or more skill codes. Then, parsing the chat message at 818 may involve searching the chat response message).

Per Claim 38: Heller also discloses: wherein the first one of the one or more parameters specifies a tone for the text (At least see ¶[0141] - a message tone for generating the correspondence text), and wherein determining the first value based on the document comprises: sending text retrieved from other sections of the plurality of sections to the large language model for analysis of a tone of the other sections (At least see ¶[0141] - a message tone for generating the correspondence text. Then, the chat response message received at 816 may include novel text for including in the correspondence. The novel text may be parsed and incorporated into a correspondence letter, which may be included with the chat output message sent at 822 and presented to the user); and selecting a tone as the first value based on the analysis of the tone of the other sections performed by the large language model (At least see ¶[0141] - parser may perform operations such as formatting the novel text in a letter format).
Per Claim 39: Heller also discloses: wherein determining the first value comprises retrieving the first value from another section of the plurality of sections (At least see ¶[0073] - a search of a database, set of documents, or other data source may be executed based at least in part on one or more search parameters determined based on a request received … the request may identify one or more search terms and a set of documents to be searched using the one or more search terms).

6. Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over Gajek et al. (US Patent No. 11861320 B1, hereinafter Gajek) in view of Heller et al. (US Patent Application Publication No. 2024/0273309 A, hereinafter Heller), and further in view of Gurgu et al. (US Patent Application Publication No. 2023/0297887 A1, hereinafter Gurgu).

Per Claim 27: Gajek modified by Heller sufficiently discloses the method as set forth above, but Gajek modified by Heller does not explicitly disclose: using a trained machine learning model to generate the recommended section type based on current contents of the document, wherein the trained machine learning model is trained based on layouts of other documents to predict a likely next section type for the document based on the current contents of the document.

However, Gurgu discloses: using a trained machine learning model to generate the recommended section type based on current contents of the document, wherein the trained machine learning model is trained based on layouts of other documents to predict a likely next section type for the document based on the current contents of the document (At least see ¶[0069] - training system 110 is configured to automatically recommend training questions to the chatbot builder 12 for training the one or more machine learning models of the intent classification system 100. One or more training questions may be generated based on an input prompt. The prompt may be manually and/or automatically formulated using the source data in the knowledge base 14. In this manner, the generated training questions may be catered to the enterprise's business).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Gurgu into Gajek modified by Heller because Gurgu's invention provides several advantageous features, such as: identifying a structure for generating an input; formulating the input according to the structure; providing the input to a first machine learning model; receiving an output from the first machine learning model based on the input; and training a second machine learning model based on the output (please see ¶[0015] and ¶[0018]).

Conclusion

7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZIAUL A. CHOWDHURY, whose telephone number is (571) 270-7750. The examiner can normally be reached 9:30 AM to 6:30 PM, Monday-Friday.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hyung S. Sough, can be reached at 571-272-6799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Status information for published applications may be obtained from the Patent Public Search tool (for all users); a link to the Patent Public Search tool is available at www.uspto.gov/PatentPublicSearch. To find a U.S. patent or U.S. patent application publication, open the Patent Public Search tool by selecting "Start search", then type the U.S. patent or U.S. patent application publication number in the "Search" panel without any punctuation, followed by ".pn.". Should you have questions on access to the system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ZIAUL A CHOWDHURY/
Primary Examiner, Art Unit 2192
03/10/2026
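For readers mapping the independent-claim language to an implementation: the claimed method obtains a prompt template bound to a section location in a document, derives one parameter value from the document itself, accepts a second value separately, and fills the template to produce the LLM prompt. The sketch below is an editorial illustration only, not code from the application or the cited references; every name (`PromptTemplate`, `infer_tone`, `generate_prompt`) is hypothetical, and the document-analysis step that the claims contemplate delegating to a large language model is stubbed with a toy heuristic.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class PromptTemplate:
    """A template bound to one location (section slot) within the document."""
    location: int                # which section the generated text is for
    instructions: str            # instruction text with {param} placeholders
    parameters: Tuple[str, ...]  # names of the parameters the template expects

def infer_tone(sections: Dict[int, str]) -> str:
    """Toy stand-in for 'determining, based on the document, a first value':
    the claims contemplate an LLM analyzing the other sections' tone."""
    text = " ".join(sections.values()).lower()
    return "formal" if "dear" in text else "casual"

def generate_prompt(template: PromptTemplate,
                    sections: Dict[int, str],
                    second_value: str) -> str:
    """Fill the template with a document-derived first value and a
    separately supplied second value (the claim 21 two-parameter pattern)."""
    first_param, second_param = template.parameters
    values = {first_param: infer_tone(sections),
              second_param: second_value}
    return template.instructions.format(**values)

sections = {0: "Dear valued customer, thank you for your order.", 1: ""}
template = PromptTemplate(
    location=1,
    instructions="Write the section at position 1 in a {tone} tone about {topic}.",
    parameters=("tone", "topic"),
)
prompt = generate_prompt(template, sections, second_value="shipping policies")
print(prompt)
# -> Write the section at position 1 in a formal tone about shipping policies.
```

Structuring the first value as document-derived while the second is user-selected mirrors the distinction the examiner's §102/§103 split turns on: Gajek is cited for the document-derived value, Heller for the user-facing candidate selection.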

Prosecution Timeline

Jun 09, 2023
Application Filed
Nov 13, 2024
Response after Non-Final Action
Mar 05, 2025
Non-Final Rejection — §102, §103
May 08, 2025
Response Filed
Aug 09, 2025
Final Rejection — §102, §103
Oct 09, 2025
Response after Non-Final Action
Oct 21, 2025
Request for Continued Examination
Oct 25, 2025
Response after Non-Final Action
Nov 01, 2025
Non-Final Rejection — §102, §103
Jan 12, 2026
Examiner Interview Summary
Jan 12, 2026
Applicant Interview (Telephonic)
Jan 20, 2026
Response Filed
Mar 10, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602312
CONFIGURABLE IDENTIFICATION MECHANISM OF DEBUG PARAMETERS IN MULTI-PROCESS OR MULTI-THREADED DEBUGGING
2y 5m to grant Granted Apr 14, 2026
Patent 12602204
DEVELOPING A SOFTWARE PRODUCT IN A NO-CODE DEVELOPMENT PLATFORM TO ADDRESS A PROBLEM RELATED TO A BUSINESS DOMAIN
2y 5m to grant Granted Apr 14, 2026
Patent 12596344
CONTROL SYSTEM, CONTROL PROGRAM TRANSMISSION METHOD, AND RECORDING MEDIUM
2y 5m to grant Granted Apr 07, 2026
Patent 12591427
PLC-BASED SUPPORT FOR ZERO-DOWNTIME UPGRADES OF CONTROL FUNCTIONS
2y 5m to grant Granted Mar 31, 2026
Patent 12578956
Method and apparatus for firmware patching
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
87%
Grant Probability
99%
With Interview (+36.8%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 544 resolved cases by this examiner. Grant probability derived from career allow rate.
