Prosecution Insights
Last updated: April 19, 2026
Application No. 18/767,770

SYSTEMS AND METHODS FOR IMPROVING INTERACTIONS WITH ARTIFICIAL INTELLIGENCE MODELS

Status: Non-Final Office Action, §DP
Filed: Jul 09, 2024
Examiner: ZONG, RUOLEI
Art Unit: 2441
Tech Center: 2400 (Computer Networks)
Assignee: Practical Creativity LLC
OA Round: 1 (Non-Final)
Grant Probability: 87% (favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (814 granted / 938 resolved), above average (+28.8% vs the Tech Center average)
Interview Lift: +12.3% (a moderate lift) among resolved cases with an interview
Typical Timeline: 2y 5m average prosecution
Career History: 953 total applications across all art units; 15 currently pending

Statute-Specific Performance

§101: 12.7% (-27.3% vs TC avg)
§102: 5.8% (-34.2% vs TC avg)
§103: 46.1% (+6.1% vs TC avg)
§112: 16.9% (-23.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 938 resolved cases.
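The headline figures above reduce to simple arithmetic on the examiner's career counts. A minimal sketch, assuming the interview lift is additive in percentage points and that each statute's Tech Center baseline equals the examiner's rate minus the displayed delta (both inferred from the numbers shown, not stated by the source):

```python
# Career allow rate from the resolved-case counts shown above.
granted, resolved = 814, 938
allow_rate_pct = 100 * granted / resolved        # ~86.8, displayed as 87%

# Interview lift appears additive: 87% + 12.3% ~ 99% "with interview".
with_interview_pct = round(allow_rate_pct) + 12.3

# Each statute's Tech Center baseline can be recovered from its delta:
# examiner_rate - delta = TC average.
per_statute = {"101": (12.7, -27.3), "102": (5.8, -34.2),
               "103": (46.1, +6.1), "112": (16.9, -23.1)}
tc_average = {s: round(rate - delta, 1) for s, (rate, delta) in per_statute.items()}

print(round(allow_rate_pct, 1), round(with_interview_pct, 1), tc_average)
```

Notably, all four recovered baselines come out to exactly 40.0, which suggests the displayed deltas were computed against a single Tech Center figure rather than per-statute averages.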

Office Action

Ground of rejection: §DP (nonstatutory double patenting)
DETAILED ACTION

This non-final Office action is responsive to the filing of U.S. Patent Application 18/767,770 on 07/09/2024. Claims 1-20 are pending; claims 1-20 are rejected.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations in claim 19 are:

“receive a input from a user device”;
“query a database for historical data related to the received input”;
“generate a prompt, wherein the prompt is generated based on results associated with the database query”;
“provide, via electronic communication, the generated prompt to the first tuned AI model”;
“obtain at least one short code from the first tuned AI model in response to the provided generated prompt, wherein at least one obtained short code is associated with a first configuration”;
“send, via electronic communication, the generated prompt to one of the tuned library AI models (second tuned AI model) based on the at least one short code associated with the first configuration, wherein the second tuned AI model is trained to generate responses based on the first configuration”;
“obtain, via electronic communication, a response generated from the second tuned AI model, wherein the generated response is based on a statistical inference that is made based on training data and model weights”; and
“transmit, via electronic communication, the generated response to the user device”.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. It appears that Fig. 8 and paragraph [0145] discuss the structure of the device, which can be implemented with the logic shown in Figs. 3-4 to perform the claimed functions. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,052,206 B1 (hereinafter P206). Although the claims at issue are not identical, they are not patentably distinct from each other.

Claim 1 of the Instant Application:

A computer implemented method for improving interactions with artificial intelligence (AI) models, the method comprising:
generating a prompt;
obtaining at least one short code from a first tuned AI model in response to the generated prompt, wherein at least one obtained short code is associated with a first configuration; and
sending, via electronic communication, the generated prompt to another tuned AI models (second tuned AI model) based on the at least one short code associated with the first configuration, wherein the second tuned AI model is trained to generate responses based on the first configuration.

Claim 1 of P206:

A computer implemented method for improving interactions with artificial intelligence (AI) models, the method comprising:
receiving input from a user device;
querying a database for historical data related to the received input;
generating a prompt, wherein the prompt is generated based on results associated with the database query;
providing, via electronic communication, the generated prompt to a first tuned AI model, wherein the first tuned AI model is trained to return at least one of a plurality of short codes, wherein each short code is associated with a particular configuration of a plurality of configurations, wherein the first tuned AI model is trained to select the at least one short code to be returned based, at least in part, on statistical analysis of at least a subset of the plurality of short codes and the generated prompt, wherein each short code is associated with other tuned AI models, and wherein each of the other tuned AI models associated with a short code is associated with a particular configuration of the plurality of configurations;
obtaining at least one short code from the first tuned AI model in response to the generated prompt, wherein at least one obtained short code is associated with a first configuration;
sending, via electronic communication, the generated prompt to one of the other tuned AI models (second tuned AI model) based on the at least one short code associated with the first configuration, wherein the second tuned AI model is trained to generate responses based on the first configuration;
obtaining, via electronic communication, a response generated from the second tuned AI model, wherein the generated response is based on a statistical inference that is made based on training data and model weights; and
transmitting, via electronic communication, the generated response to the user device.

Claim 1 of the instant application is obviously disclosed by patent claim 1 in that claim 1 of the patent contains all the limitations of claim 1 of the instant application. Claim 1 of the instant application therefore is not patentably distinct from the earlier patent claim and as such is unpatentable for obviousness-type double patenting. As to claims 2-20, claims 1-20 of P206 obviously disclose all limitations in claims 2-20 of the instant application. Accordingly, claims 2-20 of the instant application are not patentably distinct from the earlier patent claims and as such are unpatentable for obviousness-type double patenting.

Allowable Subject Matter

Claims 1-20 are allowable over the prior art references of record. Note: the instant application discloses “[t]he first tuned AI model may return a short code associated with the determined configuration of an appropriate response.
In one embodiment, the first tuned AI model may receive input from a therapy chatbot interface and return a short code, wherein the short code indicates a configuration of response the chatbot should return, from a plurality of predefined configurations of therapy chatbot responses” (see paragraph [012] as originally filed).

The following is a statement of reasons for the indication of allowable subject matter: the prior art references of record do not disclose “obtaining at least one short code from a first tuned AI model in response to the generated prompt, wherein at least one obtained short code is associated with a first configuration; and sending, via electronic communication, the generated prompt to another tuned AI models (second tuned AI model) based on the at least one short code associated with the first configuration, wherein the second tuned AI model is trained to generate responses based on the first configuration.”

For example, U.S. Patent Application Publication 2024/0176960 A1 to Maurer et al. (hereinafter Maurer) discloses “techniques for transcribing and/or summarizing multimedia collaboration sessions are discussed herein. For example, users can communicate within a teleconferencing meeting associated with a channel. In some examples, a first machine learning model may be configured to receive audio-visual data and user interaction data (e.g., selected emojis, detected gestures, messages or text input by a user, a thread of messages, etc.) and output a teleconferencing meeting summary” (Abstract). Particularly, Maurer discusses “the process 500 may at operation 504 include performing, using a first trained ML model such as the ML model(s) 142 described above, natural language processing (NLP) on the raw audio-visual data to generate transcript data associated with the teleconferencing meeting” and “the ML model(s) 142 can comprise a single ML model or in some examples can comprise a number of ML models with discrete tasks (e.g., a first ML model can convert speech to text, a second ML model can determine a summary, a third ML model can determine action items/relevant details based on context data from a channel, and the like)” (see [0145]-[0153]). However, Maurer does not disclose the limitations identified above.

U.S. Patent Application Publication 2024/0160902 A1 to PADGETT et al. (hereinafter PADGETT) discloses “Methods and systems for generating output content using a generative artificial intelligence (AI) model based on an input. A similarity-assessment layer at the output of the generative AI model determines a similarity measure for the output content vis-à-vis pre-existing items in a repository. The similarity measure is compared to a threshold value and, responsive to the comparison indicating excessive similarity, one or both of the input and the generative AI model are adjusted, and the generative AI model is re-run to generate new output content” (see Abstract). Particularly, PADGETT discloses “The first generative AI model 402 is used to generate a plurality of outputs. To generate an output, the first generative AI model 402 takes a prompt” (see [0107]-[0113]), “During this first stage, the outputs generated by the first generative AI model 402 form the plurality of outputs, and are stored in memory. A similarity-assessment layer 404 may then be used to evaluate each of the outputs for similarity vis-à-vis a repository 406 of pre-existing items. The evaluation may include calculating a similarity measure that measures the similarity between the result and one of the items in the repository 406. The similarity measure may be a dissimilarity measure in some implementations. Measuring similarity may be based on a distance metric that quantifies the extent to which the output differs from the item from the repository 406” (see [0107]-[0113]), and “The filtered outputs 408 are then used as a second training data set for a second generative AI model 410. By filtering the outputs to exclude outputs too similar to the items from the repository 406, the system 400 produces a set of filtered outputs 408 that are each sufficiently different from the items of the repository 406 that it might be assumed that leakage of any of the outputs from the filtered outputs 408 through the second generative AI model 410 will not result in reproduction of an item from the original training data set” (see [0114]-[0116], [0117]-[0130]). However, PADGETT does not disclose the limitations identified above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RUOLEI ZONG, whose telephone number is (571) 270-7522. The examiner can normally be reached Monday-Friday, 8:30AM-4:30PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vivek Srivastava, can be reached at (571) 272-7304. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/RUOLEI ZONG/
Primary Examiner, Art Unit 2441
1/15/2026
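The architecture recited in the claims, a first tuned "router" model that returns a short code which in turn selects a configuration-specific second model to generate the final response, can be sketched as follows. The keyword-based router, the short codes, and the canned responses are illustrative assumptions only; the application discloses no source code, and a real system would call trained inference endpoints rather than Python callables.

```python
from typing import Callable

def router_model(prompt: str) -> str:
    """First tuned AI model: returns a short code naming a configuration.

    A trained classifier would make this selection statistically; a simple
    keyword check stands in for it here.
    """
    return "EMPATHY" if "feel" in prompt.lower() else "COACHING"

# Library of second-stage models, one per configuration: per the claims,
# each short code is associated with a configuration-specific tuned model.
library: dict[str, Callable[[str], str]] = {
    "EMPATHY": lambda p: f"[empathetic response to: {p}]",
    "COACHING": lambda p: f"[coaching response to: {p}]",
}

def handle_request(user_input: str, history: list[str]) -> str:
    # Generate a prompt from the user input plus queried historical data.
    prompt = " ".join(history + [user_input])
    short_code = router_model(prompt)       # obtain short code from model 1
    second_model = library[short_code]      # dispatch on the short code
    return second_model(prompt)             # obtain and return the response

print(handle_request("I feel stuck lately", []))
```

The dispatch-on-short-code step is what the examiner identified as absent from both Maurer and PADGETT.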
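PADGETT's similarity-assessment layer, as quoted in the Office action, amounts to a simple pipeline: generate candidate outputs, score each against a repository of pre-existing items, and keep only those below a similarity threshold before they become training data for the second generative model. A minimal sketch; the metric (difflib's ratio) and the 0.8 threshold are placeholder assumptions, since PADGETT leaves both open-ended:

```python
from difflib import SequenceMatcher

def similarity(output: str, item: str) -> float:
    # Placeholder metric; PADGETT requires only some similarity/distance measure.
    return SequenceMatcher(None, output, item).ratio()

def filter_outputs(outputs: list[str], repository: list[str],
                   threshold: float = 0.8) -> list[str]:
    """Keep only outputs sufficiently dissimilar from every repository item."""
    return [o for o in outputs
            if all(similarity(o, item) < threshold for item in repository)]

repository = ["the quick brown fox jumps over the lazy dog"]
candidates = [
    "the quick brown fox jumps over the lazy dog!",   # near-duplicate: excluded
    "an entirely unrelated sentence about routing",   # dissimilar: kept
]
# Only the dissimilar candidate survives to train the second generative model.
print(filter_outputs(candidates, repository))
```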

Prosecution Timeline

Jul 09, 2024: Application Filed
Jan 15, 2026: Non-Final Rejection, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596506: Storage System Cloning (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591701: User Steering Through Workspace Orchestration (granted Mar 31, 2026; 2y 5m to grant)
Patent 12592983: Local Device Identifiers in a Storage Network (granted Mar 31, 2026; 2y 5m to grant)
Patent 12580857: Maintaining IP/MAC Association Using ARP Scanning and Spoofing (granted Mar 17, 2026; 2y 5m to grant)
Patent 12574282: Network Component Events with Application Graph Data (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview: 99% (+12.3%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 938 resolved cases by this examiner. Grant probability is derived from the career allow rate.
