Prosecution Insights
Last updated: April 19, 2026
Application No. 17/881,746

Prompting Machine-Learned Models Using Chains of Thought

Status: Non-Final OA (§DP)

Filed: Aug 05, 2022
Examiner: BYCER, ERIC J
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: Google LLC
OA Round: 1 (Non-Final)

Grant Probability: 66% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 66%, above average (318 granted / 479 resolved; +11.4% vs TC average)
Interview Lift: +43.6%, strong (allow rate among resolved cases with an interview vs. without)
Typical Timeline: 3y 2m average prosecution; 8 applications currently pending
Career History: 487 total applications across all art units
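The headline figures above are simple ratios over the examiner's career counts, and the remaining numbers follow from them. A short sketch of the arithmetic in Python; note that treating the interview lift as a difference in allow rates is an assumption, since the dashboard does not state its formula:

```python
# Reconstructing the dashboard's examiner metrics from the counts shown above.
granted, resolved = 318, 479

allow_rate = granted / resolved      # 0.6639 -> displayed as "66%"
implied_tc_avg = allow_rate - 0.114  # "+11.4% vs TC avg" implies a ~55% TC average

# "With Interview: 99%" alongside a "+43.6%" lift is consistent with the lift
# being the gap between with- and without-interview allow rates (assumption):
with_interview = 0.99
implied_without_interview = with_interview - 0.436  # ~55.4%

print(f"career allow rate:         {allow_rate:.1%}")                # 66.4%
print(f"implied TC average:        {implied_tc_avg:.1%}")            # 55.0%
print(f"implied no-interview rate: {implied_without_interview:.1%}") # 55.4%
```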

Statute-Specific Performance

§101: 11.4% (-28.6% vs TC avg)
§103: 47.0% (+7.0% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 24.4% (-15.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 479 resolved cases.
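A quick consistency check on this panel, assuming each delta is simply the examiner's rate minus the Tech Center average (an assumption; the dashboard does not state its formula): every implied TC average works out to exactly 40.0%, consistent with the note that the TC figure is a single estimate rather than a per-statute measurement.

```python
# Implied Tech Center averages, assuming delta = examiner_rate - tc_average.
examiner_rate = {"101": 0.114, "103": 0.470, "102": 0.101, "112": 0.244}
delta_vs_tc = {"101": -0.286, "103": 0.070, "102": -0.299, "112": -0.156}

for statute in examiner_rate:
    tc_avg = examiner_rate[statute] - delta_vs_tc[statute]
    print(f"§{statute}: examiner {examiner_rate[statute]:.1%}, implied TC avg {tc_avg:.1%}")
# Every statute yields an implied TC average of 40.0%.
```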

Office Action

Basis: §DP (double patenting)
DETAILED ACTION

This action is responsive to the following communications: Original Application filed on August 5, 2022. All references to this application refer to U.S. Patent Application Publication No. 2023/0394328 A1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending in this case. Claims 1, 19, and 20 are the independent claims. Claims 1-20 are rejected.

Priority

This application claims the benefit of U.S. Provisional Patent Application No. 63/348,637, filed on June 3, 2022.

Information Disclosure Statement

The listing of references in the specification is not a proper information disclosure statement (IDS). 37 CFR 1.98(b) requires a list of all patents, publications, or other information submitted for consideration by the Office, and MPEP § 609.04(a) states, "the list may not be incorporated into the specification but must be submitted in a separate paper." Therefore, unless the references have been cited by the Examiner on form PTO-892, they have not been considered. If the Applicants wish the references to be considered, they should be provided on a future IDS submission.

The following references were listed in the Specification but not provided with an IDS:
- Roy et al., "Reasoning about Quantities in Natural Language," Transactions of the Association for Computational Linguistics (paragraph 0066)
- Geva et al., "Commonsense reasoning: CommonsenseQA and StrategyQA" (paragraph 0089)
- Clark et al., "Think you have solved question answering? Try ARC, the AI2 reasoning challenge" (paragraph 0089)

"Solving general arithmetic word problems" by Roy et al., in the 8/31/2022 IDS, recites a different date than the same reference listed in the Specification (see paragraph 0070). In the written description, the date is recited as December 2, 2015. The 8/31/2022 IDS lists the date as August 20, 2016 [1]. The Examiner requests clarification as to the proper date of the reference, and either correction to the written description or submission of the earlier copy of the reference with a subsequent IDS consistent with the written description.

Additionally, "MAWPS: A Math Word Problem Repository" on the 8/31/2022 IDS contained a typo in the author's name (see 8/31/2022 IDS, page 3: the listing cut off the first letter of the primary author's last name). The Examiner has amended the listing in the annotated IDS and considered the associated reference.

Specification

The disclosure is objected to because of the following informalities:

The second sentence of paragraph 0039 contains a typo and is unclear. It currently recites "In some embodiments, the machine-learned model 100 is configured to attend over the instructive sequence 204 when processing the operative query 112." The reference to 204 should recite "104." It is unclear what "to attend over the instructive sequence" means. Based on the claims, it is believed that this should recite "In some embodiments, the machine-learned model 100 is configured to process the operative query 112 with attention over the instructive sequence 104."

In paragraph 0085, the authors of the paper should be included: "with (3) and (4) from (BIG-bench collaboration, Srivastava et al., 'Beyond the imitation game: Measuring and extrapolating the capabilities of language models…'"

The Specification includes two different "Table 10." The first appears just before paragraph 0096, and the second right after paragraph 0097.
Beginning with the second Table 10, each table number should be incremented by 1. Additionally, all references to the tables (beginning with the references in paragraph 0097) need to be updated as well (see paragraphs 0097, 0098, and 0099). Appropriate corrections are required.

The use of trademarks has been noted in this application. The term should be accompanied by the generic terminology, if appropriate; furthermore, the term should be capitalized wherever it appears or, where appropriate, include a proper symbol indicating use in commerce such as ™, SM, or ® following the term. Although the use of trade names and marks used in commerce (i.e., trademarks, service marks, certification marks, and collective marks) is permissible in patent applications, the proprietary nature of the marks should be respected and every effort made to prevent their use in any manner which might adversely affect their validity as commercial marks. The following were not properly marked:
- PYTHON (paragraphs 0027, 0051 (3 times), and 0132)
- GOOGLE MAPS (Table 5 (twice))
- WIKIPEDIA (paragraphs 0091 and 0092)
Appropriate corrections are required.

Claim Objections

Claims 2, 13-15, and 18-20 are objected to because of the following informalities:

In claims 2, 13-15, and 18, the transition phrase is "comprising." In each case, this should be amended to recite "further comprising."

In claims 19 and 20, the claims recite "one or more memory devices storing non-transitory computer-readable instructions…" In both claims, the "non-transitory" wording should be moved to before "memory devices" (i.e., the memory devices are non-transitory). Therefore, both claims should be amended to recite "one or more non-transitory memory devices storing…"

Appropriate corrections are required.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,346,828. Although the claims at issue are not identical, they are not patentably distinct from each other as described below.

Present claim 1: A computer-implemented method for improved prompting of a machine-learned model, the method comprising: obtaining, by a computing system comprising one or more processors, an instructive sequence descriptive of an instructive query, an instructive response, and an instructive trace of intermediate states from the instructive query to the instructive response; inputting, by the computing system and to a machine-learned model, the instructive sequence and an operative query, wherein the machine-learned model is configured to process the operative query with attention over the instructive sequence; and generating, by the computing system, using the machine-learned model and responsive to the operative query, an operative response.
'828 claim 1: A computer-implemented method for performing image analysis, the method comprising: obtaining, by a computing system comprising one or more processors, an instructive sequence descriptive of an instructive query, an instructive response, and an instructive trace of intermediate states from the instructive query to the instructive response; inputting, by the computing system and to a machine-learned model, the instructive sequence and an operative image processing query comprising image data, wherein the machine-learned model is configured to process the operative image processing query with attention over the instructive sequence; and generating, by the computing system, using the machine-learned model and responsive to the operative image processing query, an operative image processing response.
Comment: The preambles (primary use cases) of the two inventions are different, but are obvious variants. The first limitation is identical. In the second limitation, the only difference concerns the type of query (and the contents therein): the present application concerns text, and '828 concerns image processing, again an obvious variant of the present application. In the third limitation, the only difference again concerns the type of query (and therefore the type of response), image vs. text. Therefore, the claims are obvious variants of each other.

Present claim 2: The computer-implemented method of claim 1, comprising: generating, by the computing system, using the machine-learned model and responsive to the operative query, an operative trace of intermediate states from the operative query to the operative response.
'828 claim 2: The computer-implemented method of claim 1, comprising: generating, by the computing system, using the machine-learned model and responsive to the operative image processing query, an operative trace of intermediate states from the operative query to the operative image processing response.
Comment: Again, the only difference between the claims is the type of query and type of response (image vs. text). The technical aspects remain obvious variants of each other.

Present claim 3: The computer-implemented method of claim 1, wherein the instructive sequence is prepended to the operative query.
'828 claim 3: The computer-implemented method of claim 1, wherein the instructive sequence is prepended to the operative image processing query.
Comment: Again, the only difference between the claims is the type of query (image vs. text). The technical aspects remain obvious variants of each other.

Present claim 4 / '828 claim 4 (identical claim language): The computer-implemented method of claim 2, wherein the instructive trace comprises a chain of intermediate responses to intermediate queries.

Present claim 5 / '828 claim 5 (identical claim language): The computer-implemented method of claim 1, wherein the instructive sequence comprises an input flag and an output flag.

Present claim 6 / '828 claim 6 (identical claim language): The computer-implemented method of claim 1, wherein the instructive sequence comprises a tokenized representation of a natural language.

Present claim 7 / '828 claim 7 (identical claim language): The computer-implemented method of claim 1, wherein the instructive trace comprises one or more intermediate states of one or more variables declared by a computer-executable coding language.

Present claim 8: The computer-implemented method of claim 1, wherein generating the operative response comprises: generating, by the computing system and using the machine-learned model, a plurality of operative responses; and determining, by the computing system, the operative response based on a sample of the plurality of operative responses.
'828 claim 8: The computer-implemented method of claim 1, wherein generating the operative response comprises: generating, by the computing system and using the machine-learned model, a plurality of operative responses; and determining, by the computing system, the operative image processing response based on a sample of the plurality of operative responses.
Comment: The preamble and first-limitation language are identical. In the second limitation, the only difference is the type of response (image vs. text). The technical aspects remain obvious variants of each other.

Present claim 9: The computer-implemented method of claim 8, wherein determining the operative response comprises: determining, by the computing system, a consistency metric based on the sample of the plurality of operative responses.
'828 claim 9: The computer-implemented method of claim 8, wherein determining the operative image processing response comprises: determining, by the computing system, a consistency metric based on the sample of the plurality of operative responses.
Comment: The only difference is in the preamble, again concerning the type of response (text vs. image). The limitation language is identical.

Present claim 10 / '828 claim 10 (identical claim language): The computer-implemented method of claim 8, wherein the sample is based on respective probabilities associated with the plurality of operative responses.

Present claim 11 / '828 claim 11 (identical claim language): The computer-implemented method of claim 9, wherein the consistency metric comprises at least one of: a plurality vote, or a majority vote.

Present claim 12 / '828 claim 12 (identical claim language): The computer-implemented method of claim 9, wherein the consistency metric comprises a vote based on operative responses respectively associated with diverse operative traces.

Present claim 13: The computer-implemented method of claim 1, wherein the operative query is a first query component and the operative response is a first response component, and wherein the method comprises: inputting, by the computing system and to the machine-learned model, the instructive sequence, the first query component, the first response component, and a second query component; and generating, by the computing system, using the machine-learned model and responsive to the second query component, a second response component.
'828 claim 13: The computer-implemented method of claim 1, wherein the operative image processing query is a first query component and the operative image processing response is a first response component, and wherein the method comprises: inputting, by the computing system and to the machine-learned model, the instructive sequence, the first query component, the first response component, and a second query component; and generating, by the computing system, using the machine-learned model and responsive to the second query component, a second response component.
Comment: Again, the only difference is the type of query and type of response (image vs. text); the remaining limitation language is identical. The technical aspects remain obvious variants of each other.

Present claim 14 / '828 claim 14 (identical claim language): The computer-implemented method of claim 13, comprising: generating, by the computing system and responsive to a target query, one or more query components.

Present claim 15 / '828 claim 15 (identical claim language): The computer-implemented method of claim 13, comprising: inputting, by the computing system and to the machine-learned model, a preliminary instructive sequence comprising a preliminary instructive query and a preliminary instructive response, wherein the preliminary instructive response comprises a plurality of preliminary instructive query components.

Present claim 16 / '828 claim 16 (identical claim language): The computer-implemented method of claim 13, wherein the first query component and the second query component are generated with a different machine-learned model other than the machine-learned model used to obtain the first response component and the second response component.

Present claim 17 / '828 claim 17 (identical claim language): The computer-implemented method of claim 14, wherein the second query component corresponds to the target query.

Present claim 18 / '828 claim 18 (identical claim language): The computer-implemented method of claim 13, comprising, for a plurality of iterations: generating, by the computing system, an updated instructive sequence based on combining one or more prior input sequences with one or more output sequences respectively corresponding thereto; inputting, by the computing system and to the machine-learned model, the updated instructive sequence and an additional query component; and generating, by the computing system, using the machine-learned model and responsive to the additional query component, an additional response component.

Present claim 19: One or more memory devices storing non-transitory computer-readable instructions for improved prompting of a machine-learned model, the instructions executable to cause one or more processors to perform operations, the operations comprising: obtaining an instructive sequence descriptive of an instructive query, an instructive response, and an instructive trace of intermediate states from the instructive query to the instructive response; inputting, to a machine-learned model, the instructive sequence and an operative query, wherein the machine-learned model is configured to process the operative query with attention over the instructive sequence; and generating using the machine-learned model and responsive to the operative query, an operative response.
'828 claim 19: One or more memory devices storing non-transitory computer-readable instructions executable to cause one or more processors to perform operations for performing image analysis, the operations comprising: obtaining an instructive sequence descriptive of an instructive query, an instructive response, and an instructive trace of intermediate states from the instructive query to the instructive response; inputting, to a machine-learned model, the instructive sequence and an operative image processing query comprising image data, wherein the machine-learned model is configured to process the operative image processing query with attention over the instructive sequence; and generating, using the machine-learned model and responsive to the operative image processing query, an operative image processing response.
Comment: The preambles (primary use cases) of the two inventions are different, but are obvious variants. The first limitation is identical. In the second and third limitations, the only difference is the type of query and type of response (image vs. text). The technical aspects remain obvious variants of each other.

Present claim 20: A computing system for improved prompting of a machine-learned model, the system comprising: one or more processors; and one or more memory devices storing non-transitory computer-readable instructions that are executable to cause the one or more processors to perform operations, the operations comprising: obtaining an instructive sequence descriptive of an instructive query, an instructive response, and an instructive trace of intermediate states from the instructive query to the instructive response; inputting, to a machine-learned model, the instructive sequence and an operative query, wherein the machine-learned model is configured to process the operative query with attention over the instructive sequence; and generating using the machine-learned model and responsive to the operative query, an operative response.
'828 claim 20: A computing system for performing image analysis, the system comprising: one or more processors; and one or more memory devices storing non-transitory computer-readable instructions that are executable to cause the one or more processors to perform operations, the operations comprising: obtaining an instructive sequence descriptive of an instructive query, an instructive response, and an instructive trace of intermediate states from the instructive query to the instructive response; inputting, to a machine-learned model, the instructive sequence and an operative image processing query comprising image data, wherein the machine-learned model is configured to process the operative image processing query with attention over the instructive sequence; and generating, using the machine-learned model and responsive to the operative image processing query, an operative image processing response.
Comment: The preambles (primary use cases) of the two inventions are different, but are obvious variants. The hardware elements and the first limitation are identical. In the remaining limitations, the only difference is the type of query and type of response (image vs. text). The technical aspects remain obvious variants of each other.
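For orientation on the subject matter: claims 8-12, which are identical across the two applications, recite a self-consistency pattern: sample a plurality of operative responses, then determine the final response with a consistency metric such as a plurality or majority vote. A minimal sketch, assuming a `sample_model` callable (hypothetical, not part of the record) that returns one sampled final answer per call:

```python
from collections import Counter
from typing import Callable, List

def self_consistent_answer(sample_model: Callable[[str], str],
                           prompt: str,
                           num_samples: int = 5) -> str:
    """Generate a plurality of operative responses (claim 8) and determine
    the operative response by majority/plurality vote (claims 9 and 11)."""
    answers: List[str] = [sample_model(prompt) for _ in range(num_samples)]
    # Per claim 12, samples may be reached via diverse operative traces;
    # only the final answers are voted on here.
    winner, _count = Counter(answers).most_common(1)[0]
    return winner
```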
Claims 1-4, 6, 8, 13, 19, and 20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-6, 14-16, and 20 of co-pending Application No. 18/160,776 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other as described below. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Present claim 1: A computer-implemented method for improved prompting of a machine-learned model, the method comprising: obtaining, by a computing system comprising one or more processors, an instructive sequence descriptive of an instructive query, an instructive response, and an instructive trace of intermediate states from the instructive query to the instructive response; inputting, by the computing system and to a machine-learned model, the instructive sequence and an operative query, wherein the machine-learned model is configured to process the operative query with attention over the instructive sequence; and generating, by the computing system, using the machine-learned model and responsive to the operative query, an operative response.
'776 claim 1: A computer-implemented method for improved prompting of a machine-learned model, the method comprising: obtaining, by a computing system comprising one or more processors, an instructive sequence descriptive of an instructive query, an instructive response, and an instructive trace of intermediate states from the instructive query to the instructive response; inputting, by the computing system and to the machine-learned model, the instructive sequence and an operative query, wherein the machine-learned model has been pre-trained using a plurality of diversified objectives; and generating, by the computing system, using the machine-learned model and responsive to the operative query, an operative response.
Comment: The preamble and the first and third limitations are identical. The difference between the two applications is that in the present application, the machine-learned model is configured to process the query with attention, whereas the '776 claim only recites the machine-learned model being pre-trained using diversified objectives. This is an obvious variant of the present invention.

Present claim 2: The computer-implemented method of claim 1, comprising: generating, by the computing system, using the machine-learned model and responsive to the operative query, an operative trace of intermediate states from the operative query to the operative response.
'776 claim 2: The computer-implemented method of claim 1, wherein the machine-learned model is configured to process the operative query with attention over the instructive sequence to generate an operative trace of intermediate states from the operative query to the operative response.
Comment: The difference between the two claims is that the '776 claim now includes the differing language from present claim 1 (process the operative query with attention over the instructive sequence). This further reiterates that the two applications are obvious variants of each other.

Present claim 3: The computer-implemented method of claim 1, wherein the instructive sequence is prepended to the operative query.
Present claim 4: The computer-implemented method of claim 2, wherein the instructive trace comprises a chain of intermediate responses to intermediate queries.
'776 claim 3: The computer-implemented method of claim 1, wherein: the instructive sequence is prepended to the operative query; and the instructive trace comprises a chain of intermediate responses to intermediate queries.
Comment: Claim 3 of the '776 application is a combination of claims 3 and 4 of the present application.

Present claim 6 / '776 claim 4 (identical claim language): The computer-implemented method of claim 1, wherein the instructive sequence comprises a tokenized representation of a natural language.

Present claim 8 / '776 claim 5 (identical claim language): The computer-implemented method of claim 1, wherein generating the operative response comprises: generating, by the computing system and using the machine-learned model, a plurality of operative responses; and determining, by the computing system, the operative response based on a sample of the plurality of operative responses.

Present claim 13 / '776 claim 6 (identical claim language): The computer-implemented method of claim 1, wherein the operative query is a first query component and the operative response is a first response component, and wherein the method comprises: inputting, by the computing system and to the machine-learned model, the instructive sequence, the first query component, the first response component, and a second query component; and generating, by the computing system, using the machine-learned model and responsive to the second query component, a second response component.

Present claim 19: One or more memory devices storing non-transitory computer-readable instructions for improved prompting of a machine-learned model, the instructions executable to cause one or more processors to perform operations, the operations comprising: obtaining an instructive sequence descriptive of an instructive query, an instructive response, and an instructive trace of intermediate states from the instructive query to the instructive response; inputting, to a machine-learned model, the instructive sequence and an operative query, wherein the machine-learned model is configured to process the operative query with attention over the instructive sequence; and generating using the machine-learned model and responsive to the operative query, an operative response.
'776 claim 14: One or more memory devices storing non-transitory computer-readable instructions for improved prompting of a machine-learned model, the instructions executable to cause one or more processors to perform operations, the operations comprising: obtaining an instructive sequence descriptive of an instructive query, an instructive response, and an instructive trace of intermediate states from the instructive query to the instructive response; inputting, to a machine-learned model, the instructive sequence and an operative query, wherein the machine-learned model is configured to process the operative query with attention over the instructive sequence, and wherein the machine-learned model has been pre-trained using a plurality of diversified objectives; and generating using the machine-learned model and responsive to the operative query, an operative response.
Comment: The difference between present claim 19 and '776 claim 14 is that the second limitation additionally recites the machine-learned model being pre-trained using diversified objectives. This is an obvious variant of the present invention.

Present claim 2: The computer-implemented method of claim 1, comprising: generating, by the computing system, using the machine-learned model and responsive to the operative query, an operative trace of intermediate states from the operative query to the operative response.
'776 claim 15: The one or more memory devices of claim 14, wherein the machine-learned model is configured to process the operative query with attention over the instructive sequence to generate an operative trace of intermediate states from the operative query to the operative response.
Comment: Different statutory embodiments, but otherwise obvious variants.

Present claims 3 and 4 (recited above).
'776 claim 16: The one or more memory devices of claim 14, wherein: the instructive sequence is prepended to the operative query; and the instructive trace comprises a chain of intermediate responses to intermediate queries.
Comment: Different statutory embodiments, but otherwise obvious variants. Again, '776 claim 16 is a combination of the language in present application claims 3 and 4 (similar to '776 claim 3; see above).

Present claim 20: A computing system for improved prompting of a machine-learned model, the system comprising: one or more processors; and one or more memory devices storing non-transitory computer-readable instructions that are executable to cause the one or more processors to perform operations, the operations comprising: obtaining an instructive sequence descriptive of an instructive query, an instructive response, and an instructive trace of intermediate states from the instructive query to the instructive response; inputting, to a machine-learned model, the instructive sequence and an operative query, wherein the machine-learned model is configured to process the operative query with attention over the instructive sequence; and generating using the machine-learned model and responsive to the operative query, an operative response.
'776 claim 20: A computing system for improved prompting of a machine-learned model, the system comprising: one or more processors; and one or more memory devices storing non-transitory computer-readable instructions that are executable to cause the one or more processors to perform operations, the operations comprising: obtaining a chain of thought prompt comprising an instructive trace through a series of intermediate states; inputting, to a machine-learned model, the chain of thought prompt, wherein the machine-learned model has been pre-trained using a plurality of diversified objectives; and generating using the machine-learned model and responsive to the chain of thought prompt, an operative response.
Comment: The language difference here does not alter the scope of the two claims, as the written description describes a "chain of thought prompt" as an instructive sequence (including query, response, and intermediate states). Thus the limitations, while using different language, recite essentially the same claim scope. The remaining difference is that '776 recites the machine-learned model being pre-trained using diversified objectives, an obvious variant of the present invention. The claims are therefore obvious variants of each other.

Examiner's Note

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Allowable Subject Matter

Claims 1-20 are indicated as allowable over the prior art. The only remaining rejections are for nonstatutory double patenting and provisional nonstatutory double patenting. The following is a statement of reasons for the indication of allowable subject matter:

The Examiner has carefully examined independent claims 1, 19, and 20. The closest prior art references of record are the following Non-Patent Literature references:
- Ling et al., "Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems," arXiv:1705.04146v3, published on October 23, 2017 (hereinafter Ling) (IDS reference)
- Roy et al., "Solving General Arithmetic Word Problems," arXiv:1608.01413v2, published on August 20, 2016 (hereinafter Roy) (IDS reference) [2]
- Bansal et al., "A Neural Question Answering System for Basic Questions about Subroutines," arXiv:2101.03999v1, published on January 11, 2021 (hereinafter Bansal)
- Madaan et al., "Memory-assisted prompt editing to improve GPT-3 after deployment," arXiv:2201.06009v4, published on March 16, 2022 (hereinafter Madaan)
- Wang et al., "Shepherd Pre-trained Language Models to Develop a Train of Thought: An Iterative Prompting Approach," arXiv:2203.08383v1, published on March 16, 2022 (hereinafter Wang)
Claims 1, 19, and 20 are indicated as allowable over Ling, Roy, Bansal, Madaan, and Wang, at least because the cited combination of references does not teach or suggest the following limitations, recited in one form or another by independent claims 1, 19, and 20:
- obtaining, by a computing system comprising one or more processors, an instructive sequence descriptive of an instructive query, an instructive response, and an instructive trace of intermediate states from the instructive query to the instructive response;
- inputting, by the computing system and to a machine-learned model, the instructive sequence and an operative query, wherein the machine-learned model is configured to process the operative query with attention over the instructive sequence; and
- generating, by the computing system, using the machine-learned model and responsive to the operative query, an operative response.

The Examiner notes that it is not the above limitations in isolation, but rather these limitations as they appear in the specific combinations recited in the independent claims, which define the indicated allowability of the claimed invention.

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicants' disclosure. See form PTO-892.

It is noted that any citation to specific pages, columns, figures, or lines in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)).

Any inquiry concerning this communication or earlier communications from the Examiner should be directed to ERIC J. BYCER, whose telephone number is (571) 270-3741. The Examiner can normally be reached Monday-Thursday 9am-6pm, and alternate Fridays 9am-5pm.

Examiner interviews are available via a variety of formats. See MPEP § 713.01. To schedule an interview, Applicants are encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/InterviewPractice.

If attempts to reach the Examiner by telephone are unsuccessful, the Examiner's supervisor, MATT ELL, can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center to authorized users only. Should you have questions about access to the USPTO patent electronic filing system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/ERIC J. BYCER/
Primary Examiner, Art Unit 2141

[1] This is also the date of the submitted reference from arXiv:1608.01413v2.
[2] This is the reference listed in the Specification with a date that differs from the date listed on the IDS and on the provided document. If the earlier date applies, and a corrected version is supplied per the instructions provided above, the indications of allowable subject matter/allowability will be updated.
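The three limitations indicated as allowable describe the now-familiar few-shot chain-of-thought prompting pattern: an example query, a worked trace of intermediate steps, and an answer are placed in the model's context ahead of the operative query, and an autoregressive model attends over that prefix while generating. A minimal sketch of that flow; generate() is a hypothetical stand-in for any text-generation call, and the Q:/A: markers are illustrative input/output flags (cf. claim 5), not language from the record:

```python
from dataclasses import dataclass

@dataclass
class InstructiveSequence:
    query: str     # instructive query
    trace: str     # instructive trace of intermediate states
    response: str  # instructive response

def build_prompt(examples: list[InstructiveSequence], operative_query: str) -> str:
    """Prepend the instructive sequence(s) to the operative query (claim 3)."""
    shots = "\n\n".join(
        f"Q: {ex.query}\nA: {ex.trace} So the answer is {ex.response}."
        for ex in examples
    )
    return f"{shots}\n\nQ: {operative_query}\nA:"

# Hypothetical usage: any model that conditions on the full prompt will
# "process the operative query with attention over the instructive sequence,"
# since the instructive text sits in its context window.
# operative_response = generate(build_prompt(examples, "..."))
```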

Prosecution Timeline

Aug 05, 2022
Application Filed
Jan 10, 2026
Non-Final Rejection — §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602237: User Interface Extendability Over Wireless Protocol (2y 5m to grant; granted Apr 14, 2026)
Patent 12585983: Methods and Systems for Training a Machine-Learning Method (2y 5m to grant; granted Mar 24, 2026)
Patent 12578833: Systems and Methods for Facilitating Interactions Between Expert and Non-Expert Users (2y 5m to grant; granted Mar 17, 2026)
Patent 12561590: Design Device, Design Method, and Non-Transitory Computer Readable Storage Medium for Designing Fastening Points of a Substrate (2y 5m to grant; granted Feb 24, 2026)
Patent 12547904: Ensuring Data Completeness Using Context Aware Machine Learning Models (2y 5m to grant; granted Feb 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 66%
With Interview: 99% (+43.6%)
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 479 resolved cases by this examiner. Grant probability derived from career allow rate.
