Prosecution Insights
Last updated: April 19, 2026
Application No. 18/481,803

PROMPT COMPLEXITY FOR LARGE LANGUAGE MODELS

Final Rejection — §101, §103
Filed
Oct 05, 2023
Examiner
MCCORD, PAUL C
Art Unit
2692
Tech Center
2600 — Communications
Assignee
Google LLC
OA Round
2 (Final)
69%
Grant Probability
Favorable
3-4
OA Rounds
3y 5m
To Grant
96%
With Interview

Examiner Intelligence

Grants 69% — above average
69%
Career Allow Rate
393 granted / 569 resolved
+7.1% vs TC avg
Strong +27% interview lift
Without
With
+26.6%
Interview Lift
resolved cases with interview
Typical timeline
3y 5m
Avg Prosecution
41 currently pending
Career history
610
Total Applications
across all art units

Statute-Specific Performance

§101
10.5%
-29.5% vs TC avg
§103
54.0%
+14.0% vs TC avg
§102
6.8%
-33.2% vs TC avg
§112
20.9%
-19.1% vs TC avg
Black line = Tech Center average estimate • Based on career data from 569 resolved cases

Office Action

§101 §103
DETAILED ACTION

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 remain rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1, 11, and 15 are directed to a system, method, and medium for decomposing an input request into component portions, such as for completing the request by addressing each portion. The claims rely on well-understood, routine, and conventional structures such as a processor, memory, data structure, etc. to instruct the system along methods by which the input is reified into differently represented, more granular data by application of well-understood, routine, and conventional instructions such as software routines. The claims are considered a manner by which data resolves more data, in this case a subset of the original data; they are also considered a stand-in for human behavior, as the claim steps are substantially similar to the manner in which a human being would parse a complex task. As such, the claims cannot be considered to integrate the judicial exceptions of an abstract idea (such as data per se or programs per se) or the judicial exception of human activity and/or mental processes (such as operations performed in the human mind, human activity, human behavior, etc.), as the claims do not include substantially more than the performance of such exceptions upon a computer claimed at a high level of generality. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The dependent claims further address additional subject matter which does not remedy the deficiency, as the claimed functionality may be seen as a stand-in for human behavior such as a human asking for help in the face of complexity, human application of mathematical concepts, human learning of less familiar concepts, etc. As such, claims 2-10, 12-14, and 16-20 do not remedy the deficiency and are similarly rejected. The amendments filed 11/12/25 do not amount to significantly more, and as such the claims remain interpreted under the judicial exception as corresponding to a process of changing data into subsequent data and/or a stand-in for human behaviors, as discussed supra.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-14 are rejected under 35 U.S.C. 103 as being unpatentable over Khot: Decomposed Prompting (provided by Applicant in IDS filed 12/23/2024 and hereinafter Kho), further in view of Sahar: 11941380 (hereinafter Sah), and further in view of Reza: 20230237277 (hereinafter Rez).

Regarding claim 1, Kho teaches: A method implemented by one or more processors (Kho: Abstract), the method comprising: receiving an input prompt for a large language model, LLM (Kho: Fig 2, 3: such as receipt of a lexical request by an inference model); decomposing the input prompt into subprompts, the subprompts comprising a plurality of nodes of sub-prompts that form the input prompt (Kho: § 3.3; Fig 4: system decomposes input into a plurality of sub-tasks), the plurality of nodes comprising: a plurality of nodes corresponding to simple sub-prompts of the input query, a plurality of branching nodes of sub-prompts each corresponding to multiple simple sub-prompts, and a root node corresponding to the input prompt (Kho: § 3.3; Fig 4: such as for forming leaf processes corresponding to upstream sub-tasks); including the input prompt in a set of training prompts and/or a set of evaluation prompts (Kho: § 3.3; Fig 4: input prompts, and sub-tasks thereof, displayed to a user as part of evaluative processes). Kho measures task accuracy (Kho: § 4.1, 4.3) as a form of complexity and iteratively decomposes complex task prompts into increasingly simpler sub-tasks, sub-prompts, etc. to improve the accuracy based on a comparison with a threshold complexity, accuracy, etc.
(Kho: Abstract; § 1; Fig 1, 2) until a prompt length is of sufficient accuracy (Kho: § 3.3, 4.2) and based on a length threshold (Kho: § G.2.1), and thus teaches: determining a prompt complexity based on length by comparing the prompt complexity to a threshold complexity (Kho: § G.2.1: prompted tasks decomposed based on a specified minimum task complexity, length, etc.) of the prompt tree; in response to determining, based on the comparing, that the prompt complexity is above the threshold complexity (Kho: § G.2.1: such as by determining a sequence of tasks greater than a threshold length), including the input prompt in a set of evaluation prompts (Kho: § G.2.1: such as by subsequently evaluating additional portions of tasks decomposed based on length).

Kho does not explicitly teach determining complexity based on a path length of the prompt tree, as Kho embraces but does not rely on a tree-like data structure, nor does Kho discuss causing the input prompt to be stored in a set of training prompts and/or a set of evaluation prompts. In a related field of endeavor, Sah teaches a system and method for configuring code based on decomposition thereof into subfunctions and performance of subfunction analysis, wherein a threshold degree of complexity is expressed as a quantified function length with respect to a syntax tree (Sah: Col 3:20-3:30), and wherein the threshold length, complexity, etc. of a syntax tree is variously measured in terms of number of tree branches, branch length, sub-tree size, etc. (Sah: Col 3:20-3:30, 15:17-16:38). It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to decompose an input prompt of Kho utilizing a tree-like data structure such as that of Sah, for at least the purpose of performing tree and/or graph-like operations, data analysis steps, etc. thereon; one of ordinary skill in the art would have expected only predictable results therefrom.
Kho in view of Sah does not explicitly discuss causing the input prompt to be stored in a set of training prompts and/or a set of evaluation prompts. In a related field of endeavor, Rez teaches a system and method for developing improved training prompts based on concatenating an input prompt which has been stored as training data, including extracted relevant features of the input, with the input prompt used to generate additional training data in the form of a dynamic prompt (Rez: Abstract; ¶ 36-38, 61; Fig 1); comprising receiving an input prompt for a large language model (Rez: ¶ 36-38); generating an additional dynamic input prompt (Rez: ¶ 55, 61; Fig 1, 2); and concatenating the dynamic prompt with the input prompt to generate a prompting function which is stored and communicated to a pre-training system, such as for training or fine-tuning the model (Rez: ¶ 3, 55, 61). Rez additionally teaches iteratively training the model to arrive or converge upon a threshold accuracy (Rez: ¶ 76). It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to improve the Kho in view of Sah system and method to include augmenting input prompts to generate the Rez taught or suggested prompting function, and to save same as additional training data, for at least the purpose of training, testing, and/or deploying a system based thereon, improved thereby, etc., by optimizing, fine-tuning, etc. the system based on a threshold accuracy, a threshold complexity, and/or a threshold with respect to a threshold accuracy; one of ordinary skill in the art would have expected only predictable results therefrom.
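The claim 1 flow at issue — decompose an input prompt into a tree of sub-prompts, score its complexity, and route the prompt to a training or evaluation set against a threshold — can be sketched in a few lines. This is an illustrative reading only: the names, the binary split, and the word-count simplicity test are invented stand-ins, not drawn from the application or the cited references.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node of a hypothetical prompt tree: its text plus child sub-prompts."""
    text: str
    children: list = field(default_factory=list)

def decompose(prompt: str, max_words: int = 3) -> Node:
    """Recursively split a prompt until each leaf is 'simple' (here, short)."""
    node = Node(prompt)
    words = prompt.split()
    if len(words) > max_words:
        mid = len(words) // 2
        node.children = [decompose(" ".join(words[:mid]), max_words),
                         decompose(" ".join(words[mid:]), max_words)]
    return node

def complexity(root: Node) -> int:
    """Prompt complexity as the longest root-to-leaf path length."""
    if not root.children:
        return 0
    return 1 + max(complexity(c) for c in root.children)

def route(prompt: str, threshold: int, training: list, evaluation: list) -> None:
    """Above-threshold prompts are held out for evaluation; the rest train."""
    if complexity(decompose(prompt)) > threshold:
        evaluation.append(prompt)
    else:
        training.append(prompt)
```

Under this toy simplicity test, a long prompt yields a deeper tree and lands in the evaluation set, while a short one stays in the training set.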
Regarding claim 2, Kho in view of Sah in view of Rez teaches or suggests: The method of claim 1, wherein the simple sub-prompts that correspond to the leaf nodes of the prompt tree have at least a target simplicity (Kho: § 3.3; Fig 4: tasks iteratively decomposed until a threshold accuracy, simplicity, etc.); (Sah: Col 3:20-3:30, 15:17-16:38: system iteratively arrives at a terminal sub-function for execution). Examiner takes official notice that leaf nodes, representative of terminal or end-of-chain nodes in a tree, such as at the far terminus of a branch, were well known in the art before the effective filing date and would have comprised an obvious inclusion for at least the purpose of representing a fully decomposed function, terminal prompt, etc.; one of ordinary skill in the art would have expected only predictable results from such an inclusion. The claim is thus considered obvious over Kho as modified by Sah and/or Rez as addressed in the base claim, as it would have been obvious to apply the further teaching of Kho, Sah, and/or Rez to the modified device of Kho, Sah, and Rez; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 3, Kho in view of Sah in view of Rez teaches or suggests: The method of claim 2, wherein decomposing the input prompt into the prompt tree comprises: decomposing the input prompt into a first set of sub-prompts using the LLM; and iteratively decomposing, using the LLM, each sub-prompt into a further set of sub-prompts until the target simplicity is reached (Kho: § 3.3; Fig 4: tasks iteratively decomposed until a threshold accuracy, that is, until determined sufficient for a highly accurate model); (Sah: Col 3:20-3:30, 15:17-16:38: system iteratively arrives at a terminal sub-function for execution).
The claim is thus considered obvious over Kho as modified by Sah and/or Rez as addressed in the base claim, as it would have been obvious to apply the further teaching of Kho, Sah, and/or Rez to the modified device of Kho, Sah, and Rez; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 4, Kho in view of Sah in view of Rez teaches or suggests: The method of claim 2, further comprising determining that a given sub-prompt, of the sub-prompts, has the target simplicity, determining that the given sub-prompt has the target simplicity comprising one or more of: determining that no further decomposition of the given sub-prompt is achievable by the LLM; determining that the given sub-prompt falls within a domain of expertise of one or more expert models accessible by the LLM; and/or determining that the LLM classifies the sub-prompt as a simple sub-prompt (Kho: § 3.3; Fig 4: tasks iteratively decomposed until a threshold accuracy, that is, until determined sufficient for a highly accurate model); (Sah: Col 3:20-3:30, 15:17-16:38). The claim is thus considered obvious over Kho as modified by Sah and/or Rez as addressed in the base claim, as it would have been obvious to apply the further teaching of Kho, Sah, and/or Rez to the modified device of Kho, Sah, and Rez; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 5, Kho in view of Sah in view of Rez teaches or suggests: The method of claim 1, wherein determining the prompt complexity based on a path length of the prompt tree comprises: determining the path length of the prompt tree, comprising summing a plurality of leaf path lengths, each leaf path length corresponding to a path from the root node to a respective leaf node (Sah: Col 3:20-3:30, 15:17-16:38: such as for determining a length of a branch or branches, number of nodes thereon, size of a sub-tree, etc.).
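Claim 5's measure — summing the root-to-leaf path lengths of the prompt tree — is easy to state concretely. A minimal sketch, assuming (as an invented representation) that the tree is a nested dict mapping each sub-prompt to its children:

```python
def leaf_path_lengths(tree: dict, depth: int = 0) -> list:
    """Return the root-to-leaf path length (depth) of every leaf in the tree."""
    if not tree:                       # empty dict marks a leaf node
        return [depth]
    lengths = []
    for subtree in tree.values():
        lengths.extend(leaf_path_lengths(subtree, depth + 1))
    return lengths

def path_length(tree: dict) -> int:
    """Claim-5-style complexity: the sum of all leaf path lengths."""
    return sum(leaf_path_lengths(tree))

# Hypothetical decomposition: one simple step plus one branching step.
tree = {"step 1": {}, "step 2": {"step 2a": {}, "step 2b": {}}}
# Leaves sit at depths 1, 2, and 2, so the summed path length is 5.
```

The same traversal could instead report branch counts or sub-tree sizes, the alternative measures the action attributes to Sah.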
The claim is thus considered obvious over Kho as modified by Sah and/or Rez as addressed in the base claim, as it would have been obvious to apply the further teaching of Kho, Sah, and/or Rez to the modified device of Kho, Sah, and Rez; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 6, Kho in view of Sah in view of Rez teaches or suggests: The method of claim 5, wherein determining the prompt complexity based on the path length of the prompt tree comprises: determining a logarithm of the path length (Kho: § E, G.2.1: system operates to decompose over a branch in log(n) time; that is, the length of the execution path is logarithmically determined over execution, simulation, etc.). Examiner has taken official notice, which Applicant has failed to timely and explicitly traverse and which is thus accepted as Admitted Prior Art (APA: please see MPEP 2144.03), that relying on a determined logarithm would have comprised an obvious inclusion, such as for limiting an overall compute time, complexity, etc. of a particular coded instruction or subset thereof. The claim is thus considered obvious over Kho as modified by Sah and/or Rez as addressed in the base claim, as it would have been obvious to apply the further teaching of Kho, Sah, and/or Rez to the modified device of Kho, Sah, and Rez; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 7, Kho in view of Sah in view of Rez teaches or suggests: The method of claim 1, wherein determining the prompt complexity based on the path length of the prompt tree comprises: averaging the complexity over a plurality of decodings of the input prompt.
Examiner has taken official notice, which Applicant has failed to timely and explicitly traverse and which is thus accepted as Admitted Prior Art (APA: please see MPEP 2144.03), that relying on an average would have comprised an obvious inclusion, such as for determining a reasonable manner in which to parse a task. The claim is thus considered obvious over Kho as modified by Sah and/or Rez as addressed in the base claim, as it would have been obvious to apply the further teaching of Kho, Sah, and/or Rez to the modified device of Kho, Sah, and Rez; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 8, Kho in view of Sah in view of Rez teaches or suggests: The method of claim 1, wherein the input prompt is included in the set of training prompts in response to determining that the prompt complexity is above the threshold complexity, and the method further comprises: training parameters of the LLM, and/or of an additional LLM, based on the set of training prompts. Examiner has taken official notice, which Applicant has failed to timely and explicitly traverse and which is thus accepted as Admitted Prior Art (APA: please see MPEP 2144.03), that iterative training based on output parameters of a model would have comprised an obvious inclusion for at least the purpose of operating within well-known machine learning paradigms to generate predictable results. The claim is thus considered obvious over Kho as modified by Sah and/or Rez as addressed in the base claim, as it would have been obvious to apply the further teaching of Kho, Sah, and/or Rez to the modified device of Kho, Sah, and Rez; one of ordinary skill in the art would have expected only predictable results therefrom.
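The claim 6 and claim 7 variants — taking a logarithm of the path length, and averaging the complexity over multiple decodings of the same prompt — can be illustrated together. The stand-in decoder below is invented for the sketch: it simply draws a path length at random, standing in for an LLM whose decompositions vary between decodings.

```python
import math
import random

def prompt_complexity(path_length: int) -> float:
    """Claim-6-style measure: the (base-2) logarithm of the path length."""
    return math.log2(path_length)

def averaged_complexity(decode, prompt: str, n: int = 8) -> float:
    """Claim-7-style measure: average complexity over n decodings, each of
    which may produce a different tree and hence a different path length."""
    return sum(prompt_complexity(decode(prompt)) for _ in range(n)) / n

# Invented stand-in decoder: each "decoding" yields a path length of 4 or 8.
rng = random.Random(0)
decode = lambda prompt: rng.choice([4, 8])
```

Because every simulated path length falls between 4 and 8, the averaged complexity always lies between log2(4) = 2 and log2(8) = 3.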
Regarding claim 9, Kho in view of Sah in view of Rez teaches or suggests: The method of claim 1, wherein the input prompt is included in the set of evaluation prompts in response to determining that the prompt complexity is above the threshold complexity, and the method further comprises: evaluating a performance of the LLM based on the set of evaluation prompts (Kho: § 3.3; Fig 4: input prompts, and sub-tasks thereof, displayed to a user as part of evaluative processes). The claim is thus considered obvious over Kho as modified by Sah and/or Rez as addressed in the base claim, as it would have been obvious to apply the further teaching of Kho, Sah, and/or Rez to the modified device of Kho, Sah, and Rez; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 10, Kho in view of Sah in view of Rez teaches or suggests: The method of claim 1, wherein the threshold complexity is a dynamic threshold complexity that is based on a performance of the LLM (Kho: Abstract; § 3.1-3.4: system dynamically decomposes a prompt based on a determined threshold); (Sah: Col 3:20-3:30, 15:17-16:38: system dynamically analyzes subfunctions, such as with respect to execution time, performance, etc. thereof, for the purpose of managing and/or reducing complexity therein, such as to reduce length thereof). The claim is thus considered obvious over Kho as modified by Sah and/or Rez as addressed in the base claim, as it would have been obvious to apply the further teaching of Kho, Sah, and/or Rez to the modified device of Kho, Sah, and Rez; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 11, the claim is considered to recite substantially similar subject matter to that of claim 1 and is similarly rejected.
Regarding claim 12, Kho in view of Sah teaches or suggests: The system of claim 11, wherein the simple sub-prompts that correspond to the leaf nodes of the prompt tree have at least a target simplicity (Kho: Abstract; § 3.3, 3.4; Fig 6, 11: decomposition of prompts into simpler sub-prompts); (Sah: Col 3:20-3:30, 15:17-16:38: decomposition of code into sub-functions). The claim is thus considered obvious over Kho as modified by Sah and/or Rez as addressed in the base claim, as it would have been obvious to apply the further teaching of Kho, Sah, and/or Rez to the modified device of Kho, Sah, and Rez; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 13, the claim is considered to recite substantially similar subject matter to that of claim 2 and is similarly rejected.

Regarding claim 14, the claim is considered to recite substantially similar subject matter to that of claim 4 and is similarly rejected.

Claims 15 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Khot: Decomposed Prompting (provided by Applicant in IDS filed 12/23/2024 and hereinafter Kho), further in view of DeLuca: 20180330011 (hereinafter Del).
Regarding claim 15, Kho teaches: A method implemented by one or more processors (Kho: Abstract), the method comprising: receiving, from a client device, an input prompt for a large language model, LLM (Kho: Fig 2, 3: such as receipt of a lexical request by an inference model); decomposing, using the LLM, the input prompt into a plurality of simple sub-prompts (Kho: § 3.3; Fig 4: system decomposes input into a plurality of sub-tasks); wherein decomposing the input prompt into the plurality of simple sub-prompts comprises decomposing the input prompt until a given sub-prompt, of the plurality of simple sub-prompts, has a target simplicity (Kho: Abstract; § 3.2-3.4; Fig 6, 11: decomposition of prompts into simpler sub-prompts if necessary, that is, until sufficiently simple; the system "allows each prompt to be optimized for its specific sub-task, further decomposed if necessary, and even easily replaced with more effective prompts, trained models, or symbolic functions"), and wherein determining that the given sub-prompt has the target simplicity comprises: determining that the given sub-prompt falls within the scope of a particular library function or API call (Kho: pp. 2, 5; Fig 1, 6: decomposed prompt passed to a sub-task handler such as a further prompt, decomposed prompt, and/or symbolic function, wherein the system provides an interface to simpler handler functions in a manner similar to accessing a software library or framework); for one or more sub-prompts in the plurality of simple sub-prompts, determining to invoke an external application from a plurality of external applications accessible by the LLM, based at least in part on: the one or more simple sub-prompts relating to subject matter within a domain of said external application (Kho: § 3.3, 4.4; Fig 6, 11: such as operating an elasticsearch, google call, or other external API call over a sub-task); invoking the external application using the one or more simple sub-prompts (Kho: § 3.3,
4.4; Fig 6, 11: such as operating an elasticsearch, google call, or other external API call over a sub-task); receiving, responsive to invoking the external application using the one or more simple sub-prompts, one or more responses from the external application (Kho: § 3.2, 3.3, 4.4; Fig 6, 11: answers received, combined, etc. as part of a sub-task); generating, by the LLM, a response to the input prompt based at least in part on the one or more responses from the external application (Kho: § 3.2, 3.3, 4.4; Fig 6, 11: answers received, combined, concatenated, merged, etc. as part of a sub-task and in response to the user input); and causing the response to be rendered at the client device (Kho: § 3.2, 3.3, 4.4; Fig 6, 11: appropriately constructed answer presented to a user).

Kho does not explicitly teach the system operative to determine that a given sub-prompt falls within a domain of expertise of an external application accessible by the LLM. In a related field of endeavor, Del teaches or suggests a system and method operable to determine a domain of expertise of a prompt, sub-prompt, etc. (Del: Abstract; ¶ 16: detect domain-specific language within a query, decomposed query, etc., wherein each of a plurality of domains has a language domain detection model), and a detected domain, separate from the user system, is accessed, such as by a domain search engine or other services (Del: ¶ 16, 21, 27).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to detect a domain of expertise in the manner taught or suggested by Del, to determine particular symbolic functions or API calls available upon detected external domains, such that determining the domain and availability indicates a necessary level of simplification or decomposition in the manner taught or suggested by Khot, and for at least the purpose of assisting a user to engage with a chatbot, website, or other data structure of a camera equipment supplier, such as for finding information about, viewing images of, or purchasing a camera or accessories therefor; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 17, Kho in view of Del teaches or suggests: The method of claim 16, wherein decomposing the input prompt into the plurality of simple sub-prompts comprises: decomposing the input prompt into a first set of sub-prompts using the LLM (Kho: § 3.2-3.4; Fig 4: system decomposes input into a plurality of sub-tasks); and iteratively decomposing, using the LLM, each sub-prompt into a further set of sub-prompts until the target simplicity is reached (Kho: § 3.2-3.4, 4.4; Fig 4, 6, 11: system iteratively decomposes until a threshold is reached; a granular sub-task arrived at thereby is processed by an external application, API call, etc.). The claim is thus considered obvious over Kho as modified by Del as addressed in the base claim, as it would have been obvious to apply the further teaching of Kho and/or Del to the modified device of Kho and Del; one of ordinary skill in the art would have expected only predictable results therefrom.
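The claim 15 dispatch step that the Del combination is cited for — detecting a sub-prompt's domain and invoking a matching external application — might look like the following toy sketch. The keyword-based domain check and both stand-in applications are invented for illustration; real systems would use trained domain models and actual API clients.

```python
# Invented stand-in "external applications", keyed by domain name.
EXTERNAL_APPS = {
    "search": lambda q: f"search results for {q!r}",
    "math":   lambda q: str(eval(q)),  # toy calculator; unsafe outside this sketch
}

def domain_of(sub_prompt: str) -> str:
    """Crude domain detection: anything containing a digit is routed to the
    'math' application, everything else falls back to 'search'."""
    if any(ch.isdigit() for ch in sub_prompt):
        return "math"
    return "search"

def answer(sub_prompts: list) -> str:
    """Invoke the matching external application per sub-prompt, then merge
    the responses into a single reply (here, by simple concatenation)."""
    responses = [EXTERNAL_APPS[domain_of(sp)](sp) for sp in sub_prompts]
    return " | ".join(responses)
```

For example, `answer(["2+2", "capital of France"])` sends the first sub-prompt to the calculator and the second to search before merging the two responses.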
Regarding claim 18, Kho in view of Del teaches or suggests: The method of claim 16, further comprising determining that a given sub-prompt, of the sub-prompts, has the target simplicity, determining that the given sub-prompt has the target simplicity comprising one or more of: determining that no further decomposition of the given sub-prompt is achievable by the LLM (Kho: § 3.2-3.4, 4.4, G.2.1; Fig 4: such as by determining the decomposition of a prompt with respect to a complexity threshold, length threshold, etc.); determining that the given sub-prompt falls within a domain of expertise of the external application accessible by the LLM; and/or determining that the LLM classifies the sub-prompt as a simple sub-prompt (Kho: § 3.2-3.4; Fig 4: system decomposes input into a plurality of sub-prompts until the tasks therein do not exceed a determined complexity, length, etc. threshold). The claim is thus considered obvious over Kho as modified by Del as addressed in the base claim, as it would have been obvious to apply the further teaching of Kho and/or Del to the modified device of Kho and Del; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 19, Kho in view of Del teaches or suggests: The method of claim 15, further comprising: for an additional sub-prompt in the plurality of simple sub-prompts, generating an additional response based on processing the additional sub-prompt using the LLM and without invoking any external application using the additional sub-prompt; wherein generating, by the LLM, the response to the input prompt is further based at least in part on the additional response (Kho: § 3.1-3.4, G.2.1: system generates decomposed and nested responses based on the complexity, length, etc. threshold and merges the results thereof; the system optionally, but does not necessarily, performs a call to an external application).
The claim is thus considered obvious over Kho as modified by Del as addressed in the base claim, as it would have been obvious to apply the further teaching of Kho and/or Del to the modified device of Kho and Del; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 20, Kho in view of Del teaches or suggests: The method of claim 15, further comprising: for a further sub-prompt in the plurality of simple sub-prompts: determining to iteratively invoke the external application from the plurality of external applications accessible by the LLM (Kho: § 3.1-3.4, 4.4; Fig 6, 11: such as by multiple invocations of an API call or query of an external application), based at least in part on: the further sub-prompt relating to further subject matter within a further domain of said external application (Kho: § 3.1-3.4, 4.4; Fig 6, 11: such as by multiple invocations of an API call or query of an external application); and receiving, responsive to invoking the further external application using the sub-prompt, a second, etc. response from the external application (Kho: § 3.1-3.4, 4.4; Fig 6, 11: decomposed prompts resolve particular portions of granular information); wherein generating, by the LLM, the response to the input prompt is further based at least in part on the further response (Kho: § 3.1-3.4, 4.4; Fig 6, 11: resolved particular portions of granular information merged, etc. into an appropriate response). Kho strongly suggests the resolving of a further external application diverse from the first external application, as a plurality of external applications are discussed, such as google, elasticsearch, etc. (Kho: § 3.1-3.4, 4.4; Fig 6, 11). Thus Kho is considered to teach the utility of a variety of external applications as claimed, but not to explicitly discuss the employment of a further, second, etc.
external application, such as to address a requirement of a sub-prompt; however, such an approach is considered obvious to try. Kho in view of Del recognizes the problem of addressing requirements of sub-prompts by an external API call; there exist a finite number of ways to accomplish this, such as by limiting the calls of a plurality of sub-prompts to a singular external application or by allowing multiple applications to address the various needs of a plurality of sub-prompts. Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to experiment with potential solutions, such as by calling a plurality of external applications, for at least the purpose of parsimoniously addressing diverse needs within a plurality of sub-prompts; one of ordinary skill in the art would have expected only predictable results therefrom. The claim is thus considered obvious over Kho as modified by Del as addressed in the base claim, as it would have been obvious to apply the further teaching of Kho and/or Del to the modified device of Kho and Del; one of ordinary skill in the art would have expected only predictable results therefrom.

Response to Arguments

Applicant's arguments in concert with claim amendments, see Remarks and Claims, filed 11/12/25, with respect to the rejections of claims 1-14 under 35 USC 103 over Khot and Sahar, claims 15-19 under 35 USC 102 over Khot, and claim 20 under 35 USC 103 over Khot have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Khot in view of Sahar in view of Reza and/or Khot in view of DeLuca.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

/PAUL C MCCORD/
Primary Examiner, Art Unit 2692

Prosecution Timeline

Oct 05, 2023
Application Filed
Aug 09, 2025
Non-Final Rejection — §101, §103
Nov 12, 2025
Examiner Interview Summary
Nov 12, 2025
Response Filed
Nov 12, 2025
Applicant Interview (Telephonic)
Jan 29, 2026
Final Rejection — §101, §103
Apr 01, 2026
Examiner Interview Summary
Apr 01, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603094
ADAPTIVE PROCESSING WITH MULTIPLE MEDIA PROCESSING NODES
2y 5m to grant Granted Apr 14, 2026
Patent 12592238
INFORMATION PROCESSING METHOD, INFORMATION PROCESSING DEVICE, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM STORING INFORMATION PROCESSING PROGRAM
2y 5m to grant Granted Mar 31, 2026
Patent 12593192
MEDIA PLAYBACK BASED ON SENSOR DATA
2y 5m to grant Granted Mar 31, 2026
Patent 12572323
DYNAMIC AUDIO CONTENT GENERATION
2y 5m to grant Granted Mar 10, 2026
Patent 12567003
TECHNOLOGIES FOR DECENTRALIZED FLEET ANALYTICS
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
69%
Grant Probability
96%
With Interview (+26.6%)
3y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 569 resolved cases by this examiner. Grant probability derived from career allow rate.
