Prosecution Insights
Last updated: April 19, 2026
Application No. 19/170,838

METHODS FOR AUTOMATING THE PROMPTING AND POST-PROCESSING OF AI SYSTEMS FOR INTERPRETATION AND REPORTING OF PUBLIC SAFETY DATA, STATISTICS, AND CONTEXTUAL KNOWLEDGE

Non-Final OA: §101, §103
Filed: Apr 04, 2025
Examiner: PADUA, NICO LAUREN
Art Unit: 3626
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Simsi Inc.
OA Round: 1 (Non-Final)
Grant Probability: 10% (At Risk)
Expected OA Rounds: 1-2
Expected Time to Grant: 3y 3m
Grant Probability with Interview: 27%

Examiner Intelligence

Career Allow Rate: 10% (3 granted / 31 resolved; -42.3% vs TC avg)
Interview Lift: +17.2% among resolved cases with an interview
Avg Prosecution: 3y 3m (typical timeline); 51 applications currently pending
Total Applications: 82 across all art units (career history)

Statute-Specific Performance

§101: 40.0% (+0.0% vs TC avg)
§103: 30.8% (-9.2% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 11.4% (-28.6% vs TC avg)
Baseline: estimated Tech Center average. Based on career data from 31 resolved cases.
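The headline figures hang together arithmetically. A minimal sketch of how they relate, assuming (as this dashboard appears to) that the interview lift and Tech Center delta are additive in percentage points:

```python
# Relationships among the examiner metrics shown above.
# Assumption: lifts and deltas are additive in percentage points.
granted, resolved = 3, 31

allow_rate = granted / resolved        # 0.0968, displayed as 10%
with_interview = allow_rate + 0.172    # +17.2% lift, displayed as 27%
tc_avg_estimate = allow_rate + 0.423   # -42.3% vs TC avg implies a TC average near 52%

print(f"{allow_rate:.0%} base, {with_interview:.0%} with interview, "
      f"{tc_avg_estimate:.0%} estimated TC average")
```

So the 27% "with interview" figure is simply the 10% base rate plus the 17.2-point lift, and the TC-average comparison implies a Tech Center baseline around 52%.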

Office Action

Rejection bases: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

2. This is a nonfinal rejection in response to claims filed on 04/04/2025. Claims 1-15 are pending and are examined herein.

Priority

3. The claims hold the priority of the filing of the provisional application #63/575,552 filed on 04/05/2024, which is the effective filing date of the present application.

Drawings

4. The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: in Fig. 4, reference number 1400 is used for "external devices," but 1400 is not mentioned in the description. The specification uses 140 in [0117] for "external devices." Therefore, the applicant may either correct the drawings to reflect the specification by using "140" in the drawings, or correct the specification to state 1400 instead of "140" in paragraph [0117]. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b), are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Rejections - 35 USC § 101

5. 35 U.S.C. 
101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

6. Claims 6-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because the claims are directed to systems/products comprising a "computer readable storage medium"; however, neither the claims nor the specification explicitly limits the computer readable storage medium to non-transitory forms. MPEP 2106.03 specifically states, "Even when a product has a physical or tangible form, it may not fall within a statutory category. For instance, a transitory signal, while physical and real, does not possess concrete structure that would qualify as a device or part under the definition of a machine, is not a tangible article or commodity under the definition of a manufacture (even though it is man-made and physical in that it exists in the real world and has tangible causes and effects), and is not composed of matter such that it would qualify as a composition of matter. Nuijten, 500 F.3d at 1356-1357, 84 USPQ2d at 1501-03. As such, a transitory, propagating signal does not fall within any statutory category." Therefore, since the claims include a scope that encapsulates such transitory forms of signal transmission, the claims fail at step 1 for not being directed to a statutory subject matter category. The applicant may overcome this rejection by limiting the computer-readable media to "non-transitory" forms, provided that the specification would have support for such amended claims. Furthermore, the applicant may amend the claims in any other manner that would ensure that the claims recite an eligible subject matter category. 
For purposes of compact prosecution, the claims are reanalyzed under the full two-step eligibility approach, as if they passed step 1.

7. Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Is the claim to a Process, Machine, Manufacture, or Composition of Matter?

Claims 1-5 are directed to: A method of textual report generation, comprising: Claims 6-10 are directed to: A computer program product for textual report generation, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to perform a method, the method comprising: Claims 11-15 are directed to: A system comprising: a computing node comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor of the computing node to cause the processor to perform a method comprising:

Claims 1-5 are directed to a process, and claims 6-15 are directed to at least a "machine or manufacture"; therefore the claims fall under at least one of the potentially eligible subject matter categories: process, product, or machine. Therefore, the claims are further analyzed under step 2 of the two-step eligibility analysis.

Step 2A Prong 1: Is the claim directed to a Judicial Exception (A Law of Nature, a Natural Phenomenon (Product of Nature), or An Abstract Idea)?

The claims under the broadest reasonable interpretation in light of the specification are analyzed herein. 
Representative claims 1, 6, and 11 are marked up, isolating the abstract idea from additional elements, wherein the abstract idea is set in bold and the additional elements have been italicized as follows:

Claim 1 Preamble: A method of textual report generation, comprising:

Claim 6 Preamble: A computer program product for textual report generation, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to perform a method, the method comprising:

Claim 11 Preamble: A system comprising: a computing node comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor of the computing node to cause the processor to perform a method comprising:

Claim 1 body (representative of claims 6 and 11, which are identical): receiving a proposed report description and a report intent; providing the proposed report description and the report intent to a first large language model; prompting the first large language model to output a quality rating of the proposed report description with respect to the report intent; prompting a user to revise the proposed report description until the quality rating exceeds a predetermined threshold; providing the proposed report description and the report intent to a second large language model; and prompting the second large language model to generate a report according to the proposed report description and the report intent.

When evaluating the bolded limitations of the claims under the broadest reasonable interpretation in light of the specification, it is clear that representative claims 1, 6, and 11 are directed to the abstract idea category of certain methods of organizing human activity. 
This abstract idea grouping, found in MPEP 2106.04(a)(2)(II), includes claims to "managing personal behavior or relationships or interactions between people." The present invention is directed to this subcategory, which includes social activities, teaching, and following rules or instructions, because the bolded limitations encapsulate the scope of mere rules or instructions to a person to generate a report. For example, receiving a proposed report description and report intent, providing it to a model to output a quality rating of the description with respect to the intent, and prompting the user to revise the proposed report description are merely instructions to manage the personal behavior of a user. Since the claims cover any use of a model to generate the quality rating and the report, the claims still involve personal behavior and human interactions to perform such steps. When claimed as broadly as here, the limitations are no more than certain methods of organizing human activity and are therefore a recitation of at least one abstract idea category.

In addition, because the claims amount to merely "collecting information, analyzing it, and displaying certain results of the collection and analysis," where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, the claims recite a mental process. Mental processes are another abstract idea category that includes concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions. See MPEP 2106.04(a)(2)(III). Since a person is capable of using observations, evaluations, judgments, and opinions to generate a quality rating, revise a report description until the quality exceeds a threshold, and generate a report, the claims recite an abstract idea.

Step 2A Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? 
The claims include the following additional elements:

- first large language model and second large language model (in claims 1, 6, 11)
- a computer program product comprising a computer readable storage medium having program instructions embodied therewith (in claims 1, 6, 11)
- the program instructions executable by a processor to cause the processor to perform a method (in claims 1, 6, 11)
- a system comprising: a computing node comprising a computer readable storage medium having program instructions embodied therewith (in claims 1, 6, 11)
- the program instructions executable by a processor of the computing node to cause the processor to perform a method (in claims 1, 6, 11)

The additional elements are no more than a recitation of the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a generic computer. In this case, the abstract idea of receiving a proposed report description and report intent, providing it to a model to output a quality rating of the description with respect to the intent, and prompting the user to revise the proposed report description is merely instructed to be performed as program instructions on a computer program product, computer readable storage medium, processor, or system comprising a computing node. Please review MPEP 2106.05(f) for more information regarding Mere Instructions To Apply An Exception. Furthermore, it is clear from the specification, in at least paragraphs [00111-00115], that the computing infrastructure on which the functions are performed is generic and does not recite any improvements to computers or computer functionality. See MPEP 2106.05(a) for more information. Furthermore, the additional element of the large language models is an example of generally linking the use of a judicial exception to a particular technological environment or field of use, as outlined by MPEP 2106.05(h). 
The models on which the abstract idea is performed are merely black-box models, in that the claims recite at most the inputs and intended outputs without specifying how the functions arrive at the intended outcome. Therefore, limiting such models to "large language models" merely generally links the abstract idea to a particular technological environment or field of use. MPEP 2106.05(h) states, "As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible 'simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use.'" Generally linking the abstract idea of generating a quality rating and a report to the field of large language models does not meaningfully limit the claim because the claims do not add any functionality beyond what is inherently a feature of large language models. Even when considering the additional elements individually, or the combination of the large language models and the computing infrastructure, the additional elements still fail to integrate the abstract idea into a practical application, because merely running large language models on a generic computer is still not an improvement to computer technology, nor does it provide any additional functionality beyond what a generic computer is capable of. Therefore, none of the additional elements have been found to integrate the abstract idea into a practical application.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? 
The same additional elements set forth in the Prong 2 rejection are also analyzed for whether they recite an inventive concept, the additional elements being repeated as follows:

- first large language model and second large language model (in claims 1, 6, 11)
- a computer program product comprising a computer readable storage medium having program instructions embodied therewith (in claims 1, 6, 11)
- the program instructions executable by a processor to cause the processor to perform a method (in claims 1, 6, 11)
- a system comprising: a computing node comprising a computer readable storage medium having program instructions embodied therewith (in claims 1, 6, 11)
- the program instructions executable by a processor of the computing node to cause the processor to perform a method (in claims 1, 6, 11)

The additional elements have also not been found to include significantly more such that they would constitute an inventive concept, for the same reasons set forth in Prong 2: the additional elements are either generic computing devices on which the abstract idea is implemented (computer program product, computer readable storage medium, processor) and/or a technological environment to which the abstract idea is generally linked (large language models). Furthermore, no improvements to the computers or technology have been purported, because generic computing devices are known to be able to handle the software functions claimed, such as using large language models to generate quality ratings and reports. Please review MPEP 2106.05(a) for more information regarding improvements to computing devices (Section I) or technological fields (Section II). Even when viewed as a whole, the elements do not provide significantly more, because they do not meaningfully limit the claim. Therefore, the claims do not include additional elements that provide significantly more such that they could be considered an inventive concept. 
The dependent claims 2-5, 7-10, and 12-15 are also given the full two-part analysis, individually and in combination with the claims they depend on, as follows:

Claims 2, 7, and 12 merely further limit the abstract idea by adding the additional steps of providing a user interface to the user and receiving the proposed report description and report intent from the user via the user interface. The same abstract idea of "certain methods of organizing human activity" still applies because these steps merely indicate how the data is collected or received. In this case, the "user interface" is an additional element that still falls under "apply it" per MPEP 2106.05(f), because it is a device used in its ordinary capacity to perform economic tasks (such as an interface being used to capture input data). Furthermore, MPEP 2106.04(a)(2)(II) states, "Finally, the sub-groupings encompass both activity of a single person (for example, a person following a set of instructions or a person signing a contract online) and activity that involves multiple people (such as a commercial interaction), and thus, certain activity between a person and a computer (for example a method of anonymous loan shopping that a person conducts using a mobile phone) may fall within the 'certain methods of organizing human activity' grouping." Therefore, using a user interface to capture activity between a person and a computer still falls within the certain methods of organizing human activity grouping. Thus, even when considering the additional elements individually or in combination, the claims still fail to integrate the abstract idea into a practical application. Even when viewed as a whole, nothing in the claims meaningfully limits them such that they amount to significantly more than the abstract idea. Thus claims 2, 7, and 12 remain patent ineligible.

Claims 3, 8, and 13 further limit the abstract idea by adding the step of outputting the report to the user. 
This is merely a display of the data without specifying a particular structure or format in which the results of the data processing steps are to be presented. Thus it is still more of the same abstract idea, and there are no further additional elements to consider. Even when considering these functions in combination with the previous additional elements, the claims still fail to integrate the abstract idea into a practical application or provide significantly more. Thus claims 3, 8, and 13 remain patent ineligible.

Claims 4, 9, and 14 further limit the abstract idea by defining the quality rating as "corresponds to ambiguity." This is still part of the same abstract idea because it still broadly recites how the quality rating is achieved and fails to identify a particular set of steps; therefore it is still an abstract idea under "certain methods of organizing human activity" and "mental processes," because it is merely instructions to a person to determine a quality rating in a manner that can be performed in the human mind. Thus it is still more of the same abstract idea, and there are no further additional elements to consider. Even when considering these functions in combination with the previous additional elements, the claims still fail to integrate the abstract idea into a practical application or provide significantly more. Thus claims 4, 9, and 14 remain patent ineligible.

Claims 5, 10, and 15 further limit the abstract idea by adding the additional steps of retrieving contextual data and providing the contextual data to the second large language model. 
This is still more of the same abstract idea because the contextual data is still broad enough to be no more than a claim to "collecting information, analyzing it, and displaying certain results of the collection and analysis," where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind. Furthermore, the instructions are broad enough that they encompass instructions to a person to retrieve the contextual data. The additional element of the "large language model," even when considering these additional steps, is still an example of "generally linking" the abstract idea to large language models (MPEP 2106.05(h)). Furthermore, even when considered individually or in combination, the claims still fail to integrate the abstract idea into a practical application. Even when viewed as a whole, nothing in the claims meaningfully limits them such that they amount to significantly more than the abstract idea. Thus claims 5, 10, and 15 remain patent ineligible.

Claim Rejections - 35 USC § 103

8. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

9. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. 
Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

10. Claims 1-3, 5-8, 10-13, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Zadeh et al. (US Patent Application Publication US 20250095096 A1, filed on 2024-09-13, with priority to US Provisional Application #63/538,760 filed on 2023-09-15, a copy of whose original disclosure has been furnished), hereinafter Zadeh, in view of Wahed et al. (US 20250292093 A1), hereinafter Wahed.

Regarding Claims 1, 6, and 11: Zadeh discloses officer-in-the-loop report generation software which uses LLMs and prompt engineering. Zadeh teaches:

Claim 1 Preamble: A method of textual report generation, comprising: (Zadeh [0022] In the example architecture of FIG. 1, report generation platform 190 can include semantic search agent 120 to provide semantic search functionality, report database 125 to provide example reports, large language model 130 to generate report contents and/or hallucination gate 140 to provide content guardrails on reports generated by LLM 130.)

Claim 6 Preamble: A computer program product for textual report generation, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to perform a method, the method comprising: (Zadeh [0066] Computer system 1200 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 1200 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 1200 in response to processor 1204 executing one or more sequences of one or more instructions contained in main memory 1206. 
Such instructions may be read into main memory 1206 from another storage medium, such as storage device 1210. Execution of the sequences of instructions contained in main memory 1206 causes processor 1204 to perform the process steps described herein [0067] The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media.) Claim 11 Preamble: A system comprising: a computing node comprising a computer readable storage medium having program instructions embodied therewith, (Zadeh [0069] Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor 1204 for execution. For example, the instructions may initially be carried on a magnetic disk or solid-state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem.) A remote computer satisfies the limitation “a computing node comprising a computer readable storage medium.” - the program instructions executable by a processor of the computing node to cause the processor to perform a method comprising: (Zadeh [0069] Bus 1202 carries the data to main memory 1206, from which processor 1204 retrieves and executes the instructions. The instructions received by main memory 1206 may optionally be stored on storage device 1210 either before or after execution by processor 1204.) Claim 1 Body (representative of claims 6 and 11): - receiving a proposed report description and a report intent; (Zadeh [0023] The initial input 100 may include free-style text 110 and structured inputs 115. The free-style text 110 may include notes or other unstructured or semi-structured text describing the incident in natural language. The free-style text 110 may be used as input to large language model (LLM) 130. 
In addition to the free-style text 110, the reporter interface 105 may present to the user (e.g., via a GUI) one or more input fields that can accept text entry and/or one or more fields that provide corresponding dropdown menus for selection of input values; the data from the input fields may be provided to the report generation platform 190 as structured inputs 115. [0027] This process provides retrieved content that closely aligns with the intent and context of the target input. [0030] Prompts for LLM 130 may be constructed to guide LLM 130 in generating the desired content. A prompt structure can comprise, for instance, a general instruction to set the context for LLM 130, an example input, an example report, and target input (which can be demarcated by special characters). As described in greater detail below, prompt maturity can be provided by refining the instructions and examples provided to LLM 130 to align with the desired output.) The broadest reasonable interpretation of a proposed report description is any textual description of a report, such as a report draft, or non-formatted information that is to be placed in a report, such as contextual information. See present specification [0053]. The BRI of report intent is the objective of the prompt, in other words, the task. Since [0023] describes unstructured or semi-structured text describing the incident, the limitation of proposed report description has been satisfied. Since the model also receives the intent and context of the target input, the report intent limitation has been satisfied. - providing the proposed report description and the report intent to a first large language model; (Zadeh [0033] In some implementations, the initial input 100 may be separated into multiple different sections. These sections may be created so that text in each section is related to one particular topic or subject matter. The LLM 130 may generate one or more response(s) using each section as input. 
That way, when generating a response for a single section, the LLM 130 has a specific, well-defined task that will reduce the likelihood of hallucination. In an example, the LLM 130 may generate multiple responses for each section and select the most desirable among them using the techniques described above. The selected responses for each section may then be assembled into a single response. The LLM 130 may output this single response as an intermediate draft report 135.) The initial inputs are provided to the first LLM above. - prompting the first large language model to determine whether the version of the proposed report description is satisfactory to the user; (Zadeh [0058] At step 1030, the report generation platform 190 may determine whether the version of the generated report 150 is satisfactory to the user of the reporter interface 105. The user can indicate whether or not the version of the generated report 150 is satisfactory by, for instance, interacting with a selectable component of the reporter interface 105.) Contrary to the present limitation, Zadeh does not teach that the LLM is prompted to output a quality rating, however, it does evaluate whether the report is satisfactory to the user. - prompting a user to revise the proposed report description until the proposed report description is satisfactory (Zadeh [0048] When the first version of the incident report 700 is provided to the reporter interface 105, a user may use the reporter interface 105 to provide feedback 710 as described in FIG. 1 with relation to the revised inputs 155. The user may revise the text of the first version of the report 700, edit one or more of the input fields used to generate the first version of the report 700, or provide instructions to the LLM 130 for revising the report. In this example, the feedback 710 includes edited input fields 720 and edited output 730 from the first version of the report 700. 
[0059] At step 1035, if the generated report is not satisfactory, the reporter interface 105 obtains user generated input corresponding to the reviewable draft version via the user interface(s). The steps are then repeated until the report generation platform 190 determines that the generated report 150 is satisfactory at step 1030.) Zadeh teaches prompting a user to revise the proposed report description until the proposed report description is satisfactory, however, the present claims require more specifically “until the quality rating exceeds a predetermined threshold” which is not taught by Zadeh. - providing the proposed report description and the report intent to a second large language model; and (Zadeh [0059] The user generated input and the reviewable draft version of the report are provided as a prompt to the LLM 130, and the process proceeds back to step 1010 to cause the LLM 130 to generate an updated version of the incident report. [0039] Further, public safety examples include a network of interconnected cells utilizing LLMs within respective user feedback loops. ) Zadeh’s LLM 130, which generates an updated version of the incident report after user feedback, falls within the scope of the second large language model. - prompting the second large language model to generate a report according to the proposed report description and the report intent. (Zadeh [0050] FIG. 9 illustrates a first version of a report 700 and a second version of the report 900. The first version of the report 700 in this example is that shown in FIG. 7, which was generated using the approach described in FIG. 1. The second version of the report 900 is generated using the example prompt 800 shown in FIG. 8. The second version of the report 900 is therefore generated based on the first version of the report 700, the edited input fields 720, and the edited output 730.) 
However, Zadeh fails to teach:

- prompting the first large language model to output a quality rating of the proposed report description with respect to the report intent; (Zadeh does teach evaluating whether the proposed report is satisfactory, but does not specifically recite outputting a quality rating of the proposed report description with respect to the report intent.)

- that prompting a user to revise the proposed report description is done until the quality rating exceeds a predetermined threshold; (Zadeh does teach prompting a user to revise the proposed report description until it is satisfactory, but not specifically until a quality rating exceeds a predetermined threshold.)

Alternatively, Wahed discloses a prompt optimization technique which uses human-AI collaboration to enable effective and efficient use of LLMs. Wahed teaches:

- prompting the first large language model to output a quality rating of the proposed report description with respect to the report intent; (Wahed [0032] In an embodiment, a computing system provides inexpensive and efficient methods of boosting LLM performance to provide enhanced value across a wide array of downstream tasks. Disclosed embodiments leverage a small language model to generate and refine prompts, tailored for a specific task, where the prompts are scored according to the output from a large language model. When the output meets predefined criteria, the computing system accepts the output as successful. Otherwise, the computing system continues the prompt optimizing process for further enhancement. The disclosed dynamic, iterative process can ensure that the computing system generates effective prompts for each specific task, thereby maximizing the potential of a given large language model. 
[0042] At block 308, system 200 calculates a heuristic, based on responses to candidate prompts describing tasks from LLM 202, for example to determine whether to use human feedback input to optimize the candidate prompts (e.g., using a pairwise similarity matrix of responses).) In Wahed, the prompts are tailored for a specific task, where "task" is mapped to "intent" and a prompt is an example of a "proposed report description." The prompts are scored according to the output from a large language model, and in [0042] the system calculates a heuristic based on how well the responses to the prompts describe the tasks. Therefore, the limitation has been satisfied.

- prompting a user to revise the proposed report description until the quality rating exceeds a predetermined threshold; (Wahed [0052] At block 408, system 200 compares the calculated heuristic s with a predefined threshold value β (e.g., heuristic s < predefined threshold value) to determine whether to present a prompt for receiving a user feedback input. [0053] In an embodiment, for example, system 200 determines to involve a human in the loop when the calculated heuristic s is less than the predefined threshold value β. Otherwise, optimizing prompts continues without the human based on determining that the calculated heuristic s is equal to or greater than the predefined threshold value β.) In Wahed, when the heuristic is below the threshold, the user's feedback input is invoked; otherwise, when the heuristic is equal to or greater than the threshold, the system continues without the human. Since the heuristic is an example of a "quality rating," the limitation has been satisfied.
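For contrast with Zadeh's bare satisfactory/unsatisfactory check, the threshold-gated loop that Wahed contributes can be sketched as follows (a minimal sketch, not either reference's implementation: `score_description` is a hypothetical stand-in for the LLM-derived heuristic s, and 0.5 stands in for the threshold β):

```python
def score_description(description: str, intent: str) -> float:
    """Hypothetical stand-in for the heuristic s: fraction of the intent's
    words appearing in the proposed description. A real system would score
    LLM responses instead (e.g., via a pairwise similarity matrix)."""
    desc = set(description.lower().split())
    goal = set(intent.lower().split())
    return len(desc & goal) / len(goal)

def refine_until_threshold(description, intent, revisions, threshold=0.5):
    """While s < threshold, involve the human (take the next user revision);
    once s >= threshold, continue without the human, per Wahed [0052]-[0053]."""
    rounds = 0
    for revised in revisions:
        if score_description(description, intent) >= threshold:
            break                        # quality rating exceeds the threshold
        description, rounds = revised, rounds + 1
    return description, rounds

final, rounds = refine_until_threshold(
    "notes",
    "burglary report with suspect description",
    ["burglary report", "burglary report with suspect description"],
)
```

Under the examiner's mapping, `score_description` plays the role of the first LLM's quality rating and the revision list plays the role of the prompted user; the explicit numeric comparison against `threshold` is the feature the rejection imports from Wahed.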
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Zadeh with the teachings of Wahed by simply substituting Zadeh's evaluation of a satisfactory report, in which the user is prompted to revise the report until it is satisfactory, with Wahed's more specific method, in which the quality rating takes the form of a heuristic and the user's revisions are prompted until the heuristic exceeds a threshold. One would reasonably expect to arrive at the predictable outcome above, since Wahed's substitution merely specifies how the "satisfactory" determination is performed analytically. One of ordinary skill in the art would have been motivated by the benefit of Wahed's enhanced processing speed and reduction in computing power. (Wahed [0012] The described techniques can enable enhanced processing speed, reducing an overall computer system time used for implementing optimized prompt generation over traditional baseline prompt generation arrangements. In this manner, the embodiments herein can improve the performance of the computing system executing the LLM.)

Regarding Claims 2, 7, and 12: The combination of Zadeh and Wahed teaches or suggests the method of claim 1, the computer program product of claim 6, and the system of claim 11. Furthermore, Zadeh teaches claim 2 (also representative of claims 7 and 12):

- further comprising: providing a user interface to a user; and (Zadeh [0023] In the example of FIG. 1, a user of a reporter interface 105 is responsible for an incident report for a previous incident.)

- receiving from the user via the user interface the proposed report description and the report intent. (Zadeh [0023] In the process of responding to the incident, the user interacts with the reporter interface 105 to generate initial input 100, which may be provided to the report generation platform 190. The initial input 100 may include free-style text 110 and structured inputs 115.
The free-style text 110 may include notes or other unstructured or semi-structured text describing the incident in natural language. The free-style text 110 may be used as input to large language model (LLM) 130. In addition to the free-style text 110, the reporter interface 105 may present to the user (e.g., via a GUI) one or more input fields that can accept text entry and/or one or more fields that provide corresponding dropdown menus for selection of input values; the data from the input fields may be provided to the report generation platform 190 as structured inputs 115.)

Regarding Claims 3, 8, and 13: The combination of Zadeh and Wahed teaches or suggests the method of claim 2, the computer program product of claim 7, and the system of claim 12. Furthermore, Zadeh teaches claim 3 (also representative of claims 8 and 13):

- further comprising: outputting the report to the user. (Zadeh [0060] At step 1040, the report generation platform 190 may output the most recent version of the generated report 150 as the finalized report. The process may then proceed to completion.)

Regarding Claims 5, 10, and 15: The combination of Zadeh and Wahed teaches or suggests the method of claim 1, the computer program product of claim 6, and the system of claim 11. Furthermore, Zadeh teaches claim 5 (also representative of claims 10 and 15):

- further comprising: based on the proposed one or more of the report description and report intent, retrieving contextual data corresponding thereto; (Zadeh [0041] In some implementations, the semantic search agent 210 may conduct a separate semantic search for each section of the incident report to be generated. In such cases, the semantic search engine 210 may, for each section of the incident report, identify sections of one or more example reports from the report database 220 that are most relevant to the initial inputs 100 provided by the user in the reporter interface 105.
For example, if the structured inputs 115 in the initial inputs 100 indicate that an incident type is "burglary", then the semantic search agent 210 can identify example reports for "burglary" incidents. However, the semantic search agent 210 may also consider other contextual information from the initial inputs 100 to identify relevant example reports that may not be classified as "burglary".) In this excerpt, Zadeh contains a semantic search agent that analyzes the initial inputs (mapped to the proposed report description and report intent) to retrieve contextual information.

- providing the contextual data to the second large language model with the proposed report description and the report intent. (Zadeh [0045] The prompt 400 comprises a general instruction 403 to set the context for the LLM 130 and target input 406. The target input 406 includes the free-style text 110 and the structured inputs 115 from the initial input 100 received from the reporter interface 105. [0046] FIG. 5 illustrates an example prompt 500 to generate a second version of a generated report 150. In this example, the prompt 500 has been generated using a context learning approach...The prompt 500 includes an example report 503, such as the example reports (e.g., 230, 231, 232, 233, 234, 235) obtained by the semantic search engine 210 in FIG. 2. Inclusion of the example report 503 in the prompt 500 may provide context to improve the quality of the generated report 150 generated by the LLM 120 by providing a close example of what kind of incident report 150 the LLM 120 should generate.)

11. Claims 4, 9, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Zadeh (US 20250095096 A1), in view of Wahed (US 20250292093 A1), and further in view of Bianchini et al. (US 20250139151 A1), hereinafter Bianchini.
Regarding Claims 4, 9, and 14: The combination of Zadeh and Wahed teaches or suggests the method of claim 1, the computer program product of claim 6, and the system of claim 11. However, neither Zadeh nor Wahed teaches the limitations of claim 4 (also representative of claims 9 and 14):

- wherein the quality rating corresponds to ambiguity.

Alternatively, Bianchini discloses generating an output in response to a textual input on a chatbot interface, using a large language model to output keywords, a rephrased version of the textual input, and an intent sentence. Bianchini teaches:

- wherein the quality rating corresponds to ambiguity. (Bianchini [0005] For example, when receiving user inputs via a chatbot interface for generating an electronic report, the large language model (LLM) may encounter challenges with ambiguous queries, leading to incorrect interpretations or responses. To address the inherent ambiguity of natural language, the computing architecture can integrate a generative AI model to reformulate ambiguous queries into more precise and unambiguous forms. This reformulation process can improve the LLM's ability to accurately interpret user intent. For example, the intent sentence can provide the LLM with a clear understanding of the desired outcome or action within the context of the electronic report, thereby improving the relevance and accuracy of the generated output. [0075] The ontology 600 can associate probability scores with individual nodes or concepts to reflect their relative likelihood or importance within a given dataset. A probability score can correspond to a numerical value that indicates the likelihood of a particular concept or data structure being relevant, frequently encountered, or preferred in a specific context.
The probability scores can be based on various factors, such as the frequency of occurrence, which indicates how often a concept appears in the data, the relevance to the domain, which indicates how important the concept is within the specific context, or user preferences, which indicate the patterns or behaviors of users interacting with the ontology, such as commonly selected options or frequently accessed information. The ontology 600 can be used to prioritize and rank the most relevant information based on the associated probability scores. The higher the probability score, the more likely the system may prioritize that concept in tasks such as search queries, report generation, or data analysis.) Since the probability score is based on relevance to the domain, i.e., how important the concept is within the context, which addresses ambiguity, the limitation is met.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the combination of Zadeh and Wahed by adding Bianchini's teaching that the quality rating corresponds to ambiguity. While Wahed does mention prompt relevance in [0054], Wahed does not make this connection to ambiguity as Bianchini does in [0005]. One of ordinary skill in the art would have been motivated to combine these concepts because reducing ambiguity in LLM systems increases performance and accuracy. (Bianchini [0004] In this regard, challenges can arise in processing prompts and understanding user intent within a specific domain due to factors such as ambiguity, context, and variations in language. Additionally, efficiently managing and accessing data from various sources, including ontologies and databases, can be challenging, particularly when dealing with large and diverse datasets.
Coordinating different components, such as language models, data retrieval systems, and ontologies, to facilitate seamless operation and data exchange across interconnected systems, while addressing issues with performance enhancement and integration across various operational layers, can also present complexities.)

Conclusion

12. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

- Steven Brandon Ward (US 20250238888 A1) discloses automatically drafting police reports using artificial intelligence to reduce bias in policing.

- Kiran Kumar Bathula (US 20250246188 A1) discloses an autonomous fraud/AML reporting system that automates suspicious activity report narratives using generative AI large language models; the narratives can be generated piece-by-piece by providing examples through prompts and feedback, which allows for more specific and targeted narratives and validation of such narratives.

13. Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICO LAUREN PADUA, whose telephone number is (703) 756-1978. The examiner can normally be reached Mon to Fri, 8:30 am to 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jessica Lemieux, can be reached at (571) 270-3445. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NICO L PADUA/
Junior Patent Examiner, Art Unit 3626

/JESSICA LEMIEUX/
Supervisory Patent Examiner, Art Unit 3626

Prosecution Timeline

Apr 04, 2025
Application Filed
Feb 09, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586035
INTERACTIVE USER INTERFACE FOR SYSTEM
2y 5m to grant Granted Mar 24, 2026
Patent 12523701
METHOD FOR MANAGING BATTERY RECORD AND APPARATUS FOR PERFORMING THE METHOD
2y 5m to grant Granted Jan 13, 2026
Patent 11881521
SEMICONDUCTOR DEVICE
2y 5m to grant Granted Jan 23, 2024
Based on the 3 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
10%
Grant Probability
27%
With Interview (+17.2%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 31 resolved cases by this examiner. Grant probability derived from career allow rate.
