Prosecution Insights
Last updated: April 18, 2026
Application No. 19/187,893

IMPLEMENTING FIRST CONTACT AND PERSONALIZED REMINDER STRATEGIES FOR INFORMATION GATHERING PROCESSES

Non-Final OA: §101, §102, §103
Filed
Apr 23, 2025
Examiner
SANTOS-DIAZ, MARIA C
Art Unit
3629
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Assured Insurance Technologies, Inc.
OA Round
3 (Non-Final)
Grant Probability: 33% (At Risk)
OA Rounds: 3-4
To Grant: 4y 3m
With Interview: 63%

Examiner Intelligence

Career Allow Rate: 33% (97 granted / 291 resolved; -18.7% vs TC avg)
Interview Lift: +30.0% (resolved cases with vs without interview)
Avg Prosecution: 4y 3m typical timeline; 35 currently pending
Total Applications: 326 across all art units
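The headline numbers above are internally consistent and can be reproduced from the reported counts. A minimal sketch (the +30% interview lift is applied as the reported delta, since per-interview counts are not broken out in this report):

```python
# Reproduce the dashboard figures from the reported counts.
granted, resolved = 97, 291

allow_rate = granted / resolved               # career allow rate
tc_average = allow_rate + 0.187               # report shows -18.7% vs TC avg

interview_lift = 0.30                         # reported lift with interview
with_interview = allow_rate + interview_lift  # grant probability with interview

print(f"Career allow rate: {allow_rate:.1%}")   # 33.3%
print(f"Implied TC average: {tc_average:.1%}")  # 52.0%
print(f"With interview: {with_interview:.0%}")  # 63%
```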

Statute-Specific Performance

§101: 26.3% (-13.7% vs TC avg)
§103: 27.8% (-12.2% vs TC avg)
§102: 21.7% (-18.3% vs TC avg)
§112: 22.3% (-17.7% vs TC avg)
Deltas measured against a Tech Center average estimate • Based on career data from 291 resolved cases
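The four per-statute deltas all point at the same Tech Center baseline, which suggests they were computed against a single estimate. A quick consistency check:

```python
# Per-statute overcome rates and their reported deltas vs the TC average.
rates  = {"101": 0.263, "103": 0.278, "102": 0.217, "112": 0.223}
deltas = {"101": -0.137, "103": -0.122, "102": -0.183, "112": -0.177}

# rate - delta recovers the implied Tech Center average for each statute
implied = {s: round(rates[s] - deltas[s], 3) for s in rates}
print(implied)  # every statute implies the same 40.0% TC average
```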

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/29/2026 has been entered.

Status of the Application

This is a Non-Final Action in response to the claim amendments submitted on 01/29/2026. Claims 1-2, 4-5, 7, 9-10, 12-13, 15, 17-18 and 20 are amended. Claims 1-20 are pending and examined below.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1-20 are rejected under 35 U.S.C. 101 because the claims are directed to an abstract idea without significantly more. With respect to Step 1 of the eligibility inquiry (as explained in MPEP 2106), it is first noted that the claims are directed to at least one potentially eligible category of subject matter (i.e., process and machine, respectively). Thus, Step 1 of the Subject Matter Eligibility test for claims 1-20 is satisfied.
With respect to Step 2A Prong One, it is next noted that the claims recite an abstract idea that falls under the “Certain Methods Of Organizing Human Activity” group within the enumerated groupings of abstract ideas set forth in the MPEP 2106, since the claims set forth steps that recite managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). Claims 1, 9 and 17 recite the abstract idea of claim processing in the event of an accident (see paragraph 031). This idea is described by the following claim steps: receiving information from a first source relating to a claim event, the information identifying multiple individuals, other than the first source, to provide additional information for the claim event; implementing an information gathering process for each of the multiple individuals by: initiating first contact through communications with each of the multiple individuals to receive additional information pertaining to the claim event; for at least some of the multiple individuals, after initiating first contact, authenticating the individual; implementing a customized content flow that is customized for the individual based at least in part on the information that the individual has previously provided about the claim event; determining a responsiveness of each of the multiple individuals with the respective customized content flow; communicating a set of reminders to each of the multiple individuals, in accordance with the corresponding optimized reminder strategy for each of the multiple individuals; wherein the corresponding reminder strategy for each of the multiple individuals is dynamically adapted for a communication type and a cadence that is determined to increase engagement from the individual, so as to expedite the information gathering process for each of the multiple individuals and the overall claim process.
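For orientation only, the recited steps can be read as the pipeline sketched below. All names and thresholds are hypothetical, and the rule table stands in for the claimed machine-learning model; this is one illustrative reading of the claim language, not the applicant's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Individual:
    """A witness or other party identified by the first source (hypothetical model)."""
    contact: str
    responsiveness: float = 0.0  # e.g., fraction of the content flow completed
    answers: dict = field(default_factory=dict)
    strategy: dict = field(default_factory=dict)

def reminder_strategy(responsiveness: float) -> dict:
    """Stand-in for the claimed ML model: choose a communication type and
    cadence intended to increase engagement for this individual."""
    if responsiveness < 0.3:
        return {"channel": "sms", "cadence_hours": 12}
    if responsiveness < 0.7:
        return {"channel": "email", "cadence_hours": 24}
    return {"channel": "app_notification", "cadence_hours": 72}

def gather_information(report: dict) -> list:
    """Illustrative pipeline: first contact, authentication, customized flow,
    responsiveness measurement, then a per-individual reminder strategy."""
    individuals = [Individual(contact=c) for c in report["witnesses"]]
    for person in individuals:
        # 1. first contact and authentication (e.g., via a selectable link)
        # 2. customized content flow would run here, filling person.answers
        # 3. measure responsiveness (here taken from the report for brevity)
        person.responsiveness = report["observed_responsiveness"][person.contact]
        # 4. adapt communication type and cadence to the individual
        person.strategy = reminder_strategy(person.responsiveness)
    return individuals

report = {
    "witnesses": ["witness-a", "witness-b"],
    "observed_responsiveness": {"witness-a": 0.1, "witness-b": 0.9},
}
for p in gather_information(report):
    print(p.contact, p.strategy["channel"], p.strategy["cadence_hours"])
```

In this sketch the barely engaged witness gets frequent SMS nudges while the responsive one gets occasional app notifications; the claims' asserted point of novelty is making that channel/cadence choice per individual by an executed machine-learning model rather than a fixed rule table.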
This idea falls within the certain methods of organizing human activity grouping of abstract ideas because it is directed towards managing interactions between people such as that required during communications when filing a claim for an event such as an accident. Because the above-noted limitations recite steps falling within the Certain Methods Of Organizing Human Activity abstract idea groupings of the MPEP 2106, they have been determined to recite at least one abstract idea when evaluated under Step 2A Prong One of the eligibility inquiry. Therefore, because the limitations above set forth activities falling within the Certain Methods Of Organizing Human Activity abstract idea groupings described in the MPEP 2106, the additional elements recited in the claims are further evaluated, individually and in combination, under Step 2A Prong Two and Step 2B below. Claims 12 and 19 recite similar limitations as claim 1 and are therefore determined to recite the same abstract idea. With respect to Step 2A Prong Two, the judicial exception is not integrated into a practical application. The additional elements that fail to integrate the abstract idea into a practical application are: a network communication interface, communicatively coupled to one or more networks; one or more processors, communicatively coupled to the network communication interface; a memory, communicatively coupled to the one or more processors and storing instructions; transmitting a selectable link; implementing a set of user interface features provided on the corresponding computing device; execution of a machine-learning model; a computing device of a user; a computing device of the one or more individuals; a non-transitory computer readable medium. However, using a computer environment such as a network and other recited computer elements amounts to no more than generally linking the use of the abstract idea to a particular technological environment.
Filing a claim for an event such as an accident and generating a reminder strategy can reasonably be performed by pencil and paper until limited to a computerized environment by requiring the use of the computing devices. These additional elements have been evaluated, but fail to integrate the abstract idea into a practical application because they amount to using generic computing elements or computer-executable instructions (software) to perform the abstract idea, similar to adding the words “apply it” (or an equivalent), and alternatively serve to link the use of the judicial exception to a particular technological environment. See MPEP 2106.05(f) and 2106.05(h). Regarding the use of machine-learning, the examiner views these additional elements as results-oriented steps given that there is no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result is currently present, such that this is viewed as equivalent to “apply it” for merely implementing the abstract idea using generic computing components (See Id.). In addition, these limitations fail to provide an improvement to the functioning of a computer or to any other technology or technical field, fail to apply the exception with a particular machine, fail to apply the judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, fail to effect a transformation of a particular article to a different state or thing, and fail to apply/use the abstract idea in a meaningful way beyond generally linking the use of the judicial exception to a particular technological environment.
Accordingly, because the Step 2A Prong One and Prong Two analysis resulted in the conclusion that the claims are directed to an abstract idea, additional analysis under Step 2B of the eligibility inquiry must be conducted in order to determine whether any claim element or combination of elements amounts to significantly more than the judicial exception. With respect to Step 2B of the eligibility inquiry, it has been determined that the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As noted above, the claims as a whole merely describe a method, computer system, and computer program product that generally “apply” the concepts discussed in Prong One above. (See MPEP 2106.05(f)(II).) In particular, applicant has recited the computing components at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. As the court stated in TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613 (Fed. Cir. 2016), merely invoking generic computing components or machinery that perform their functions in their ordinary capacity to facilitate the abstract idea amounts to mere instructions to implement the abstract idea within a computing environment and does not add significantly more to the abstract idea. Accordingly, these additional computer components do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, even when viewed as a whole, nothing in the claim adds significantly more (i.e., an inventive concept) to the abstract idea, and as a result the claim is not patent eligible. In addition, when taken as an ordered combination, the ordered combination adds nothing that is not already present as when the elements are taken individually.
There is no indication that the combination of elements integrates the abstract idea into a practical application. Their collective functions merely provide generic computer implementation. Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a practical application of the abstract idea or that, as an ordered combination, amount to significantly more than the abstract idea itself. For the reasons identified with respect to Step 2A Prong Two, claims 1, 12 and 19 fail to recite additional elements that amount to an inventive concept. For example, use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general-purpose computer or computer components after the fact to an abstract idea (e.g., a commercial or legal interaction or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more (see MPEP 2106.05(g)). In addition, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application (see MPEP 2106.05(h)). Dependent claims 2-8, 10-16, and 18-20 recite the same abstract idea as recited in the independent claims, and when evaluated under Step 2A Prong One are found to merely recite details that serve to narrow the same abstract idea recited in the independent claims, accompanied by the same generic computing elements or software as those addressed above in the discussion of the independent claims, which is not sufficient to amount to a practical application or add significantly more, or other additional elements that fail to amount to a practical application or add significantly more, as noted above.
Regarding claims 7-8 and 15-16, reciting the limitations “determine when an overall threshold of information gathering is met by the information gathering process being implemented for each individual; when the overall threshold of information gathering is met for the claim event, generate an AI prompt corresponding to the claim event; transmit, over the one or more networks, the AI prompt to a remote large language model (LLM) engine; receive, over the one or more networks, an LLM summary of the claim event; and generate a claim view interface to provide details of the claim process and the LLM summary,” the examiner views these additional elements as results-oriented steps given that there is no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result is currently present, such that this is viewed as equivalent to “apply it” for merely implementing the abstract idea using generic computing components (See Id.). The ordered combination of elements in the dependent claims (including the limitations inherited from the parent claim(s)) adds nothing that is not already present as when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology, and the collective functions merely provide a generic, high-level computer implementation. Therefore, whether taken individually or as an ordered combination, the claims are nonetheless rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. For more information see MPEP 2106.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. 1. Claim(s) 1-4, 9-12 and 17-20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Patt (US 2023/0115771). Regarding claims 1, 9, 17, Patt discloses a computing system (See Figure 1) comprising: a network communication interface, communicatively coupled to one or more networks (Fig.1, [0042] FIG. 1 is a block diagram illustrating an example computing system 100 implementing targeted event monitoring, alert, loss mitigation, and fraud detection techniques, in accordance with examples described herein. The computing system 100 can include a communication interface 115 that enables communications, over one or more networks 170, with computing devices 190 of users 197 of the various services described throughout the present disclosure.); one or more processors, communicatively coupled to the network communication interface (Fig. 2, processor 240.); and a memory, communicatively coupled to the one or more processors and storing instructions that, when executed by the one or more processors ([0037] These instructions can be stored in one or more memory resources of the computing device. A programmatically performed step may or may not be automatic.
See also Fig.2) a non-transitory computer readable medium storing instructions that, when executed by one or more processors of a computing device cause the computing system to (See Figure 1 and claim 8); a computer-implemented method of generating optimized reminder strategies, the method being performed by one or more processors and comprising (abstract): receive, over the one or more networks, information from first source relating to a claim event, (See Fig. 1 and network 170 receiving information from users 197 by means of devices 190 and service apps 196. [0107] In various implementations, the computing system 100 can perform corroboration techniques for insurance claims automatically. In doing so, the computing system 100 can connect with a plurality of data sources 175 to receive additional contextual information corresponding to the claim event (840).), the information identifying multiple individuals, other than the first source, to provide additional information for the claim event ([0108] In further implementations, the computing system 100 can identify one or more individuals that have additional contextual information corresponding to the claimant user 197 and/or the claim event (845). Such individuals may comprise witnesses to the claim event or witnesses to the damage to the user's home (846), neighbors of the user 197 that may have relevant knowledge of the user's character or who may have witnessed the damage to the user's property (847), or passengers or other victims in a vehicle incident (848).); implement an information gathering process for each of the multiple individuals (See Fig.
8B [0109] In various implementations, the computing system 100 can generate an interactive user interface for each of the identified individuals to acquire the additional contextual information (850)) by: initiating first contact, over the one or more networks, with a corresponding computing device of each of the multiple individuals to receive additional information pertaining to the claim event (See Fig. 8B [0109] In various implementations, the computing system 100 can generate an interactive user interface for each of the identified individuals to acquire the additional contextual information (850). As provided herein, the interactive user interface presented to the individuals may be similar to the information gathering features described with respect to the FNOL interface above, and may include a content flow based on the nature of the event and damage claimed by the claimant user 197 (852). The content flow can include a question flow that may ask the individual a series of questions regarding the claimant user 197 and/or the damage to the user's property or injuries sustained by the claimant user 197. For example, the question flow may ask whether the individual witnessed a vehicle incident involving the claimant user 197, and if so, may ask further questions regarding the nature of the incident and seek to corroborate, validate, or invalidate certain claims made by the claimant user 197. As such, the computing system 100 can dynamically adjust the content flow based on the information provided by the individual and/or the engagement of the individual with the content flow (854). 
); for at least some of the multiple individuals, after initiating first contact, transmitting a selectable link to the individual, each link being selectable on the corresponding computing device of the individual to authenticate the individual and to link the individual to one or more user interfaces (See [0141] wherein it is disclosed that the system, via the user interfaces (selectable links) requests the user identifiers in order to identify and authenticate the user by performing a lookup in a policy database to determine information about the user. ); in response to any one of the individuals selecting the link on the corresponding computing device and being authenticated, implementing, via a set of user interface features provided on the corresponding computing device a content flow that is customized for the individual based at least in part on information that the individual has previously been provided about the claim event ([0109] For example, the question flow may ask whether the individual witnessed a vehicle incident involving the claimant user 197, and if so, may ask further questions regarding the nature of the incident and seek to corroborate, validate, or invalidate certain claims made by the claimant user 197. As such, the computing system 100 can dynamically adjust the content flow based on the information provided by the individual and/or the engagement of the individual with the content flow (854). [0142] The damage assessment interface can further present the claimant user 197 with a contextual content flow based on the claimant user's vehicle and initial inputs regarding the vehicle incident (1014). For example, the content flow (e.g., see FIG. 9A through FIG. 9L, and FIG. 9M and FIG. 9N) can ask the claimant user 197 a series of questions regarding the vehicle (e.g., whether the vehicle still runs), whether any injuries occurred in the incident (FIG. 9O through FIG. 
9Q), and more specific information based on the damage inputs the claimant user 197 provides on the virtual representation of the vehicle. As a further example, if the claimant user 197 indicates damage to the front of the vehicle, the content flow can present one or more queries relevant to the front of the vehicle, such as whether the radiator is leaking, whether the headlight lenses are cracked or destroyed, whether the hood of the vehicle is still intact, whether the windshield is cracked, and the like.); determining a responsiveness of each of the multiple individuals with the respective customized content flow ( [0073] During any interactive session described herein, the live engagement monitor 140 can execute machine learning and/or artificial intelligence techniques to determine responsiveness factors for each individual to which a content flow is provided. The responsiveness factors may be generalized for users based on effective engagement techniques performed for a population of users 197, or may be individually determined based on the individual engagement of the users 197 and other parties relevant to a claim event. As such, the live engagement monitor 140 can determine the various methods of content presentation that provoke response and engagement with the content flows, and create a response profile of each individual or like subsets of individuals that the content generator 130 can utilize to tailor content flows to each individual. [0110] As additional contextual information is gathered, one or more issues may arise regarding contextual information initially provided by the claimant user 197, such as an inconsistency with regard to vehicle damage, property damage, lost property, or injury. In such a scenario, the computing system 100 may perform follow-up operations with the identified individuals to parse out the inconsistency, and either flag the inconsistency as potentially fraudulent or resolve the inconsistency through further investigation. 
In various implementations, the computing system 100 can further perform engagement monitoring techniques on the additional individuals to dynamically adjust the content flow presentations to either maximize engagement or maximize information gathering until the individual completes the content flow(s) (855). For example, the individual may be busy or disinterested in getting involved in the claim. In such an example, the individual may be provided with reminder notifications to complete the content flow(s) and/or incentives for completing a content flow (e.g., discounted insurance offers).); based on the determined responsiveness of each of the multiple individuals, executing a machine learning model to generate a corresponding reminder strategy for providing reminders to each of the multiple individuals to complete the respective customized content flow (See Fig. 8B [0073] During any interactive session described herein, the live engagement monitor 140 can execute machine learning and/or artificial intelligence techniques to determine responsiveness factors for each individual to which a content flow is provided. The responsiveness factors may be generalized for users based on effective engagement techniques performed for a population of users 197, or may be individually determined based on the individual engagement of the users 197 and other parties relevant to a claim event. As such, the live engagement monitor 140 can determine the various methods of content presentation that provoke response and engagement with the content flows, and create a response profile of each individual or like subsets of individuals that the content generator 130 can utilize to tailor content flows to each individual. [0109] In various implementations, the computing system 100 can generate an interactive user interface for each of the identified individuals to acquire the additional contextual information (850). 
As provided herein, the interactive user interface presented to the individuals may be similar to the information gathering features described with respect to the FNOL interface above, and may include a content flow based on the nature of the event and damage claimed by the claimant user 197 (852). The content flow can include a question flow that may ask the individual a series of questions regarding the claimant user 197 and/or the damage to the user's property or injuries sustained by the claimant user 197.); transmitting, over the one or more networks, a set of reminders to the corresponding computing device of each of the multiple individuals, in accordance with the corresponding optimized reminder strategy for each of the multiple individuals ([0110] In various implementations, the computing system 100 can further perform engagement monitoring techniques on the additional individuals to dynamically adjust the content flow presentations to either maximize engagement or maximize information gathering until the individual completes the content flow(s) (855). For example, the individual may be busy or disinterested in getting involved in the claim. In such an example, the individual may be provided with reminder notifications to complete the content flow(s) and/or incentives for completing a content flow (e.g., discounted insurance offers).); and wherein the corresponding reminder strategy for each of the multiple individuals is dynamically adapted for a communication type ([0074] Furthermore, the interactive content generator 130 can leverage the various engagement triggers—corresponding to response factors indicating whether individuals engaged with the content flows and the extent of engagement with the content flows—to alter presentations, the timing of notifications, the types of notifications (e.g., text reminders, app notifications, emails, etc.) 
in order to maximize contextual information received from the individuals with regard to a particular claim event.) and a cadence that is determined, by execution of the machine-learning model, to increase engagement from the individual, so as to expedite the information gathering process for each of the multiple individuals and the overall claim process ([0110] In various implementations, the computing system 100 can further perform engagement monitoring techniques on the additional individuals to dynamically adjust the content flow presentations to either maximize engagement or maximize information gathering until the individual completes the content flow(s) (855). For example, the individual may be busy or disinterested in getting involved in the claim. In such an example, the individual may be provided with reminder notifications to complete the content flow(s) and/or incentives for completing a content flow (e.g., discounted insurance offers).). Regarding claims 2, 10, 18, Patt discloses wherein implementing the content flow includes transmitting over the one or more networks, content data to the corresponding computing device of each individual of the multiple individuals, the content data being customized, based on the determined responsiveness of the individual, to induce the individual in providing information about the claim event (See Fig. 8B [0109] In various implementations, the computing system 100 can generate an interactive user interface for each of the identified individuals to acquire the additional contextual information (850).
As provided herein, the interactive user interface presented to the individuals may be similar to the information gathering features described with respect to the FNOL interface above, and may include a content flow based on the nature of the event and damage claimed by the claimant user 197 (852)… As such, the computing system 100 can dynamically adjust the content flow based on the information provided by the individual and/or the engagement of the individual with the content flow (854). ). Regarding claims 3, 11, 19, Patt discloses: wherein determining the responsiveness of each of the multiple individuals includes executing an engagement monitoring model to determine a set of response data for each of the multiple individuals (See Fig. 8B and [0109] As provided herein, the interactive user interface presented to the individuals may be similar to the information gathering features described with respect to the FNOL interface above, and may include a content flow based on the nature of the event and damage claimed by the claimant user 197 (852). The content flow can include a question flow that may ask the individual a series of questions regarding the claimant user 197 and/or the damage to the user's property or injuries sustained by the claimant user 197. For example, the question flow may ask whether the individual witnessed a vehicle incident involving the claimant user 197, and if so, may ask further questions regarding the nature of the incident and seek to corroborate, validate, or invalidate certain claims made by the claimant user 197. As such, the computing system 100 can dynamically adjust the content flow based on the information provided by the individual and/or the engagement of the individual with the content flow (854). 
[0110] As additional contextual information is gathered, one or more issues may arise regarding contextual information initially provided by the claimant user 197, such as an inconsistency with regard to vehicle damage, property damage, lost property, or injury. In such a scenario, the computing system 100 may perform follow-up operations with the identified individuals to parse out the inconsistency, and either flag the inconsistency as potentially fraudulent or resolve the inconsistency through further investigation. In various implementations, the computing system 100 can further perform engagement monitoring techniques on the additional individuals to dynamically adjust the content flow presentations to either maximize engagement or maximize information gathering until the individual completes the content flow(s) (855). For example, the individual may be busy or disinterested in getting involved in the claim. In such an example, the individual may be provided with reminder notifications to complete the content flow(s) and/or incentives for completing a content flow (e.g., discounted insurance offers).). Regarding claims 4, 12, 20, Patt discloses: wherein the executed instructions cause the computing system to adapt the customized content flow, implemented via the set of user interface features on the corresponding computing device of each of the multiple individuals, based on each of the set of response data, information received from the first source, and a type of the claim event (See Fig. 8B and [0109] As provided herein, the interactive user interface presented to the individuals may be similar to the information gathering features described with respect to the FNOL interface above, and may include a content flow based on the nature of the event and damage claimed by the claimant user 197 (852).
The content flow can include a question flow that may ask the individual a series of questions regarding the claimant user 197 and/or the damage to the user's property or injuries sustained by the claimant user 197. For example, the question flow may ask whether the individual witnessed a vehicle incident involving the claimant user 197, and if so, may ask further questions regarding the nature of the incident and seek to corroborate, validate, or invalidate certain claims made by the claimant user 197. As such, the computing system 100 can dynamically adjust the content flow based on the information provided by the individual and/or the engagement of the individual with the content flow (854).). Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 5-8, 13-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Patt (US 2023/0115771) in view of Henry (US 2025/0193132). 
Regarding claims 5, 13, Patt discloses: wherein implementing the content flow includes transmitting, over the one or more networks, content data to a computing device of each of the multiple individuals (See Fig. 8B and [0109] As provided herein, the interactive user interface presented to the individuals may be similar to the information gathering features described with respect to the FNOL interface above, and may include a content flow based on the nature of the event and damage claimed by the claimant user 197 (852). The content flow can include a question flow that may ask the individual a series of questions regarding the claimant user 197 and/or the damage to the user's property or injuries sustained by the claimant user 197. For example, the question flow may ask whether the individual witnessed a vehicle incident involving the claimant user 197, and if so, may ask further questions regarding the nature of the incident and seek to corroborate, validate, or invalidate certain claims made by the claimant user 197. As such, the computing system 100 can dynamically adjust the content flow based on the information provided by the individual and/or the engagement of the individual with the content flow (854). [0110] As additional contextual information is gathered, one or more issues may arise regarding contextual information initially provided by the claimant user 197, such as an inconsistency with regard to vehicle damage, property damage, lost property, or injury. In such a scenario, the computing system 100 may perform follow-up operations with the identified individuals to parse out the inconsistency, and either flag the inconsistency as potentially fraudulent or resolve the inconsistency through further investigation.
In various implementations, the computing system 100 can further perform engagement monitoring techniques on the additional individuals to dynamically adjust the content flow presentations to either maximize engagement or maximize information gathering until the individual completes the content flow(s) (855). For example, the individual may be busy or disinterested in getting involved in the claim. In such an example, the individual may be provided with reminder notifications to complete the content flow(s) and/or incentives for completing a content flow (e.g., discounted insurance offers).). Patt does not explicitly disclose: providing a chatbot with the customized user interface features to further facilitate the individual in responding to the content flow. However, Henry, which similarly teaches a system for facilitating interactions between users to file a claim, further teaches: providing a chatbot with the customized user interface features to further facilitate the individual in responding to the content flow (abstract: "The UMC session can involve multiple participants, including human users and software agents (e.g., conversational bots, virtual agents, digital assistants, and other dialog interfaces). The UMC platform can facilitate creating and interacting with a digital assistant providing unified multichannel communication.", [0034] "One example of a communication channel includes a chat communication channel. A chat refers to the process of exchanging messages between two or more users in real-time (or near real-time) over the Internet or other network. The users interacting during a chat can include human users and software agents (e.g., bots). A bot can be a web application that has a conversational interface. Users connect to a bot through one of the communication channels. Examples of bots include, but are not limited to, conversational bots, virtual agents, digital assistants, and other dialog interfaces." ).
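The combination the examiner describes for claims 5 and 13, Patt's engagement-monitored content flow (Fig. 8B, steps 852 through 855) fronted by a Henry-style chatbot, can be sketched in code. This is a minimal illustration only; the class, method, and question names below are hypothetical and are not drawn from either reference or from the claims as filed:

```python
from dataclasses import dataclass, field

@dataclass
class ContentFlow:
    """Hypothetical question flow presented to one individual (cf. Patt step 852)."""
    questions: list
    answers: dict = field(default_factory=dict)
    reminders_sent: int = 0

    def next_question(self):
        """Return the next unanswered question, or None when the flow is complete."""
        for q in self.questions:
            if q not in self.answers:
                return q
        return None

    def record_answer(self, question, answer):
        """Record a response and dynamically extend the flow (cf. step 854)."""
        self.answers[question] = answer
        # Illustrative branch: an affirmative witness answer triggers
        # follow-up questions seeking corroboration of the claimed damage.
        if question == "witnessed_incident" and answer:
            self.questions += ["incident_details", "corroborate_damage"]

def engagement_step(flow, engaged):
    """Adjust presentation based on monitored engagement (cf. step 855):
    remind a disengaged individual, then escalate to an incentive offer."""
    if flow.next_question() is None:
        return "complete"
    if not engaged:
        flow.reminders_sent += 1
        return "send_reminder" if flow.reminders_sent < 3 else "offer_incentive"
    return "present_question"
```

The reminder cap of three and the reminder-then-incentive ordering are assumptions for the sketch; Patt describes reminders and incentives (e.g., discounted insurance offers) without committing to a particular schedule.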
Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was filed to include a chatbot with the customized user interface features to further facilitate the individual in responding to the content flow, since such modification is a known improvement in the art that provides the known benefit of analyzing the data received with a chatbot and training a digital assistant using the generated AI prompt, including a type of generative AI that can understand and generate human-like text and inform the AI of the desired content, context, or task, allowing users to guide the AI's text generation to produce tailored responses, explanations, or creative content based on the provided prompt as disclosed by Henry on [005] and [0119]. Regarding claims 6, 14, Patt discloses wherein the computing system initiates first contact with each of the multiple individuals to achieve a network effect of cascading information gathering for the claim event ([0032] In further implementations, upon receiving a claim trigger, the system can implement an investigative and/or corroborative process to compile a complete contextual record of the claim event and the resultant loss, damage, and/or injury. In doing so, the system can determine other parties to the claim event or parties that may have relevant information related to the claimant (e.g., other victims, witnesses, passengers of a vehicle, neighbors, family members, coworkers, etc.). Upon identifying each of the relevant individuals, the system can utilize various contact methods to remotely engage with the individuals, including text messaging, email, social media messaging, snail mail, etc. In one aspect, the engagement method can include a link to a query interface corresponding to the claim event, which can enable the individual to interact with a question flow that provides a series of interactive questions that seek additional contextual information regarding the claim event.
[0110] As additional contextual information is gathered, one or more issues may arise regarding contextual information initially provided by the claimant user 197, such as an inconsistency with regard to vehicle damage, property damage, lost property, or injury. In such a scenario, the computing system 100 may perform follow-up operations with the identified individuals to parse out the inconsistency, and either flag the inconsistency as potentially fraudulent or resolve the inconsistency through further investigation. In various implementations, the computing system 100 can further perform engagement monitoring techniques on the additional individuals to dynamically adjust the content flow presentations to either maximize engagement or maximize information gathering until the individual completes the content flow(s) (855). For example, the individual may be busy or disinterested in getting involved in the claim. In such an example, the individual may be provided with reminder notifications to complete the content flow(s) and/or incentives for completing a content flow (e.g., discounted insurance offers). [0165] Based on the input data provided by each party (e.g., via the accident reconstruction interface and contextual content flows), the computing system 100 can generate a simulation of the vehicle accident (1215). ). Regarding claim 7, Patt does not explicitly disclose: wherein the executed instructions further cause the computing system to: determine when an overall threshold of information gathering is met by the information gathering process being implemented for each individual; when the overall threshold of information gathering is met for the claim event, generate an AI prompt corresponding to the claim event; transmit, over the one or more networks, the AI prompt to a remote large language model (LLM) engine; and receive, over the one or more networks, an LLM summary of the claim event.
However, Henry, which similarly teaches a system for facilitating interactions between users to file a claim, further teaches: determine when an overall threshold of information gathering is met by the information gathering process being implemented for each individual; when the overall threshold of information gathering is met for the claim event, generate an AI prompt corresponding to the claim event ([0119] The AI enrichment service 460 can be a generative AI service and include a large language model (LLM). An LLM is a type of generative AI that can understand and generate human-like text, while multi-modal generative AI extends this capability to generate a variety of media types, including text, images, audio, video, etc., allowing for more diverse and versatile content creation. In generative AI, such as LLMs, a prompt serves as an input or instruction that informs the AI of the desired content, context, or task, allowing users to guide the AI's text generation to produce tailored responses, explanations, or creative content based on the provided prompt. [0321] The agent service can (1640) generate the digital assistant by processing the artifact and the information associated with the set of parameters. During the generation of the digital assistant, the agent service can communicate (1642) with an AI service to analyze the artifact and the information associated with the set of parameters. The agent service can generate (1644) an AI prompt based on the analysis of the artifact and the information associated with the set of parameters; and train (1646) the digital assistant using the generated AI prompt. ); transmit, over the one or more networks, the AI prompt to a remote large language model (LLM) engine; and receive, over the one or more networks, an LLM summary of the claim event ([0118] The AI enrichment service 460 can include, for example, cloud-based AI services and can provide AI enrichments to a UMC thread.
The AI enrichment service 460 can provide machine learning capabilities for analyzing text for emotional sentiment, providing summarization and insights, or analyzing images to recognize objects or faces. [0119] The AI enrichment service 460 can be a generative AI service and include a large language model (LLM). An LLM is a type of generative AI that can understand and generate human-like text, while multi-modal generative AI extends this capability to generate a variety of media types, including text, images, audio, video, etc., allowing for more diverse and versatile content creation. In generative AI, such as LLMs, a prompt serves as an input or instruction that informs the AI of the desired content, context, or task, allowing users to guide the AI's text generation to produce tailored responses, explanations, or creative content based on the provided prompt.). Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was filed to include: determine when an overall threshold of information gathering is met by the information gathering process being implemented for each individual; when the overall threshold of information gathering is met for the claim event, generate an AI prompt corresponding to the claim event; transmit, over the one or more networks, the AI prompt to a remote large language model (LLM) engine; and receive, over the one or more networks, an LLM summary of the claim event, since such modification is a known improvement in the art that provides the known benefit of analyzing the data entered with AI and training a digital assistant using the generated AI prompt, including a type of generative AI that can understand and generate human-like text and inform the AI of the desired content, context, or task, allowing users to guide the AI's text generation to produce tailored responses, explanations, or creative content based on the provided prompt as disclosed by Henry on [005] and [0119].
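The claim-7 pipeline at issue, checking an overall information-gathering threshold, generating an AI prompt for the claim event, and sending it to a remote LLM engine for a summary, can be sketched as follows. The threshold value, the response-data shape, and the function names are illustrative assumptions, and the LLM engine is passed in as a callable rather than a real remote service:

```python
def gathering_complete(responses_by_individual, threshold=0.8):
    """True when every individual's flow meets the (assumed) completion threshold."""
    completion = [r["fraction_answered"] for r in responses_by_individual.values()]
    return bool(completion) and min(completion) >= threshold

def build_claim_prompt(claim_event, responses_by_individual):
    """Generate an AI prompt corresponding to the claim event from gathered statements."""
    lines = [f"Summarize the following {claim_event['type']} claim:"]
    for name, r in responses_by_individual.items():
        lines.append(f"- {name}: {r['statement']}")
    return "\n".join(lines)

def summarize_claim(claim_event, responses, llm_engine):
    """Once the overall threshold is met, transmit the prompt to the LLM engine
    and return its summary; otherwise keep gathering (returns None)."""
    if not gathering_complete(responses):
        return None
    prompt = build_claim_prompt(claim_event, responses)
    return llm_engine(prompt)  # e.g., a network call to a hosted model
```

Treating the engine as an injected callable keeps the sketch testable without a network; in a real system this would be the transmit/receive step over the one or more networks recited in the claim.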
Regarding claim 8, Henry further teaches: wherein the executed instructions further cause the computing system to: generate a claim view interface to provide details of the claim process and the LLM summary ([0119] The AI enrichment service 460 can be a generative AI service and include a large language model (LLM). An LLM is a type of generative AI that can understand and generate human-like text, while multi-modal generative AI extends this capability to generate a variety of media types, including text, images, audio, video, etc., allowing for more diverse and versatile content creation. In generative AI, such as LLMs, a prompt serves as an input or instruction that informs the AI of the desired content, context, or task, allowing users to guide the AI's text generation to produce tailored responses, explanations, or creative content based on the provided prompt. [0120] In any of the examples herein, an LLM can take the form of an AI model that is designed to understand and generate human language. Such models typically leverage deep learning techniques such as transformer-based architectures to process language with a very large number (e.g., billions) of parameters. Examples include the Generative Pre-trained Transformer (GPT) developed by OpenAI, Bidirectional Encoder Representations from Transformers (BERT) by Google, A Robustly Optimized BERT Pretraining Approach developed by Facebook AI, Megatron-LM of NVIDIA, or the like. Pretrained models are available from a variety of sources. [0121] In any of the examples herein, prompts can be provided to LLMs to generate responses. Prompts in LLMs can be initial input instructions that guide model behavior. Prompts can be textual cues, questions, or statements that users provide to elicit desired responses from the LLMs. Prompts can act as primers for the model's generative process. Sources of prompts can include user-generated queries, predefined templates, or system-generated suggestions.
Technically, prompts are tokenized and embedded into the model's input sequence, serving as conditioning signals for subsequent text generation. Users can experiment with prompt variations to manipulate output, using techniques like prefixing, temperature control, top-K sampling, etc. These prompts, sourced from diverse inputs and tailored strategies, enable users to influence LLM-generated content by shaping the underlying context and guiding the neural network's language generation. For example, prompts can include instructions and/or examples to encourage the LLMs to provide results in a desired style and/or format. ). Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was filed to include generating a claim view interface to provide details of the claim process and the LLM summary, since such modification is a known improvement in the art that provides the known benefit of analyzing the data entered with AI and training a digital assistant using the generated AI prompt, including a type of generative AI that can understand and generate human-like text and inform the AI of the desired content, context, or task, allowing users to guide the AI's text generation to produce tailored responses, explanations, or creative content based on the provided prompt as disclosed by Henry on [005] and [0119]. Response to Arguments Applicant's arguments filed 09/17/2025 have been fully considered but they are not persuasive. Applicant argues on page 13 "the claims are directed to providing a solution to a technical problem that is identified in the Specification. Specifically, as noted in the Specification, content flow adaption techniques and customized individual reminder strategies serve an objective of expediting the information gathering process. See Specification at Para. [0032]. And as stated on Para.
[0038], "examples described herein achieve a technical solution of optimizing information gathering processes, particularly for insurance claims and claim processing for insurance policy providers, in furtherance of a practical application of reducing time from an initial incident to the final step in the claim process (e.g., a settlement or payout). The claims recite the solution and the steps used to provide the solution ("implement an information gathering process so as to expedite the information gather process for each of the multiple individuals and the overall claim process.)" Among other benefits, the technical solution achieved by reducing claim processing time significantly reduce computing time and resources, and is a significant improvement for computer systems that perform information gathering in context of evaluating claims." Examiner respectfully disagrees: the claims provide not a technical solution but rather a business solution, providing a user with reminders to expedite the data gathering and using the technology merely as a tool to perform the abstract idea, wherein such a process could be performed in the human mind, perhaps with the aid of pen and paper, and therefore constitutes an abstract idea. Paragraphs [0032] and [0038] provide disclosure related to real time assistance to call center representatives and an automated negotiator in order to expedite the flow of data, wherein such processes could be performed in the human mind, perhaps with the aid of pen and paper, and therefore constitute an abstract idea. Applicant argues on pages 13-14 "Moreover, the claims recite the use of "a set of user interface features" in implementing a content flow that is customized for the individual. This facet of the information gathering process is similar to the situation in CORE WIRELESS LICENSING V. LG ELECS., INC (Fed. Cir.
2018), where the Federal Circuit applied ENFISH and similar cases to hold that "an improved user-interface for an electronic device" may also not be abstract under Alice. Similarly, in TRADING TECHNOLOGIES INTERNATIONAL, INC. V. CQG, INC. (Fed. Cir. 2017), the Federal Circuit found that a specific, structured GUI that improved speed, accuracy, and usability in trading platforms was "directed to a specific improvement to the way computers operate." The aforementioned features of Claim 1 provide a detailed, technical process where the operations performed to determine accurate information about a vehicle incident. As with TRADING TECHNOLOGIES, the aforementioned features of Claim 1 improve speed, accuracy and usability as compared to conventional computer systems which receive information from the user in connection with vehicle incidents." The Examiner respectfully disagrees. In contrast with Core Wireless and Trading Technologies, the claimed invention at hand does not modify the user interface elements but rather the information provided to the user, based on user input. The claims at hand function as a typical user interface that, based on user input or selection, provides the next set of data elements. Furthermore, Applicant has failed to provide an articulated reasoning as to how the claims at hand improve speed, accuracy and usability, as argued. Applicant argues on page 14 "The "additional elements" integrate the alleged abstract idea into a practical application, and as such, confer subject matter eligibility onto the claims. Further, under Step 2B, subject matter eligibility can be found by "consider[ing] the elements of each claim both individually and 'as an ordered combination' to determine whether the additional elements 'transform the nature of the claim' into a patent-eligible application." ALICE, 573 U.S. at 217.
Given the specificity and technological solution provided by the claims, the claims should also be deemed subject matter eligible under Step 2B." Examiner asserts that the Applicant has failed to provide an articulated reasoning as to how, in this application, the "additional elements" are an "unconventional and non-generic combination of known elements," or how, "when viewed as an ordered combination, the claim limitations amount to significantly more than the alleged abstract idea". Applicant simply and generically states that the additional elements integrate the abstract idea into a practical application. However, the Examiner views the combination of additional elements as adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) - and as generally linking the use of the judicial exception to a particular technological environment or field of use - see MPEP 2106.05(h). There is no improvement to the functioning of a computer, or to any other technology or technical field - see MPEP 2106.05(a). By merely providing a technology-based solution to a claim event process, the system is simply using the technology as a tool to perform the abstract process. In regard to the 35 USC 102 and 35 USC 103 rejections, Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. HENRY, US 20250193144, UNIFIED MULTICHANNEL COMMUNICATION PLATFORM. Systems and techniques for facilitating unified multichannel communication are provided.
The described systems and techniques improve communication technology through an encompassing, channel-agnostic approach which unifies disparate communication modes into a singular coherent thread. A unified multichannel communication (“UMC”) service of a UMC platform can initialize a UMC thread for a UMC session, where the UMC thread can be used to facilitate unified multichannel communication. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARIA C SANTOS-DIAZ whose telephone number is (571)272-6532. The examiner can normally be reached Monday-Friday 8:00AM-5:00PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sarah Monfeldt can be reached at 571-270-1833. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MARIA C SANTOS-DIAZ/Primary Examiner, Art Unit 3629

Prosecution Timeline

Apr 23, 2025
Application Filed
Jun 14, 2025
Non-Final Rejection — §101, §102, §103
Sep 17, 2025
Response Filed
Oct 23, 2025
Final Rejection — §101, §102, §103
Jan 29, 2026
Request for Continued Examination
Feb 04, 2026
Response after Non-Final Action
Apr 01, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602633
DATA CENTER GUIDE CREATION AND COST ESTIMATION FOR AUGMENTED REALITY HEADSETS
2y 5m to grant Granted Apr 14, 2026
Patent 12602632
WORK CHAT ROOM-BASED TASK MANAGEMENT APPARATUS AND METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12602628
EVALUATING ACTION PLANS FOR OPTIMIZING SUSTAINABILITY FACTORS OF AN ENTERPRISE
2y 5m to grant Granted Apr 14, 2026
Patent 12572882
SYSTEM OF AND METHOD FOR OPTIMIZING SCHEDULE DESIGN VIA COLLABORATIVE AGREEMENT FACILITATION
2y 5m to grant Granted Mar 10, 2026
Patent 12555082
SMART WASTING STATION FOR MEDICATIONS
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
33%
Grant Probability
63%
With Interview (+30.0%)
4y 3m
Median Time to Grant
High
PTA Risk
Based on 291 resolved cases by this examiner. Grant probability derived from career allow rate.
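The "With Interview" figure appears to treat the interview lift as additive percentage points on top of the career allow rate (33% + 30.0 points = 63%). A minimal sketch under that assumption follows; the function name and the cap at 100% are illustrative choices, not part of the dashboard's documented methodology:

```python
def adjusted_grant_probability(allow_rate_pct, interview_lift_pts):
    """Add the interview lift (in percentage points) to the base allow rate,
    capping the result at 100% so the estimate stays a valid probability."""
    return min(allow_rate_pct + interview_lift_pts, 100.0)

# Examiner's career allow rate: 33%; observed interview lift: +30.0 points.
print(adjusted_grant_probability(33.0, 30.0))  # 63.0, matching the dashboard
```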
