DETAILED ACTION
This action is responsive to the application filed on 01/02/2026. Claims 1-20 are pending and have been examined.
This action is Non-final.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/02/2026 has been entered.
Response to Arguments
Argument 1: The applicant argues that the rejection under 35 U.S.C. 103 is improper, as set forth on page 3 of the remarks, because the applied references fail to disclose or suggest several key limitations of the amended claims. Specifically, the applicant contends that Florey does not teach identifying, from vehicle incident data, at least a second person of multiple persons, but instead relies on a predefined set of individuals provided during claim initiation, and that Jiang and Han do not remedy this deficiency. The applicant further argues that the references do not disclose generating a collision reconstruction interface based at least in part on an LLM summarization, asserting that Han’s reconstruction is not based on LLM output. Additionally, the applicant contends that the references fail to disclose providing the collision reconstruction interface to a computing device associated with a policy provider during a real-time call session in which a claim is being processed. Accordingly, the applicant asserts that the applied references, individually and in combination, do not teach or render obvious the claimed invention.
Examiner Response to Argument 1: The examiner has considered the arguments set forth above; however, they are not persuasive. The applicant argues that the applied references fail to teach (i) identifying, from vehicle incident data, at least a second person and contacting that person based on such identification, and (ii) generating a collision reconstruction interface based at least in part on an LLM summarization and providing that interface during a real-time claim processing session. Florey, however, expressly teaches analyzing incident data to determine additional persons involved in an accident, for example, “play back videos of the scene… and try to determine if there were any witnesses… or participants” (Florey, page 19, col. 14, lines 44-49). The examiner interprets this as identifying, from vehicle incident data, at least a second person because both involve using collected incident data to determine additional individuals associated with the incident. Florey further teaches contacting such identified persons, for example, “if there are witnesses… instruct the insured… to approach one of the witnesses” (Florey, page 19, col. 14, lines 50-55), which is directed to making contact with the second person to obtain incident data. Florey thereby teaches the claimed cascading process of identifying and then contacting additional persons.
With respect to the LLM-based reconstruction and real-time session, Jiang teaches generating outputs from an LLM based on incident data, e.g., “the constructed prompts are directly fed into the LLM… [and] the API returns a raw query… based on the prompt input” (Jiang, page 6, sec 4.3.2), which is directed to an LLM summarization of incident data. Han teaches generating a reconstruction interface, e.g., “reconstructing an actual image-based 3D traffic accident scene” (Han, [0097]), which is directed to a collision reconstruction interface. The rejection properly relies on the combination: it would have been obvious to use Jiang’s LLM-generated outputs as input to Han’s reconstruction since both process and present incident data, representing a predictable integration of processed data into a visualization interface. Further, Florey teaches providing accident-related information during a real-time session, e.g., initiating “a video conference with the ECRP” (Florey, page 16, col. 7-8, lines 6-12, 24-28), which is directed to a real-time claim processing session with a policy provider. Accordingly, Florey, Jiang, and Han, in combination, teach or render obvious the claimed limitations, and the applicant’s arguments are not persuasive.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not
identically disclosed as set forth in section 102, if the differences between the claimed invention and the
prior art are such that the claimed invention as a whole would have been obvious before the effective filing
date of the claimed invention to a person having ordinary skill in the art to which the claimed invention
pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are
summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over US10630732B2, by Florey et al. (referred to herein as Florey), in view of NPL reference “Xpert: Empowering Incident Management with Query Recommendations via Large Language Models” by Jiang et al. (referred to herein as Jiang), further in view of US20150029308A1, by Han et al. (referred to herein as Han).
Regarding claim 1, Florey teaches:
A computing system comprising: a network communication interface, communicatively coupled to a data network; one or more processors, communicatively coupled to the network communication interface; and a memory, communicatively coupled to the one or more processors, comprising: ([Florey, page 22, col. 19-20, lines 63-67, 4-7] “a remote video hosting service capable of providing live video to entities communicating with it…a communication program adapted to provide the audio sounds and video feeds to the remote video hosting server” and “a control unit adapted to run a predetermined application…requests that a video conference immediately be set up”, wherein the examiner interprets the cited service that enables live network communication, the program that sends media to a remote server, and the control unit that runs applications and initiates a video session to be the same as a system having a network interface to a data network, processing hardware coupled to that interface, and memory storing executable instructions because they are both directed to network-connected processing components executing stored software to establish and conduct communications.)
a dynamic content generator, configured to, when executed by the one or more processors, implement a cascading information gathering process to receive vehicle incident data corresponding to an incident from multiple persons, including a first person and a second person, by (i) receiving vehicle incident data from a first person of the multiple persons; ([Florey, page 19, col 14, lines 33-43], “After the appropriate emergency response personnel have been requested, information at the accident scene should be acquired before it is changed or lost. Therefore, it would be helpful to video the accident scene 3019 to acquire the positions of the vehicles, and their orientations, as indicated above. Also, witnesses 3003 typically leave the scene 3019 quickly, and if they do not leave contact information. Later it is very difficult to find and contact them. In step 3933, the ECRU 300 prompts the ECRP 3013 to ask the insured 3001 if there were any witnesses.” wherein the examiner interprets "ECRU 300 prompts the ECRP 3013 to ask the insured 3001 if there were any witnesses" combined with "information at the accident scene should be acquired" and "video the accident scene 3019 to acquire the positions of the vehicles, and their orientations" to be the same as a dynamic content generator, configured to, when executed by the one or more processors, implement a cascading information gathering process to receive vehicle incident data corresponding to an incident from multiple persons because both are directed to automated systems that execute sequential information collection procedures at accident scenes by first gathering data from an initial person (the insured) and then prompting for additional information from other persons (witnesses), thereby implementing a cascading multi-person data acquisition process. 
The examiner further interprets "the ECRU 300 prompts the ECRP 3013 to ask the insured 3001" to be the same as receiving vehicle incident data from a first person of the multiple persons because both are directed to initially acquiring accident information from a primary individual at the incident scene before proceeding to gather information from additional witnesses or other persons involved).
(ii) identifying, from the vehicle incident data, at least the second person of the multiple persons; ([Florey, page 19, col 14, lines 44-49], “The ECRP 3013 may cause the ERCU 3300 to play back videos of the scene 3019 and try to determine if there were any witnesses 3003, or participants 3005 which may be the driver of the other vehicle 3017, or its passengers. The participants 3005 may also include any passengers in the insured's vehicle 3015.”, and [Florey, page 19, col 14, lines 50-55], “If there are witnesses, then in step 3935, the ECRU 300 directs the ECRP 3013 to instruct the insured 3001 to approach one of the witnesses 3003 and offer the insured's computing device 3200 to the witness 3003.”, wherein the examiner interprets "play back videos of the scene 3019 and try to determine if there were any witnesses 3003, or participants 3005" combined with "If there are witnesses, then in step 3935, the ECRU 300 directs the ECRP 3013 to instruct the insured 3001 to approach one of the witnesses 3003" to be the same as identifying, from the vehicle incident data, at least the second person of the multiple persons because both are directed to analyzing previously collected incident information (video data from the accident scene) to detect and identify additional persons beyond the initial reporting individual, and then using that identification to facilitate subsequent data collection from those identified persons, thereby implementing the cascading information gathering process where identification of subsequent persons is derived from the vehicle incident data already captured).
(iii) making contact, over the network communication interface, with the second person through a computing device of the second person, and communicate a vehicle incident request for vehicle incident data from a perspective of the second person to the computing device of the second person; and ([Florey, col 5, lines 4-9], “In addition to providing video communications between strangers with little or no setup, the current system is well suited for use after an accident. Due to its simplicity, it can quickly and easily provide a video link between a person who has gone through a traumatic event, such as an Accident, and is currently disoriented and upset.”, and [Florey, col 14-15, lines 52-67, 1-5], “If the witness 3003 accepts the insured's computing device 3200, the ECRP 3013 is directed to notify the witness 3003 that (s)he is employed by the insurance company and this video is being recorded, and requests that the witness 3003 simply indicate what (s)he saw. If the witness 3003 agrees, the video feed of the witness 3003 is recorded. The ECRP 3013, prompted by the ECRU 3300 interacts with the witness 3003 to acquire information. When finished, in step 3939 it is determined if there are other witnesses 3003 which have not yet had a chance to indicate what they have seen. When all witnesses 3003 have been asked, the process shown in steps 3941-3947 is repeated for the participant in the accident. Finally, in step 3949, the ECRU 3300 prompts ECRP 3013 to request a video statement from insured 3001”, wherein the examiner interprets "provide a video link" and "the witness 3003 accepts the insured's computing device 3200" to be the same as making contact, over the network communication interface, with the second person through a computing device of the second person because both are directed to establishing network-based communication connections between the system and additional persons at the accident scene through their computing devices. 
The examiner further interprets "the ECRP 3013 is directed to notify the witness 3003 that (s)he is employed by the insurance company and this video is being recorded, and requests that the witness 3003 simply indicate what (s)he saw" to be the same as communicate a vehicle incident request for vehicle incident data from a perspective of the second person to the computing device of the second person because both are directed to transmitting requests through the computing device interface to solicit the witness's personal observations and account of the accident incident from their individual perspective).
(iv) receiving, based on the vehicle incident request communicated to the second person, vehicle incident data from the second person; ([Florey, col 15, lines 19-28] “In step 3955 it is determined that if there are other video feed required or if there are people on the scene that the ECRP 3013 would like to talk with (“yes”), then step 3953 will be repeated. In step 3955 if it is believed that no required information that is currently available has been omitted (“no”), then the insured 3001 is notified that an Adjuster of the carrier insurance company will use the acquired information and videos and follow up with the insured 3001 within a few days.” wherein the examiner interprets "the Adjuster of the carrier insurance company will use the acquired information and videos" to be the same as receiving, based on the vehicle incident request communicated to the second person, vehicle incident data from the second person because both are directed to collecting and obtaining accident data from persons at the scene in response to requests made to those persons, with the "acquired information and videos" representing the vehicle incident data that has been received from the multiple persons (including witnesses and participants) following the requests communicated to them during the cascading information gathering process described in the preceding steps).
… (ii) provide the collision reconstruction interface to a computing device associated with a policy provider during a real-time call session in which a claim relating to the incident is processed. ([Florey, page 16, col. 7-8, lines 6-12, 24-28] “In step 807, user 1 connects through the selected browser running on computing device 200, to a website linking the user’s computing device 200 to a director 500, which may be a web server. In step 809, user 1 provides input indicating that the user would like to have a video conference with the ECRP 13…In this embodiment, director 1500 sends a link to computing device 200, allowing computing device 200 to directly connect to video conferencing platform 3, instead of connecting through director 1500.”, wherein the examiner interprets "director 1500 sends a link to computing device 200, allowing computing device 200 to directly connect to video conferencing platform 3" to be the same as provide the collision reconstruction interface to a computing device because both are directed to delivering an interactive interface platform to a user's computing device that enables visualization and interaction with accident-related information. The examiner further interprets "have a video conference with the ECRP 13" combined with the previously cited accident information gathering process to be the same as to a computing device associated with a policy provider during a real-time call session in which a claim relating to the incident is processed because both are directed to establishing live communication sessions between accident participants and insurance company personnel (ECRP being Emergency Claims Response Personnel employed by the insurance carrier) wherein accident claims are actively handled and processed through real-time video conferencing interactions).
Florey does not teach an AI prompt generator, configured to, when executed by the one or more processors, based on the vehicle incident data received during the cascading information gathering process, (i) configure an artificial intelligence (AI) prompt having at least a selected portion of the vehicle incident data received during the cascading information gathering process; (ii) transmit, over the network communication interface, the AI prompt to a computing system executing a large language model (LLM); and (iii) receive, over the network communication interface, from the computing system, an LLM summarization of the vehicle incident data; and a collision reconstruction engine to, when executed by the one or more processors, (i) generate, based at least in part on the LLM summarization, a collision reconstruction interface.
Jiang teaches an AI prompt generator, configured to, when executed by the one or more processors, based on the vehicle incident data received during the cascading information gathering process, ([Jiang, page 6, sec 4.3.1-2] “prompt construction…construct the prompt sequence for the LLM” and “The constructed prompts are directly fed into the LLM to facilitate KQL query recommendation through the OpenAI API”, wherein the examiner interprets constructing a prompt sequence and feeding the constructed prompts to the LLM via an API to be the same as generating an AI prompt that includes selected incident data because they are both directed to constructing a prompt from incident context and submitting it to an LLM endpoint).
(i) configure an artificial intelligence (AI) prompt having at least a selected portion of the vehicle incident data received during the cascading information gathering process; ([Jiang, page 6, sec 4.2], “The incident data processor gathers comprehensive information from the incident ticket and performs appropriate pre-processing to optimize the utilization of this data, as elaborated below…. 4.2.1 Information Collection. To equip the LLM with sufficient information for effective query recommendation, Xpert employs a comprehensive approach in collecting rich incident data from various resources within the incident management system [38]. These resources encompass: (i) Metadata, which entails fundamental incident details such as the creation time, the service which triggers the incident, and other essential information. (ii) Title of the incident, which may be system-generated or written by an engineer. (iii) Summary of the incident, serving as a high-level overview either generated by the monitoring system or written by an engineer. (iv) Discussion pertaining to the incident, encompassing system logs related to the incident as well as discussions among the engineers.” wherein the examiner interprets "The incident data processor gathers comprehensive information from the incident ticket and performs appropriate pre-processing to optimize the utilization of this data" combined with "To equip the LLM with sufficient information for effective query recommendation" to be the same as configure an artificial intelligence (AI) prompt having at least a selected portion of the vehicle incident data because both are directed to preparing and structuring incident information collected from multiple sources for input into an artificial intelligence system (LLM being a large language model), where the pre-processing and collection of selected data elements optimizes the AI system's ability to process and utilize the incident information. 
The examiner further interprets "collecting rich incident data from various resources" including "Metadata," "Title of the incident," "Summary of the incident," and "Discussion pertaining to the incident" to be the same as vehicle incident data received during the cascading information gathering process because both are directed to systematically acquiring incident information from multiple sources and participants over time, where different types of data (metadata, descriptions, discussions) are gathered sequentially from various contributors in a comprehensive multi-stage collection process).
(ii) transmit, over the network communication interface, the AI prompt to a computing system executing a large language model (LLM); and ([Jiang, page 6, sec 4.3.2] “The constructed prompts are directly fed into the LLM to facilitate KQL query recommendation through the OpenAI API. The API returns a raw query that has been generated based on the prompt input.”, wherein the examiner interprets “the constructed prompts are directly fed into the LLM… through the OpenAI API” to be directed to transmitting an AI prompt to a computing system executing a large language model (LLM), because both involve sending a constructed prompt over a network interface to a remote system for processing by an LLM.)
(iii) receive, over the network communication interface, from the computing system, an LLM summarization of the vehicle incident data; and ([Jiang, page 6, sec 4.3.2] “The constructed prompts are directly fed into the LLM to facilitate KQL query recommendation through the OpenAI API. The API returns a raw query that has been generated based on the prompt input.”, wherein the examiner interprets the API returning a generated output from the LLM in response to the sent prompt to be the same as receiving, over the network communication interface, from the computing system, an LLM summarization of the vehicle incident data because they are both directed to obtaining an LLM-generated result from a remote service after transmitting an input prompt.)
Jiang does not teach a collision reconstruction engine to, when executed by the one or more processors, (i) generate, based at least in part on the LLM summarization, a collision reconstruction interface.
Han teaches a collision reconstruction engine to, when executed by the one or more processors, (i) generate, based at least in part on the LLM summarization, a collision reconstruction interface; ([Han, [0097] “reconstructing an actual image-based 3D traffic accident scene.” and [0014] “a reproduction unit for reproducing the scene of the traffic accident…the scene of the traffic accident being reproduced so that the 3D moving object is moved.”, wherein the examiner interprets reconstructing and reproducing a 3D traffic accident scene with moving objects to be the same as generating a collision reconstruction interface because they are both directed to producing an interface that presents a reconstruction of the collision scene for analysis and review.)
Florey, Jiang, Han, and the instant application are analogous art because they are all directed to computing systems for claim or incident processing that gather incident data from multiple sources.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the computing system for real-time claim information gathering and handling via multiple persons disclosed by Florey to include the LLM prompt construction disclosed by Jiang. One would be motivated to do so to efficiently produce model-generated outputs from collected incident data for downstream claim handling, as suggested by Jiang ([Jiang, page 6, sec 4.3.2] “The constructed prompts are directly fed into the LLM to facilitate KQL query recommendation through the OpenAI API”). It would have been further obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the computing system for real-time claim information gathering and handling via multiple persons disclosed by Florey to include the 3D scene reconstruction process disclosed by Han. One would be motivated to do so to effectively present a collision reconstruction interface that aids assessment and decision-making during live claim processing, as suggested by Han ([Han, [0097]] “reconstructing an actual image-based 3D traffic accident scene”). Claims 8 and 15 are analogous to claim 1 (differing only in claim type), and thus the same rejection can be applied as set forth above.
Regarding claim 2, Florey, Jiang, and Han teach The computing system of claim 1 (see rejection of claim 1).
Han further teaches wherein the collision reconstruction engine generates the collision reconstruction interface to comprise a corpus of facts based on an entirety of the vehicle incident data. ([Han, [0014]] “an information collection unit for receiving images and sounds of a scene of a traffic accident…constructing a 3D accident environment”; [Han, [0057]] “combines the detected motions of the moving object into the 3D accident environment”; [Han, Abstract] “reproduces the scene of the traffic accident at corresponding time based on results of combination in response to a time-based playback request, the scene of the traffic accident being reproduced so that the 3D moving object is moved.”, wherein the examiner interprets receiving images and sounds from the accident, constructing a 3D environment, combining motions, and reproducing the scene over time to be the same as generating a collision reconstruction interface that comprises a corpus of facts based on an entirety of the vehicle incident data because they are both directed to assembling and presenting, within a single interface, the full set of accident information (multi-source imagery, object states, and temporal motion) that together represents the whole incident for analysis.)
Florey, Jiang, Han, and the instant application are analogous art because they are all directed to generating and presenting a collision-focused reconstruction interface that aggregates facts from vehicle-incident data for use in live insurance claim handling.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the computing system of claim 1 disclosed by Florey, Jiang, and Han to include the image scene reconstruction technique disclosed by Han. One would be motivated to do so to effectively provide a collision reconstruction interface that aggregates and displays the complete incident facts for analysis during claim processing, as suggested by Han (Han, [0097], “reconstructing an actual image-based 3D traffic accident scene”; Han, Abstract, “the scene of the traffic accident being reproduced so that the 3D moving object is moved.”). Claims 9 and 16 are analogous to claim 2, and thus the same rejection can be applied as set forth above.
Regarding claim 3, Florey, Jiang, and Han teach The computing system of claim 1 (see rejection of claim 1).
Jiang further teaches wherein the computing system performs pre-processing on the vehicle incident data to configure the AI prompt. ([Jiang, page 6, sec 4.2.2] “Upon collecting all information from the incident tickets, the data is concatenated into a text sequence. Xpert performs two pre-processing steps on the incident context: (i) Repetitive information that appears multiple times in the context is removed. (ii) If the incident context exceeds a certain token threshold, the sample is clipped to avoid over-length. This is necessary as the input of the LLM is subject to token limitations. This pre-processing ensures improved information utilization in the data while adhering to the LLM’s input constraints on token length.”, wherein the examiner interprets concatenating the collected incident information into a text sequence and performing the two pre-processing steps on the incident context to be the same as performing pre-processing on the vehicle incident data to configure the AI prompt because both are directed to preparing collected incident data to satisfy the LLM’s input constraints before the prompt is constructed.)
Florey, Jiang, Han, and the instant application are analogous art because they are all directed to pre-processing incident data before it is used to generate an AI prompt.
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the computing system of claim 1 disclosed by Florey, Jiang, and Han to include the pre-processing that “ensures improved information utilization in the data while adhering to the LLM’s input constraints on token length” disclosed by Jiang. One would be motivated to do so to effectively perform pre-processing steps on incident contexts using the Xpert framework, as suggested by Jiang (see the [Jiang, page 6, sec 4.2.2] quotes above). Claims 10 and 17 are analogous to claim 3; therefore the mapping provided above applies to claims 10 and 17 as well.
Regarding claim 4, Florey, Jiang, and Han teach The computing system of claim 3 (see rejection of claim 3).
Jiang further teaches wherein the pre-processing comprises automatically editing the incident data based on a set of output metrics of the LLM. ([Jiang, page 5, sec 4.2.2] “Upon collecting all information from the incident tickets, the data is concatenated into a text sequence. Xpert performs two pre-processing steps on the incident context: (i) Repetitive information that appears multiple times in the context is removed. (ii) If the incident context exceeds a certain token threshold, the sample is clipped to avoid over-length. This is necessary as the input of the LLM is subject to token limitations.” and [Jiang, page 11, sec 9.2] “Xpert presents a pioneering framework for automatically recommending DSL queries to support incident management tasks.”, wherein the examiner interprets removing repetitive content and clipping the context based on the LLM’s “token limitations,” together with “automatically recommending DSL queries to support incident management tasks,” to be the same as automatically editing the incident data based on a set of output metrics of the LLM.)
Florey, Jiang, Han, and the instant application are analogous art because they are all directed to pre-processing incident data by automatically editing it based on model output metrics.
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the computing system of claim 3 disclosed by Florey, Jiang, and Han to include the framework in which “Xpert presents a pioneering framework for automatically recommending DSL queries to support incident management tasks” disclosed by Jiang. One would be motivated to do so to efficiently edit incident data by removing repetitive information in the incident context based on token limitations, as suggested by Jiang (see the [Jiang, page 5, sec 4.2.2] quote above). Claims 11 and 18 are analogous to claim 4; therefore the mapping provided above applies to claims 11 and 18 as well.
Regarding claim 5, Florey, Jiang, and Han teach The computing system of claim 4 (see rejection of claim 4).
Jiang further teaches:
wherein the computing system executes a machine learning model on the incident data ([Jiang, page 5, sec 4.1] “an embedding model is employed to vectorize the incident context and conduct a search for similar historical incidents along with their corresponding KQL queries.”, wherein the examiner interprets “an embedding model is employed to vectorize the incident context” to be the same as “executing a machine learning model on the incident data”.)
to automatically edit the incident data, the machine learning model being trained on the set of output metrics of the LLM. ([Jiang, page 5, sec 4.2.2] “Upon collecting all information from the incident tickets, the data is concatenated into a text sequence. Xpert performs two pre-processing steps on the incident context: (i) Repetitive information that appears multiple times in the context is removed. (ii) If the incident context exceeds a certain token threshold, the sample is clipped to avoid over-length. This is necessary as the input of the LLM is subject to token limitations.” AND [Jiang, page 11, sec 9.2], “Xpert presents a pioneering framework for automatically recommending DSL queries to support incident management tasks.”, wherein the examiner interprets removing repetitive content based on LLM “token limitations” and “automatically recommending DSL queries to support incident management tasks” to be the same as “automatically editing the incident data”, on which the machine learning model is trained.)
Florey, Jiang, Han, and the instant application are analogous art because they are all directed to executing a model on incident data, automatically editing the incident data, and training a model on output metrics of the LLM.
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the computing system of claim 4 disclosed by Florey, Jiang, and Han to include the “pre-processing steps on the incident context: (i) Repetitive information that appears multiple times in the context is removed… automatically recommending DSL queries to support incident management tasks” as disclosed by Jiang. One would be motivated to do so to efficiently use the pre-processing steps and the automatic recommendation of DSL queries to support incident management tasks, such as editing incident data and training a model, as suggested by Jiang (see [Jiang, page 5, sec 4.2.2] and [Jiang, page 11, sec 9.2] quotes above). Claims 12 and 19 are analogous to claim 5; therefore, the mapping provided above applies to claims 12 and 19 as well.
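For illustration only, the retrieval step quoted from Jiang (sec 4.1) — vectorizing the incident context and searching for similar historical incidents with their corresponding queries — could be sketched as follows. The toy bag-of-words embedding and the data layout are illustrative assumptions standing in for the embedding model the reference employs.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words embedding; an illustrative stand-in for the
    embedding model Jiang employs to vectorize the incident context."""
    return Counter(text.lower().split())

def cosine(a, b):
    # cosine similarity between two sparse term-count vectors
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_similar(incident, history):
    """Rank historical (incident context, query) pairs by similarity
    to the current incident context, most similar first."""
    q = embed(incident)
    return sorted(history, key=lambda h: cosine(q, embed(h[0])), reverse=True)
```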
Regarding claim 6, Florey, Jiang, and Han teach the computing system of claim 1 (see rejection of claim 1).
Jiang further teaches wherein the executed instructions further cause the computing system to: execute a machine-learning model on the LLM summarization to perform post-processing on the LLM summarization, the post-processing comprising automatically editing the LLM summarization. ([Jiang, page 6, sec 4.4] “To address the issue of potentially non-executable or grammatically incorrect KQL queries generated by LLMs, which may arise due to noise in retrieval data or mispredictions, we have integrated a post-processor into Xpert. The post-processor plays a crucial role in checking the validity of generated queries and rectifying any issues whenever possible. It comprises two key components: • Post-Validator: This component performs a grammar and syntax check on the query using the intrinsic compiler abstract syntax tree (AST) [49]. By analyzing the data flow of the query, it determines if the query is executable. If the query fails this check, it is passed on to the post-rectifier for revision. • Post-Rectifier: The post-rectifier carries out a two-step revision process to rectify invalid queries. In the first step, it cleans extraneous tokens from the query, such as spacing and tabs that might have been mistakenly generated. If the query still remains invalid, the post-rectifier proceeds to the second step, where we provide the LLM with the incident context, retrieved examples, the invalid query, error messages from the post-validator, and select usage handbook of the KQL. We then prompt the LLM to attempt fixing the query, resolving more complex cases that cannot be addressed by simple token removal. 
This post-processing mechanism ensures that the KQL queries generated by Xpert are refined and enhanced to achieve executability and grammatical correctness, minimizing the need for manual intervention by OCEs”, wherein the examiner interprets the “post-processor” composed of a post-validator and post-rectifier that checks and cleans LLM outputs, including prompting the LLM to fix the query, to be the same as “execute a machine-learning model on the LLM summarization to perform post-processing on the LLM summarization”, and prompting the LLM for revision to be the same as “automatically editing” the LLM summarization.)
Florey, Jiang, Han, and the instant application are analogous art, because they are all directed to executing a machine learning model on LLM summarization to perform post-processing.
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the computing system of claim 1 disclosed by Florey, Jiang, and Han to include “a post-processor” that “plays a crucial role in checking the validity of generated queries and rectifying any issues”, i.e., the implementation of a Post-Validator and Post-Rectifier as disclosed by Jiang. One would be motivated to do so to perform post-processing on the output of a large-language model (LLM) as suggested by Jiang (see [Jiang, page 6, sec 4.4] quote above). Claims 13 and 20 are analogous to claim 6; therefore, the mapping provided above applies to claims 13 and 20 as well.
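For illustration only, the Post-Validator and Post-Rectifier quoted from Jiang (sec 4.4) could be sketched as follows. Python's built-in `ast` module stands in for the KQL compiler abstract syntax tree used in the reference, and the `reprompt_llm` hook is an illustrative placeholder, not a real API.

```python
import ast

def post_validate(query):
    """Analog of Jiang's Post-Validator: a grammar/syntax check via an
    abstract syntax tree. Python's `ast` is an illustrative stand-in
    for the intrinsic compiler AST of the KQL reference."""
    try:
        ast.parse(query)
        return True
    except SyntaxError:
        return False

def post_rectify(query, reprompt_llm=None):
    """Analog of the Post-Rectifier's two-step revision: first clean
    extraneous tokens (stray spacing and tabs); if the query is still
    invalid, hand it back to the LLM for repair (stubbed here)."""
    cleaned = " ".join(query.split())
    if post_validate(cleaned):
        return cleaned
    if reprompt_llm is not None:
        return reprompt_llm(cleaned)  # illustrative hook, not a real API
    return None
```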
Regarding claim 7, Florey, Jiang, and Han teach the computing system of claim 1 (see rejection of claim 1).
Jiang further teaches wherein the post-processing is performed by the machine-learning model based on a logic-based ruleset of a policy provider. ([Jiang, page 6, sec 4.4] “The post-processor plays a crucial role in checking the validity of generated queries and rectifying any issues whenever possible. It comprises two key components: • Post-Validator: This component performs a grammar and syntax check on the query using the intrinsic compiler abstract syntax tree (AST) [49]. By analyzing the data flow of the query, it determines if the query is executable. If the query fails this check, it is passed on to the post-rectifier for revision. • Post-Rectifier: The post-rectifier carries out a two-step revision process to rectify invalid queries. In the first step, it cleans extraneous tokens from the query, such as spacing and tabs that might have been mistakenly generated. If the query still remains invalid, the post-rectifier proceeds to the second step, where we provide the LLM with the incident context, retrieved examples, the invalid query, error messages from the post-validator, and select usage handbook of the KQL. We then prompt the LLM to attempt fixing the query, resolving more complex cases that cannot be addressed by simple token removal.”, wherein the examiner interprets the use of “grammar and syntax check on the query using the intrinsic compiler abstract syntax tree (AST)” and “analyzing the data flow of the query” to be the same as performing post-processing based on a “logic-based ruleset” and the AST and query validation mechanisms to be analogous to a ruleset of a “policy provider”.)
Florey, Jiang, Han, and the instant application are analogous art because they are all directed to post-processing based on a policy provider's logic-based ruleset.
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the computing system of claim 1 disclosed by Florey, Jiang, and Han to include the post-processing steps and the “grammar and syntax check on the query using the intrinsic compiler abstract syntax tree (AST)…analyzing the data flow of the query” as disclosed by Jiang. One would be motivated to do so to efficiently perform post-processing based on a ruleset by using an abstract syntax tree to check grammar and syntax and to analyze the data flow of a query, as suggested by Jiang (see [Jiang, page 6, sec 4.4] quote above). Claim 14 is analogous to claim 7; therefore, the mapping provided above applies to claim 14 as well.
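For illustration only, post-processing against a logic-based ruleset, as mapped above, could be sketched as a set of named predicates applied to a query. The two rules below are illustrative assumptions, not drawn from the references or from any actual policy provider's ruleset.

```python
def check_against_ruleset(query, ruleset):
    """Sketch of post-processing based on a logic-based ruleset:
    each rule is a (name, predicate) pair, and the function returns
    the names of the rules the query violates (empty list = passes)."""
    return [name for name, pred in ruleset if not pred(query)]

# Illustrative rules; a real ruleset would come from the policy provider.
RULES = [
    ("non-empty", lambda q: bool(q.strip())),
    ("balanced-parens", lambda q: q.count("(") == q.count(")")),
]
```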
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVAN KAPOOR whose telephone number is (703)756-1434. The examiner can normally be reached Monday - Friday: 9:00AM - 5:00 PM EST (times may vary).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DEVAN KAPOOR/Examiner, Art Unit 2126
/DAVID YI/Supervisory Patent Examiner, Art Unit 2126