Prosecution Insights
Last updated: April 19, 2026
Application No. 18/605,602

SUMMARIZING COMPUTER SYSTEM ALERTS USING GENERATIVE MACHINE LEARNING MODELS

Final Rejection: §101, §103
Filed: Mar 14, 2024
Examiner: SMITH, SEAN THOMAS
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: Cisco Technology Inc.
OA Round: 2 (Final)

Grant Probability: 83% (Favorable)
Projected OA Rounds: 3-4
Projected Time to Grant: 2y 8m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% — above average (5 granted / 6 resolved; +21.3% vs TC avg)
Interview Lift: +33.3% among resolved cases with interview — strong
Avg Prosecution: 2y 8m typical timeline; 37 applications currently pending
Career History: 43 total applications across all art units

Statute-Specific Performance

§101: 27.9% (-12.1% vs TC avg)
§103: 50.7% (+10.7% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 8.6% (-31.4% vs TC avg)

TC averages are estimates; based on career data from 6 resolved cases.

Office Action

Grounds of Rejection: §101, §103
DETAILED ACTION

This Office Action is responsive to amendments and arguments filed on February 12, 2026. Claims 1-2, 11-12 and 16-17 are amended, claim 20 is cancelled, and claim 21 is added. Claims 1-19 and 21 are pending and have been examined; hence, this action is made FINAL. Any objections/rejections not mentioned in this Office Action have been withdrawn by the Examiner.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

The present application claims the benefit of earlier filed application 63/618492, filed January 8, 2024. Claims 1-20 have been afforded the benefit of the earlier filing date.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on March 14 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Response to Amendments and Arguments

With regard to the rejections made under 35 U.S.C. 101, Applicant discloses, “during the Interview, Applicant understood the Examiner to agree that the proposed independent claim amendments presented herein overcomes the § 101 rejections in the Office Action” (page 10 of Remarks). As noted in the Interview Summary, the Examiner “suggested adding language to describe a trained model, otherwise describe the technical improvement provided by a trained model.” A claim is beyond a mental process when it relies upon a machine learning model trained for a specific task, employed to achieve a specific outcome or create a specific work product, or otherwise describes interoperable systems that are beyond human organization or functions of the human mind. The claims as amended broadly recite that the model therein is “trained using an auto-regressive approach,” which fails to indicate any specific training goal, method or outcome. Accordingly, the rejections under § 101 are maintained.
Further details are provided below.

With regard to the rejection made under 35 U.S.C. 103, Applicant discloses, “during the Interview, Applicant understood the Examiner to agree that the claim 1 amendment presented herein overcomes the § 103 rejection of that claim” (page 11 of Remarks). The claims as amended teach a validation method wherein two counts are compared, which is outside the teachings of reference Narayan; however, Applicant’s request for withdrawal is moot, as new grounds of rejection are raised in view of previously disclosed reference Padmashali. Further details are provided below.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-19 and 21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a mental process that may be performed in the human mind or with the aid of pen and paper. This judicial exception is not integrated into a practical application because the recited generic computer elements do not add a meaningful limitation to the practice of the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because no element would preclude the performance of the idea in the mind or with the aid of pen and paper.
Regarding claim 1, the claim recites “A method comprising: receiving, by a processor, a set of alert logs associated with a security incident, wherein: the set of alert logs are associated with a set of alert groups, the set of alert logs comprise a first alert log associated with a first alert group and a second alert log associated with a second alert group, and the security incident is associated with a computer system; determining, by the processor and based on the set of alert logs, a first prompt, wherein the first prompt comprises text data requesting summarization of the set of alert logs; determining, by the processor, a first count of the set of alert groups; providing, by the processor, the first prompt to a generative machine learning model, wherein the generative machine learning model is trained using an auto-regressive approach; receiving, by the processor, a first model output from the generative machine learning model; determining, by the processor and based on the first model output, a second count of alert groups described by the first model output; determining, by the processor, that the first model output is valid based on determining that the first count matches the second count; based on determining that the first model output is valid, determining, by the processor, a summary based on the first model output; and providing, by the processor, the summary using an output interface.” These limitations as drafted cover mental activities which can be performed in the mind or with the aid of pen and paper, but for the recitation of a generic processor. Taken individually, or as a whole, these limitations describe acts which are equivalent to human mental work of reading logs, identifying topics or categories, and writing and validating summaries.
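For readers outside prosecution, the count-matching validation recited in claim 1 can be sketched as a short program. This is an illustrative reconstruction only; the "Group:" heading convention, the function name, and the sample data are hypothetical and do not come from the application itself.

```python
import re

def validate_summary(alert_groups: dict, model_output: str) -> bool:
    """Accept a model output only if it describes the same number of
    alert groups that were supplied with the prompt (claim 1's check)."""
    # First count: alert groups associated with the input alert logs.
    first_count = len(alert_groups)
    # Second count: alert groups described by the model output. We assume
    # (hypothetically) the prompt told the model to emit one "Group:"
    # heading per alert group it summarizes.
    second_count = len(re.findall(r"^Group:", model_output, flags=re.MULTILINE))
    # The output is valid only when the two counts match.
    return first_count == second_count

groups = {
    "brute-force": ["failed login x50 from host-a"],
    "exfiltration": ["2.3 GB outbound transfer from host-b"],
}
good = "Group: brute-force\nRepeated failed logins.\nGroup: exfiltration\nLarge outbound transfer."
bad = "Group: brute-force\nRepeated failed logins."
print(validate_summary(groups, good))  # True
print(validate_summary(groups, bad))   # False
```

The point of contention in the §101 analysis is precisely that a person could perform this same comparison with pen and paper; the sketch only makes the mechanics concrete.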
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps of the claimed invention can be performed mentally, and no additional features in the claims would preclude them from being performed as such. The recited “generative machine learning model” performs acts which may be embodied by an individual paraphrasing documents, and therefore adds only generic computer hardware to a mental process. Accordingly, the claim is directed to an abstract idea without significantly more. The claim is not patent eligible.

Regarding claim 2, the claim depends from claim 1, and thus recites the limitations of claim 1, “wherein determining the summary comprises: providing the first prompt to the generative machine learning model; receiving a second model output from the generative machine learning model; determining a third count of alert groups associated with the second model output; determining, based on the first count and the third count, that the second model output is valid; based on determining that the second model output is valid, determining a first score associated with the first model output based on a first metric and a second score associated with the second model output; and determining the summary based on the first score and the second score.” These limitations as drafted cover mental activities which can be performed in the mind or with the aid of pen and paper. Taken individually, or as a whole with claim 1, these limitations describe acts which are equivalent to human mental work of observation and judgement, by counting topics or categories and ranking documents. Accordingly, the claim is directed to an abstract idea without significantly more. The claim is not patent eligible.
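Claim 2's select-the-best-candidate step (sample several outputs, keep the valid ones, score each by a metric, and choose the summary from the scores) can likewise be sketched. The group-count check and the token-count metric (claim 3's metric) are simplified, hypothetical stand-ins, not the application's actual implementation.

```python
def pick_best_summary(candidates: list, expected_group_count: int):
    """Filter candidate outputs to those whose group count matches the
    expected count, score each survivor, and return the highest-scoring
    one (or None if no candidate validates)."""
    def group_count(text):
        # Hypothetical heading convention for counting described groups.
        return text.count("Group:")

    def score(text):
        # Token count as the first metric, per claim 3 (whitespace tokens).
        return len(text.split())

    valid = [c for c in candidates if group_count(c) == expected_group_count]
    return max(valid, key=score) if valid else None

outputs = [
    "Group: brute-force. Short note.",
    "Group: brute-force. A longer, more detailed description of the incident.",
    "No groups mentioned at all.",
]
best = pick_best_summary(outputs, 1)
print(best)  # the longer of the two valid candidates
```

Under this reading, the examiner's characterization maps each step to counting and ranking documents, acts a person could perform manually.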
Regarding claim 3, the claim depends from claim 2, and thus recites the limitations of claims 1 and 2, “wherein the first metric represents a count of tokens associated with the first model output.” Taken individually, or as a whole with the preceding claims, these limitations describe acts which are equivalent to human mental work of judging a document based on length. Accordingly, the claim is directed to an abstract idea without significantly more. The claim is not patent eligible.

Regarding claim 4, the claim depends from claim 2, and thus recites the limitations of claims 1 and 2, “wherein the first metric represents a count of hostnames associated with the first model output.” Taken individually, or as a whole with the preceding claims, these limitations describe acts which are equivalent to human mental work of judging a document based on identifiable names. Accordingly, the claim is directed to an abstract idea without significantly more. The claim is not patent eligible.

Regarding claim 5, the claim depends from claim 2, and thus recites the limitations of claims 1 and 2, “wherein the first metric represents a count of network addresses associated with the first model output.” Taken individually, or as a whole with the preceding claims, these limitations describe acts which are equivalent to human mental work of judging a document based on identifiable names or sequences. Accordingly, the claim is directed to an abstract idea without significantly more. The claim is not patent eligible.
Regarding claim 6, the claim depends from claim 2, and thus recites the limitations of claims 1 and 2, “wherein: the first prompt specifies a structure, and the first metric represents a count of tokens associated with a first segment of the first model output as defined by the structure.” Taken individually, or as a whole with the preceding claims, these limitations describe acts which are equivalent to human mental work of providing instructions and judging a document based on relevance. Accordingly, the claim is directed to an abstract idea without significantly more. The claim is not patent eligible.

Regarding claim 7, the claim depends from claim 1, and thus recites the limitations of claim 1, “wherein determining that the first model output is valid comprises: determining that the first model output is valid based on a third count of alert logs associated with the first alert group in the first prompt and a fourth count of alert logs associated with the first alert group in the first model output.” Taken individually, or as a whole with claim 1, these limitations describe acts which are equivalent to human mental work of judging a document based on identifiable names or sequences. Accordingly, the claim is directed to an abstract idea without significantly more. The claim is not patent eligible.

Regarding claim 8, the claim depends from claim 1, and thus recites the limitations of claim 1, “wherein determining that the first model output is valid comprises: determining a first structure specified by the first prompt; determining a second structure associated with the first model output; and determining whether the first structure corresponds to the second structure.” Taken individually, or as a whole with claim 1, these limitations describe acts which are equivalent to human mental work of judging a document based on structure or arrangement. Accordingly, the claim is directed to an abstract idea without significantly more. The claim is not patent eligible.
Regarding claim 9, the claim depends from claim 1, and thus recites the limitations of claim 1, “wherein determining that the first model output is valid comprises: determining a third count of hostnames associated with the first prompt; determining a fourth count of hostnames associated with the first model output; and determining whether the third count matches the fourth count.” Taken individually, or as a whole with claim 1, these limitations describe acts which are equivalent to human mental work of judging a document based on identifiable names, or merely comparing numbers. Accordingly, the claim is directed to an abstract idea without significantly more. The claim is not patent eligible.

Regarding claim 10, the claim depends from claim 1, and thus recites the limitations of claim 1, “wherein determining that the first model output is valid comprises: determining a third count of network addresses associated with the first prompt; determining a fourth count of network addresses associated with the first model output; and determining whether the third count matches the fourth count.” Taken individually, or as a whole with claim 1, these limitations describe acts which are equivalent to human mental work of judging a document based on identifiable sequences, or merely comparing numbers. Accordingly, the claim is directed to an abstract idea without significantly more. The claim is not patent eligible.

Regarding claim 21, the claim depends from claim 1, and thus recites the limitations of claim 1, wherein “the first prompt comprises first data representing an output constraint, and determining that the first model output is valid is based on determining that the first model output satisfies the output constraint represented by the first prompt.” Taken individually, or as a whole with claim 1, these limitations describe acts which are equivalent to human mental work of requesting a summary and checking that summary’s acceptability as an act of judgement.
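The entity-count checks of claims 9 and 10 (hostname and network-address counts compared between prompt and output) reduce to the same pattern. The IPv4 regex below is an illustrative simplification for this sketch only; a real system would also need hostnames and IPv6, and nothing here is drawn from the application's disclosure.

```python
import re

# Simplified IPv4 pattern; illustrative only, not exhaustive.
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def address_counts_match(prompt: str, model_output: str) -> bool:
    """Claim 10's check: the third count (addresses in the prompt) must
    equal the fourth count (addresses in the model output), so a summary
    that drops or invents an address fails validation."""
    return len(IP_RE.findall(prompt)) == len(IP_RE.findall(model_output))

prompt = "Alerts: failed logins from 203.0.113.7; scan from 198.51.100.4"
good = "Two sources were active: 203.0.113.7 (logins) and 198.51.100.4 (scan)."
print(address_counts_match(prompt, good))  # True
```

This is the "merely comparing numbers" characterization in the rejection made concrete: the validation is two counts and an equality test.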
Accordingly, the claim is directed to an abstract idea without significantly more. The claim is not patent eligible.

Regarding claims 11-15, system claims 11-15 and method claims 1-5 are related as a method and a system of using the same, with each system element’s function corresponding to a method step. Accordingly, claims 11-15 are similarly rejected under the same rationale as applied to claims 1-5.

Regarding claims 16-19, computer-readable medium claims 16-19 and method claims 1-5 are related as a method and a computer-readable medium for performing the same, with each computer-readable medium element’s function corresponding to a method step. Accordingly, claims 16-19 are similarly rejected under the same rationale as applied to claims 1-5.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 6, 8, 11-13, 16-18 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over "An Assessment of ChatGPT on Log Data" by Mudgal and Wouhaybi (hereinafter, "Mudgal") in view of U.S. Patent 12,008,332 to Gardner et al. (hereinafter, "Gardner") and in view of U.S. Patent Application Publication 2025/0148020 to Padmashali et al. (hereinafter, "Padmashali").
Regarding claims 1, 11 and 16, Mudgal teaches a method comprising: receiving, by a processor, a set of alert logs associated with a security incident, wherein: the set of alert logs are associated with a set of alert groups, the set of alert logs comprise a first alert log associated with a first alert group and a second alert log associated with a second alert group, and the security incident is associated with a computer system (page 7, 3.2 Dataset, "To perform our experiments, we use the datasets provided from the Loghub benchmark [13,34]. This benchmark covers log data from various systems, including, windows and linux operating systems, distributed systems, mobile systems, server applications, and standalone software," and page 9, Security and privacy, "In this experiment, we focus on addressing RQ4 and investigate if ChatGPT can identify the URLs, IPs, and logged users from the logs and extract knowledge about malicious activities.") Mudgal does not teach a method, system or computer-readable media comprising “determining, by the processor and based on the set of alert logs, a first prompt, wherein the first prompt comprises text data requesting summarization of the set of alert logs,” “providing, by the processor, the first prompt to a generative machine learning model, wherein the generative machine learning model is trained using an auto-regressive approach,” “receiving, by the processor, a first model output from the generative machine learning model,” “based on determining that the first model output is valid, determining, by the processor, a summary based on the first model output,” or “providing, by the processor, the summary using an output interface,” and thus, Gardner is introduced. 
Gardner teaches a method, system and computer-readable media comprising determining, by the processor and based on the set of alert logs, a first prompt, wherein the first prompt comprises text data requesting summarization of the set of alert logs (column 13, lines 19-24, "At operation 306, a prompt is automatically engineered for the LLM. In example embodiments, the prompt engineering applies techniques like sentence reordering, entity replacement, keyword insertion, example output framing, and instructions guiding the LLM to hit the abstraction targets."); providing, by the processor, the first prompt to a generative machine learning model, wherein the generative machine learning model is trained using an auto-regressive approach (column 13, line 37, "At operation 308, the prompt is provided to the LLM," and column 6, line 7, "The content summarization application(s) 120 may be connected to one or more LLM or other artificial intelligence machine(s) 111... LLMs like GPT-3 contain billions of parameters and are trained using self-supervised learning on internet-scale corpora."); receiving, by the processor, a first model output from the generative machine learning model (column 13, lines 43-47, "At operation 310, a response is received from the LLM. The response contains a second content item representing the first content item. The representation omits or simplifies sub-content items included in the first content item based on the abstraction level."); based on determining that the first model output is valid, determining, by the processor, a summary based on the first model output (column 13, lines 43-47, "At operation 310, a response is received from the LLM. The response contains a second content item representing the first content item. 
The representation omits or simplifies sub-content items included in the first content item based on the abstraction level."); and providing, by the processor, the summary using an output interface (column 13, lines 62-67, "Operation 312, the representation is applied (e.g., used to control output communicated to a target device. Example target devices may include screens, speakers, haptic interfaces, augmented reality displays, etc. The representation may be converted to speech, displayed text, graphics, video, animations, or other formats."). Mudgal and Gardner are considered analogous because they are each concerned with extraction and summarization. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Mudgal with the teachings of Gardner for the purpose of improving summary accuracy. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. The combination of Mudgal and Gardner does not teach “determining, by the processor, a first count of the set of alert groups,” “determining, by the processor and based on the first model output, a second count of alert groups described by the first model output,” or “determining, by the processor, that the first model output is valid based on determining that the first count matches the second count,” and thus, Padmashali is introduced. Padmashali teaches determining, by the processor, a first count of the set of alert groups (paragraph [0050], "At operation 112, the extracted text is segmented or divided into meaningful groupings in order to facilitate downstream processing and analysis. 
This segmentation process enables the handling of large, complex documents that cannot be efficiently analyzed in their entirety."); determining, by the processor and based on the first model output, a second count of alert groups described by the first model output (paragraph [0312], "At operation 2518, a summary is generated for each combined segment. The document assistant system uses machine learning models to generate summaries that include key facts and key metrics extracted from each group of segments. Each summary generated for each group of segments includes the key facts and the key metrics extracted from the group of segments."); and determining, by the processor, that the first model output is valid based on determining that the first count matches the second count (paragraph [0313], "At operation 2520, the summaries are validated. The document assistant system checks the generated summaries to ensure that the key facts and the key metrics included in the summaries match the key facts and the key metrics extracted from the corresponding group of segments."). Mudgal, Gardner and Padmashali are considered analogous because they are each concerned with extraction and summarization. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Mudgal and Gardner with the teachings of Padmashali for the purpose of improving summary accuracy. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. 
Regarding claims 2, 12 and 17, Gardner teaches a method, system and computer-readable media wherein determining the summary comprises: providing the first prompt to the generative machine learning model (column 6, lines 45-49, "The prompts can include the original text to summarize along with instructions tailored to elicit the target summary characteristics from the LLM. The system sends the prompts through the API and ingests the LLM-generated summaries to present to the user."); receiving a second model output from the generative machine learning model (column 61, lines 32-35, "The system can generate multiple candidate summaries using different models for the same prompt. An ensemble model compares the outputs and selects the best response based on consensus validation."); based on determining that the second model output is valid, determining a first score associated with the first model output based on a first metric and a second score associated with the second model output (column 9, lines 23-25, "The evaluation module 270 analyzes summarization outputs using metrics like compression rate, fidelity, coherence, and redundancy."); and determining the summary based on the first score and the second score (column 14, lines 12-16, "In example embodiments, iterative prompt engineering guides the LLM to gradually increase abstraction levels. The process terminates when the representation satisfies metrics like length, entity density, lexical complexity, etc. compared to the source."). Neither Mudgal nor Gardner teaches “determining a third count of alert groups associated with the second model output,” or “determining, based on the first count and the third count, that the second model output is valid,” however, Padmashali teaches determining a third count of alert groups associated with the second model output (paragraph [0312], "At operation 2518, a summary is generated for each combined segment. 
The document assistant system uses machine learning models to generate summaries that include key facts and key metrics extracted from each group of segments. Each summary generated for each group of segments includes the key facts and the key metrics extracted from the group of segments."); and determining, based on the first count and the third count, that the second model output is valid (paragraph [0313], "At operation 2520, the summaries are validated. The document assistant system checks the generated summaries to ensure that the key facts and the key metrics included in the summaries match the key facts and the key metrics extracted from the corresponding group of segments."). Mudgal, Gardner and Padmashali are considered analogous because they are each concerned with extraction and summarization. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Mudgal and Gardner with the teachings of Padmashali for the purpose of improving summary accuracy. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Regarding claims 3, 13 and 18, Gardner teaches a method, system and computer-readable media wherein the first metric represents a count of tokens associated with the first model output (column 14, lines 12-16, "In example embodiments, iterative prompt engineering guides the LLM to gradually increase abstraction levels. The process terminates when the representation satisfies metrics like length, entity density, lexical complexity, etc. compared to the source."). Mudgal, Gardner and Padmashali are considered analogous because they are each concerned with extraction and summarization. 
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Mudgal and Padmashali with the teachings of Gardner for the purpose of improving summary quality. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Regarding claim 6, Gardner teaches a method, system and computer-readable media wherein: the first prompt specifies a structure, and the first metric represents a count of tokens associated with a first segment of the first model output as defined by the structure (column 18, lines 25-34, "To achieve this functionality, the system may automatically generate prompts designed to produce a desired level of abstraction from the large language model (LLM) that generates the summaries. At high positive zoom levels, the prompt may instruct the LLM to restate the key elements of the content in a specific word limit, forcing extreme paraphrasing and abstraction. As the zoom level decreases, the word limit may be gradually relaxed. At the 0% neutral level, the prompt may tell the LLM to comprehensively summarize the content as concisely as possible," and column 51, lines 5-9, "For example, the system may be pre-configured with templates for common summarization tasks like: Summarize article into 200 word overview; Extract key points from report in bullet list; and/or Explain concepts from textbook chapter simply."). Mudgal, Gardner and Padmashali are considered analogous because they are each concerned with extraction and summarization. 
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Mudgal and Padmashali with the teachings of Gardner for the purpose of improving summary quality. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Regarding claim 8, Gardner teaches a method, system and computer-readable media wherein determining that the first model output is valid comprises: determining a first structure specified by the first prompt (column 17, lines 7-11, "The system may train customized models to construct optimized prompts using neural networks, reinforcement learning, and human collaboration. These prompts dynamically guide LLMs to generate summaries with the desired abstraction level, length, structure, and style."); determining a second structure associated with the first model output (column 39, lines 46-49, "The system determines desired lengths or sizes of the responses for the LLM. For example, the system determines the lengths or sizes based on the abstraction level in comparison to the original content."); and determining whether the first structure corresponds to the second structure (column 39, lines 53-56, "The system then automatically generates a series of prompts corresponding to these zoom levels, sends them to the LLM, and receives responses from the LLM."). Mudgal, Gardner and Padmashali are considered analogous because they are each concerned with extraction and summarization. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Mudgal and Padmashali with the teachings of Gardner for the purpose of improving summary quality. 
Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Regarding claim 21, Gardner teaches the method of claim 1, wherein: the first prompt comprises first data representing an output constraint (column 13, lines 19-24, "At operation 306, a prompt is automatically engineered for the LLM. In example embodiments, the prompt engineering applies techniques like sentence reordering, entity replacement, keyword insertion, example output framing, and instructions guiding the LLM to hit the abstraction targets."), and determining that the first model output is valid is based on determining that the first model output satisfies the output constraint represented by the first prompt (column 14, lines 16-20, "Checkpoints can validate intermediate summaries. In example embodiments, if the abstraction level is −20%, the prompt may be engineered to request inferences with over 80% confidence and/or appropriate external sources.").

Claims 4-5, 14-15 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Mudgal, Gardner and Padmashali as applied to claims 2, 12 and 17 above, and further in view of U.S. Patent Application Publication 2023/0105087 to Borges (hereinafter, "Borges").

Regarding claims 4, 14 and 19, the combination of Mudgal, Gardner and Padmashali does not teach a method, system or computer-readable media “wherein the first metric represents a count of hostnames associated with the first model output,” and thus, Borges is introduced. Borges teaches the first metric represents a count of hostnames associated with the first model output (paragraph [0030], "The feature extraction engine can develop/extract partial values/attributes of the telemetry and tokenize these values/attributes. 
For example, the feature extraction engine can decompose partial values and attributes of various telemetry, such as processes, network connections, domain names, URLs, files/scripts/macros and operations thereon, terminal commands, kernel objects, named pipes, event tracings, module/library loads, thread injections, system/hypervisor calls, memory analysis, scheduled tasks, shortcuts, service names, registry keys, digital certificates, authentication events, and/or other suitable values/attributes, each associated with a respective timestamp.").

Mudgal, Gardner, Padmashali and Borges are considered analogous because they are each concerned with extraction and summarization. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Mudgal, Gardner and Padmashali with the teachings of Borges for the purpose of improving summary accuracy. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Regarding claims 5 and 15, the combination of Mudgal, Gardner and Padmashali does not teach a method, system or computer-readable media "wherein the first metric represents a count of network addresses associated with the first model output," however, Borges teaches the first metric represents a count of network addresses associated with the first model output (paragraph [0030], "The feature extraction engine can develop/extract partial values/attributes of the telemetry and tokenize these values/attributes.
For example, the feature extraction engine can decompose partial values and attributes of various telemetry, such as processes, network connections, domain names, URLs, files/scripts/macros and operations thereon, terminal commands, kernel objects, named pipes, event tracings, module/library loads, thread injections, system/hypervisor calls, memory analysis, scheduled tasks, shortcuts, service names, registry keys, digital certificates, authentication events, and/or other suitable values/attributes, each associated with a respective timestamp.").

Mudgal, Gardner, Padmashali and Borges are considered analogous because they are each concerned with extraction and summarization. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Mudgal, Gardner and Padmashali with the teachings of Borges for the purpose of improving summary accuracy. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Allowable Subject Matter

Claims 7, 9 and 10 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Brazil Application Publication 112021011377 to Matus and Sudarsan.
U.S. Patent 8,752,178 to Coates et al.
U.S. Patent 11,769,017 to Gray et al.
U.S. Patent 11,934,781 to He et al.
U.S. Patent 12,289,324 to Gove, Jr.
U.S. Patent Application Publication 2015/0154501 to Boddhu et al.
U.S. Patent Application Publication 2018/0293308 to Miller et al.
U.S. Patent Application Publication 2021/0117617 to Blaya et al.
U.S. Patent Application Publication 2021/0117842 to Smith et al.
U.S. Patent Application Publication 2022/0237102 to Bugdayci et al.
U.S. Patent Application Publication 2023/0054068 to Zheng et al.
U.S. Patent Application Publication 2023/0139000 to Apger et al.
U.S. Patent Application Publication 2023/0229852 to Muralidharan et al.
U.S. Patent Application Publication 2023/0247043 to Luttwak et al.
U.S. Patent Application Publication 2024/0330348 to Zawadowskiy et al.
U.S. Patent Application Publication 2025/0124059 to Chajewska et al.
U.S. Patent Application Publication 2025/0173506 to Butvinik.
U.S. Patent Application Publication 2025/0284724 to Du et al.
U.S. Patent Application Publication 2025/0190459 to Conway et al.
“LogGPT: Exploring ChatGPT for Log-Based Anomaly Detection” by Qi et al.
“Linguistic Summarization of Event Logs – A Practical Approach” by Dijkman and Wilbik.
“Logsummary: Unstructured Log Summarization for Software Systems” by Meng et al.
“Latent Semantics Approach for Network Log Analysis: Modeling and its Application” by Otomo et al.
“Multi-LLM Text Summarization” by Fang et al.
“Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization” by Narayan et al.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEAN T SMITH, whose telephone number is (571) 272-6643. The examiner can normally be reached Monday - Friday, 8:00am - 5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, PIERRE-LOUIS DESIR, can be reached at (571) 272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SEAN THOMAS SMITH/
Examiner, Art Unit 2659

/PIERRE LOUIS DESIR/
Supervisory Patent Examiner, Art Unit 2659
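For readers parsing the claims 4-5 dispute above, the "count of hostnames" and "count of network addresses" metrics are straightforward to compute over a generated summary. A rough Python sketch; the regular expressions, function names, and sample text are assumptions for illustration, not taken from Borges or the application:

```python
import re

# Hypothetical patterns; a production system would use stricter parsing.
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
HOSTNAME_RE = re.compile(r"\b[a-zA-Z][\w-]*(?:\.[a-zA-Z][\w-]*)+\b")

def count_network_addresses(text: str) -> int:
    """Count distinct IPv4 addresses appearing in a model output."""
    return len(set(IPV4_RE.findall(text)))

def count_hostnames(text: str) -> int:
    """Count distinct dotted hostnames, excluding strings matched as IPs."""
    candidates = set(HOSTNAME_RE.findall(text))
    return len(candidates - set(IPV4_RE.findall(text)))

summary = "Alerts from web-01.example.com and db.example.com; source 10.0.0.5, 10.0.0.9."
print(count_network_addresses(summary))  # 2
print(count_hostnames(summary))  # 2
```

A validity check over the first model output could then compare these counts against the corresponding counts in the source alerts.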

Prosecution Timeline

Mar 14, 2024
Application Filed
Nov 03, 2025
Non-Final Rejection — §101, §103
Feb 03, 2026
Interview Requested
Feb 09, 2026
Examiner Interview Summary
Feb 09, 2026
Applicant Interview (Telephonic)
Feb 13, 2026
Response Filed
Mar 16, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602540
LEVERAGING A LARGE LANGUAGE MODEL ENCODER TO EVALUATE PREDICTIVE MODELS
2y 5m to grant Granted Apr 14, 2026
Patent 12530534
SYSTEM AND METHOD FOR GENERATING STRUCTURED SEMANTIC ANNOTATIONS FROM UNSTRUCTURED DOCUMENT
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
83%
Grant Probability
99%
With Interview (+33.3%)
2y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 6 resolved cases by this examiner. Grant probability derived from career allow rate.
