Detailed Office Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 1-20 are rejected under 35 U.S.C. 103.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Agrawal et al. (U.S. Publication No. 2025/0147832 A1), hereinafter referred to as Agrawal, in view of Joshi et al. (U.S. Publication No. 2025/0036800 A1), hereinafter referred to as Joshi.
Regarding Claim 1, Agrawal teaches:
A device, comprising:
at least one processor; and at least one memory that stores executable instructions that, when executed by the at least one processor, facilitate performance of operations, comprising: ([0142]);
receiving a group of alert messages within a predefined time window, wherein each alert message of the group of alert messages comprises: ([0036]; regarding, “the system may receive or access a log (e.g., an error log) that includes one or more error messages.”; [0038]; regarding, “The log may be generated by a software when implementing an application or a service (e.g., running a data processing pipeline, compiling a software package, or the like).”)
a respective description of a respective alert associated with the alert message in a text modality, and respective timestamp data indicative of a respective time at which the alert message was generated; ([0038]; regarding, “The log may include information related to a code error (also referred to as “an error”), such as an error message that informs the error, a type of the error (e.g., compile time error, run time error, a syntax error, an overflow error, or the like), the name and or file-path of the code associated with the error, timestamps associated with the error, or other information related to the error.”);
for each alert message of the group of alert messages, based on the respective timestamp data, retrieving, from a telemetry store, respective temporal context data that are stored according to a temporal modality, ([0036]; regarding, “the system may receive or access a log (e.g., an error log) that includes one or more error messages… Based on the log and/or the error message, the system may further search and determine a context associated with the error. The context may include the code, portions of the log that are close or more related to the error message, portions of one or more documents associated with the code, and/or additional information (e.g., ontology associated with a service that utilizes the code, search results from search engines) relevant to the error or useful for one or more LLMs to explain the error message. For example, the additional information may be generated by retrieving data (e.g., certain text relevant to the log, the code, and/or the code error) stored in the ontology associated with the service that utilizes the code. The retrieved data may be included in the context. Additionally, the system may generate detailed and/or specific instructions”);
wherein temporal context data comprises the respective temporal context data for each alert message of the group of alert messages, wherein the temporal context data of the group of alert messages are indicative of respective states of an associated system at times at which alert messages of the group of alert messages were generated; ([0038]; regarding, “The log may include information related to a code error (also referred to as “an error”), such as an error message that informs the error, a type of the error (e.g., compile time error, run time error, a syntax error, an overflow error, or the like), the name and or file-path of the code associated with the error, timestamps associated with the error, or other information related to the error.”);
generating a prompt chain comprising a sequence of prompts, wherein each prompt of the sequence of prompts is associated with a respective alert message from the group of alert messages, and wherein each prompt is generated based on: text data based on the alert message associated with the prompt, ([0070]; regarding, “The prompt generation module 110 is configured to generate one or more prompts to one or more language models, such as LLM 130a and/or LLM 130b. The prompt generation module 110 may generate a prompt that includes an error message indicating a code error and context associated with the error for the LLM 130a and/or LLM 130b to explain the error message.”);
submitting the sequence of prompts to the second artificial intelligence model. ([0071]; regarding, “the error analysis system 102 may be capable of interfacing with multiple LLMs. This allows for experimentation, hot-swapping and/or adaptation to different models based on specific use cases or requirements, providing versatility and scalability to the system. In various implementations, the error analysis system 102 may interface with a second LLM 130b in order to, for example, generate some of context associated with an error for the first LLM 130a to explain the error.”).
Agrawal fails to explicitly disclose but Joshi teaches:
and performing, using a first artificial intelligence model, a temporal embedding that encodes features of the temporal context data in the temporal modality into respective first embedding vectors according to an embedding-space of a second artificial intelligence model that is configured to receive input according to the text modality, wherein the second artificial intelligence model is different from the first artificial intelligence model; ([0036]; regarding, “The embeddings unit 302 is configured to access the prompt information stored in the privacy preserving datastore 204 and to provide each prompt as an input to the embeddings language model 304 to obtain corresponding embeddings vectors. The embeddings language model 304 is a natural language model (NLP) that is configured to analyze a textual input and to output embeddings vectors”; [0037]; regarding, “The data clustering unit 308 analyzes the embedding vectors using one or more privacy preserving clustering algorithms 310. The privacy preserving clustering algorithms 310 group embedding vectors representing the user prompts into groups having similar characteristics. In some implementations a k-means clustering algorithm that partitions the embedding vectors into k clusters in which each embedding vector belongs to a cluster having a nearest mean to the embedding vector.”);
…each prompt is generated based on:… a respective first embedding vector of the respective first embedding vectors generated from the temporal context data associated with the alert message associated with the prompt; ([0032]; regarding, “The prompt anonymization pipeline 124 analyzes and anonymizes the prompt data stored in the privacy preserving datastore 204 using the prompt data analysis unit 206. The prompt data analysis unit 206 accesses the prompt data stored in the privacy preserving datastore 204, determines embedding vectors for the prompt data, analyzes the embeddings to identify clusters, and summarizes the clusters to generate anonymized prompt data to be stored in the anonymized prompt data datastore 208.”).
Therefore, it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to which said subject matter pertains to combine Agrawal with the teachings of Joshi. Doing so can improve predictions provided by the model. (Joshi, [0054]).
Regarding Claim 2, Agrawal in view of Joshi teaches the device of claim 1 as cited above. Agrawal in view of Joshi further teaches:
wherein the group of alert messages are received from a health monitoring device of the associated system and received in response to a number of alert messages in the group being greater than or equal to a defined threshold. (Agrawal, [0036]; regarding, “the system may receive or access a log (e.g., an error log) that includes one or more error messages.”; [0038]; regarding, “The log may be generated by a software when implementing an application or a service”).
Regarding Claim 3, Agrawal in view of Joshi teaches the device of claim 1 as cited above. Agrawal in view of Joshi further teaches:
wherein the temporal context data comprises time series data. (Agrawal, [0043]).
Regarding Claim 4, Agrawal in view of Joshi teaches the device of claim 1 as cited above. Agrawal in view of Joshi further teaches:
wherein the second artificial intelligence model comprises a large language model. (Joshi, [0039]; regarding, “In one implementation, the summarization model 314 is an LLM that is fine-tuned using DP on each cluster of embedding vectors.”).
Regarding Claim 5, Agrawal in view of Joshi teaches the device of claim 1 as cited above. Agrawal in view of Joshi further teaches:
wherein the first artificial intelligence model comprises a deep learning model. (Joshi, [0036]; regarding, “The embeddings language model 304 is a natural language model (NLP)”).
Regarding Claim 6, Agrawal in view of Joshi teaches the device of claim 1 as cited above. Agrawal in view of Joshi further teaches:
wherein the second artificial intelligence model encodes the text data of the prompts into respective second embedding vectors associated with the alert messages. (Joshi, [0038]; regarding, “The cluster summarization unit 312 utilizes the summarization model 314 to summarize the embeddings”; [0039]; regarding, “The LLM is prompted by the cluster summarization unit 312 to generate a plurality of synthetic representative data points that represent embedding vectors associated with each cluster.”).
Regarding Claim 7, Agrawal in view of Joshi teaches the device of claim 6 as cited above. Agrawal in view of Joshi further teaches:
wherein the second artificial intelligence model, for each alert message, concatenates the first embedding vector and the second embedding vector associated with the alert message. (Joshi, [0038]; regarding, “The cluster summarization unit 312 utilizes the summarization model 314 to summarize the embeddings associated with at least a subset of the clusters to determine a “theme” representing the subject matter of the cluster.”).
Regarding Claim 8, Agrawal in view of Joshi teaches the device of claim 1 as cited above. Agrawal in view of Joshi further teaches:
wherein the operations further comprise, for each alert message of the group of alert messages, in response to examination of a knowledge store, determining, using a third artificial intelligence model, respective alert context data that indicates additional context information regarding the alert message in the text modality, and wherein the third artificial intelligence model is different from the first artificial intelligence model and the second artificial intelligence model. (Agrawal, [0080]; regarding, “the context generation module 106 may access portions of a log that are close or more related to the error message to determine the context associated with the error.”; [0083]; regarding, “the context generation module 106 may execute the similarity search using at least one of a language model, an artificial intelligence (“AI”) model, a generative model, a machine learning (“ML”) model, a neural network (“NN”), or an LLM.”).
Regarding Claim 9, Agrawal in view of Joshi teaches the device of claim 8 as cited above. Agrawal in view of Joshi further teaches:
wherein the text data associated with the prompt further comprises the respective alert context data associated with the alert message associated with the prompt. (Agrawal, [0037]; regarding, “The prompt may include the error message, the context associated with the error…”).
Regarding Claim 10, Agrawal in view of Joshi teaches the device of claim 9 as cited above. Agrawal in view of Joshi further teaches:
wherein the third artificial intelligence model comprises a retrieval augmented generation model. (Agrawal, [0035]; regarding, “various implementations of the systems and methods of the present disclosure can advantageously employ one or more LLMs”; [0036]; regarding, “The context may include… portions of the log that are close or more related to the error message… and/or additional information… relevant to the error or useful for one or more LLMs to explain the error message…. the additional information may be generated by retrieving data”; [0037]; regarding, “Based on the error message, the context associated with the error, and/or the instructions, the system may generate a prompt for an LLM.”; [0082]; regarding, “[t]he context generation module 106 may further vectorize the plurality of portions of the set of documents to generate a plurality of vectors.”).
Regarding Claim 11, Agrawal in view of Joshi teaches the device of claim 1 as cited above. Agrawal in view of Joshi further teaches:
wherein the text data associated with the prompt further comprises output data generated by the second artificial intelligence model based on at least one previous prompt in the sequence of prompts. (Agrawal, [0048]; regarding, “In some implementations, the system may store some data (e.g., prompts to the LLM, outputs from the LLM, and/or errors) to a cache associated with the system. Storing the data to the cache may allow the system to reuse some of the data to improve system efficiency.”)
Regarding Claim 12, Agrawal teaches:
A method, comprising:
receiving, by a device comprising at least one processor, a group of alert messages within a predefined time window, wherein each alert message of the group of alert messages comprises: ([0036]; regarding, “the system may receive or access a log (e.g., an error log) that includes one or more error messages.”; [0038]; regarding, “The log may be generated by a software when implementing an application or a service (e.g., running a data processing pipeline, compiling a software package, or the like).”)
a respective description of a respective alert associated with the alert message in a text modality, and respective timestamp data indicative of a respective time at which the alert message was generated; ([0038]; regarding, “The log may include information related to a code error (also referred to as “an error”), such as an error message that informs the error, a type of the error (e.g., compile time error, run time error, a syntax error, an overflow error, or the like), the name and or file-path of the code associated with the error, timestamps associated with the error, or other information related to the error.”);
for each alert message of the group of alert messages, based on the respective timestamp data, retrieving, by the device, respective temporal context data from a telemetry store, ([0036]; regarding, “the system may receive or access a log (e.g., an error log) that includes one or more error messages… Based on the log and/or the error message, the system may further search and determine a context associated with the error. The context may include the code, portions of the log that are close or more related to the error message, portions of one or more documents associated with the code, and/or additional information (e.g., ontology associated with a service that utilizes the code, search results from search engines) relevant to the error or useful for one or more LLMs to explain the error message. For example, the additional information may be generated by retrieving data (e.g., certain text relevant to the log, the code, and/or the code error) stored in the ontology associated with the service that utilizes the code. The retrieved data may be included in the context. Additionally, the system may generate detailed and/or specific instructions”);
wherein temporal context data comprises the respective temporal context data for each alert message of the group of alert messages, wherein the temporal context data of the group of alert messages are indicative of respective states of an associated system at times at which alert messages of the group of alert messages were generated; ([0038]; regarding, “The log may include information related to a code error (also referred to as “an error”), such as an error message that informs the error, a type of the error (e.g., compile time error, run time error, a syntax error, an overflow error, or the like), the name and or file-path of the code associated with the error, timestamps associated with the error, or other information related to the error.”);
generating a prompt chain comprising a sequence of prompts, wherein each prompt of the sequence of prompts is associated with a respective alert message from the group of alert messages, and wherein each prompt is generated based on: text data based on the alert message associated with the prompt, ([0070]; regarding, “The prompt generation module 110 is configured to generate one or more prompts to one or more language models, such as LLM 130a and/or LLM 130b. The prompt generation module 110 may generate a prompt that includes an error message indicating a code error and context associated with the error for the LLM 130a and/or LLM 130b to explain the error message.”);
inputting, by the device, the sequence of prompts to the second artificial intelligence model. ([0071]; regarding, “the error analysis system 102 may be capable of interfacing with multiple LLMs. This allows for experimentation, hot-swapping and/or adaptation to different models based on specific use cases or requirements, providing versatility and scalability to the system. In various implementations, the error analysis system 102 may interface with a second LLM 130b in order to, for example, generate some of context associated with an error for the first LLM 130a to explain the error.”).
Agrawal fails to explicitly disclose but Joshi teaches:
encoding, by the device, using a first artificial intelligence model, features of the temporal context data in the temporal-based modality into an embedding space of a second artificial intelligence model that is configured to receive input according to the text-based modality, wherein the second artificial intelligence model is different from the first artificial intelligence model; ([0036]; regarding, “The embeddings unit 302 is configured to access the prompt information stored in the privacy preserving datastore 204 and to provide each prompt as an input to the embeddings language model 304 to obtain corresponding embeddings vectors. The embeddings language model 304 is a natural language model (NLP) that is configured to analyze a textual input and to output embeddings vectors”; [0037]; regarding, “The data clustering unit 308 analyzes the embedding vectors using one or more privacy preserving clustering algorithms 310. The privacy preserving clustering algorithms 310 group embedding vectors representing the user prompts into groups having similar characteristics. In some implementations a k-means clustering algorithm that partitions the embedding vectors into k clusters in which each embedding vector belongs to a cluster having a nearest mean to the embedding vector.”);
…each prompt is generated based on:… a respective first embedding vector of the respective first embedding vectors generated from the respective temporal context data associated with the alert message associated with the prompt; ([0032]; regarding, “The prompt anonymization pipeline 124 analyzes and anonymizes the prompt data stored in the privacy preserving datastore 204 using the prompt data analysis unit 206. The prompt data analysis unit 206 accesses the prompt data stored in the privacy preserving datastore 204, determines embedding vectors for the prompt data, analyzes the embeddings to identify clusters, and summarizes the clusters to generate anonymized prompt data to be stored in the anonymized prompt data datastore 208.”).
Therefore, it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to which said subject matter pertains to combine Agrawal with the teachings of Joshi. Doing so can improve predictions provided by the model. (Joshi, [0054]).
Claims 13-15 are rejected under 35 U.S.C. 103 under the same grounds of rejection as claims 8, 9, and 11, respectively.
Regarding Claim 16, Agrawal teaches:
A non-transitory computer-readable medium comprising instructions that, in response to execution, cause a system comprising a processor to perform operations, comprising:
receiving a group of alert messages within a predefined time window, wherein each alert message of the group of alert messages comprises: ([0036]; regarding, “the system may receive or access a log (e.g., an error log) that includes one or more error messages.”; [0038]; regarding, “The log may be generated by a software when implementing an application or a service (e.g., running a data processing pipeline, compiling a software package, or the like).”)
a respective description of a respective alert associated with the alert message in a text modality, and respective timestamp data indicative of a respective time at which the alert message was generated; ([0038]; regarding, “The log may include information related to a code error (also referred to as “an error”), such as an error message that informs the error, a type of the error (e.g., compile time error, run time error, a syntax error, an overflow error, or the like), the name and or file-path of the code associated with the error, timestamps associated with the error, or other information related to the error.”);
for each alert message of the group of alert messages, based on the respective timestamp data, retrieving, from a temporal context store, respective temporal context data that are stored according to a temporal modality, wherein the respective temporal context data are indicative of a respective state of an associated system at the respective time; ([0036]; regarding, “the system may receive or access a log (e.g., an error log) that includes one or more error messages… Based on the log and/or the error message, the system may further search and determine a context associated with the error. The context may include the code, portions of the log that are close or more related to the error message, portions of one or more documents associated with the code, and/or additional information (e.g., ontology associated with a service that utilizes the code, search results from search engines) relevant to the error or useful for one or more LLMs to explain the error message. For example, the additional information may be generated by retrieving data (e.g., certain text relevant to the log, the code, and/or the code error) stored in the ontology associated with the service that utilizes the code. The retrieved data may be included in the context. Additionally, the system may generate detailed and/or specific instructions”; [0038]; regarding, “The log may include information related to a code error (also referred to as “an error”), such as an error message that informs the error, a type of the error (e.g., compile time error, run time error, a syntax error, an overflow error, or the like), the name and or file-path of the code associated with the error, timestamps associated with the error, or other information related to the error.”);
generating a prompt chain comprising a sequence of prompts, wherein each prompt of the sequence of prompts is associated with a respective alert message from the group of alert messages, and wherein each prompt is generated based on: text data based on the alert message associated with the prompt, ([0070]; regarding, “The prompt generation module 110 is configured to generate one or more prompts to one or more language models, such as LLM 130a and/or LLM 130b. The prompt generation module 110 may generate a prompt that includes an error message indicating a code error and context associated with the error for the LLM 130a and/or LLM 130b to explain the error message.”);
submitting the sequence of prompts to the second artificial intelligence model. ([0071]; regarding, “the error analysis system 102 may be capable of interfacing with multiple LLMs. This allows for experimentation, hot-swapping and/or adaptation to different models based on specific use cases or requirements, providing versatility and scalability to the system. In various implementations, the error analysis system 102 may interface with a second LLM 130b in order to, for example, generate some of context associated with an error for the first LLM 130a to explain the error.”).
Agrawal fails to explicitly disclose but Joshi teaches:
encoding, by the device, using a first artificial intelligence model, features of the temporal context data in the temporal-based modality into an embedding space of a second artificial intelligence model that is configured to receive input according to the text-based modality, wherein the second artificial intelligence model is different from the first artificial intelligence model; ([0036]; regarding, “The embeddings unit 302 is configured to access the prompt information stored in the privacy preserving datastore 204 and to provide each prompt as an input to the embeddings language model 304 to obtain corresponding embeddings vectors. The embeddings language model 304 is a natural language model (NLP) that is configured to analyze a textual input and to output embeddings vectors”; [0037]; regarding, “The data clustering unit 308 analyzes the embedding vectors using one or more privacy preserving clustering algorithms 310. The privacy preserving clustering algorithms 310 group embedding vectors representing the user prompts into groups having similar characteristics. In some implementations a k-means clustering algorithm that partitions the embedding vectors into k clusters in which each embedding vector belongs to a cluster having a nearest mean to the embedding vector.”);
…each prompt is generated based on:… a respective first embedding vector of the respective first embedding vectors generated from the respective temporal context data associated with the alert message associated with the prompt; ([0032]; regarding, “The prompt anonymization pipeline 124 analyzes and anonymizes the prompt data stored in the privacy preserving datastore 204 using the prompt data analysis unit 206. The prompt data analysis unit 206 accesses the prompt data stored in the privacy preserving datastore 204, determines embedding vectors for the prompt data, analyzes the embeddings to identify clusters, and summarizes the clusters to generate anonymized prompt data to be stored in the anonymized prompt data datastore 208.”).
Therefore, it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to which said subject matter pertains to combine Agrawal with the teachings of Joshi. Doing so can improve predictions provided by the model. (Joshi, [0054]).
Claims 17-19 are rejected under 35 U.S.C. 103 under the same grounds of rejection as claims 8, 9, and 11, respectively.
Regarding Claim 20, Agrawal in view of Joshi teaches the medium of claim 17 as cited above. Agrawal in view of Joshi further teaches:
wherein the first artificial intelligence model comprises a deep learning model, (Joshi, [0036]; regarding, “The embeddings language model 304 is a natural language model (NLP)”).
the second artificial intelligence model comprises a large language model, (Joshi, [0039]; regarding, “In one implementation, the summarization model 314 is an LLM that is fine-tuned using DP on each cluster of embedding vectors.”).
and the third artificial intelligence model comprises a retrieval augmented generation model. (Agrawal, [0035]; regarding, “various implementations of the systems and methods of the present disclosure can advantageously employ one or more LLMs”; [0036]; regarding, “The context may include… portions of the log that are close or more related to the error message… and/or additional information… relevant to the error or useful for one or more LLMs to explain the error message…. the additional information may be generated by retrieving data”; [0037]; regarding, “Based on the error message, the context associated with the error, and/or the instructions, the system may generate a prompt for an LLM.”; [0082]; regarding, “[t]he context generation module 106 may further vectorize the plurality of portions of the set of documents to generate a plurality of vectors.”).
Response to Arguments
Applicant’s arguments, filed 12/31/2025, have been fully considered.
With respect to the 35 U.S.C. 101 rejections of claims 5-7, 13-14, and 17-19, the amendments overcome the rejections.
With respect to the 35 U.S.C. 103 rejections of independent claim 1, and similarly independent claims 12 and 16, the arguments have been considered, and new grounds of rejection have been provided. Please see the detailed rejection above.
Newly cited reference Joshi, in combination with Agrawal, teaches the newly claimed subject matter.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATHEW GUSTAFSON whose telephone number is (571)272-5273. The examiner can normally be reached Monday-Friday 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bryce Bonzo can be reached at (571) 272-3655. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.D.G./Examiner, Art Unit 2113
/BRYCE P BONZO/Supervisory Patent Examiner, Art Unit 2113