DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Applicant’s submission filed on 12/09/2025 has been entered. The submission overcomes the prior objections to the drawings, the prior rejections of claims 1-9 under 35 U.S.C. § 112(b), and the prior rejections of claims 18-20 under 35 U.S.C. § 101. Claims 1-20 are pending.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 09/16/2025 and 12/09/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ramakrishna et al. (US 2018/0121808), hereinafter “Ramakrishna”, in view of Bhalla et al. (US 2020/0259700), hereinafter “Bhalla”, and further in view of Yogerst et al. (US 2024/0202457), hereinafter “Yogerst”.
Regarding claims 1, 10, and 18, Ramakrishna teaches:
A computer-implemented method for diagnosing errors in a communication system (see Ramakrishna, Fig. 7, par. [0098]: FIG. 7 illustrates an example simplified procedure for suggesting a corrective action via a chatbot session in accordance with one or more embodiments described herein), or
a system for diagnosing errors in a communication system, comprising:
a user interface (see Ramakrishna, Fig. 3, par. [0051]: while administrator device 306 is shown as executing UI process 322 that communicates with chatbot server 304, chatbot server 304 may instead execute UI process 322, itself, in other embodiments);
one or more processors (see Ramakrishna, Fig. 2, par. [0029]: Device 200 comprises one or more network interfaces 210, one or more processors 220); and
a non-transitory, computer-readable memory coupled to the one or more processors and the user interface (see Ramakrishna, Figs. 2 and 3, par. [0029]: Device 200 comprises one or more network interfaces 210, one or more processors 220, and a memory 240 interconnected by a system bus 250), the memory storing instructions thereon that, when executed by the one or more processors, cause the one or more processors (see Ramakrishna, Fig. 2, par. [0031]: The memory 240 comprises a plurality of storage locations that are addressable by the processor(s) 220 and the network interfaces 210 for storing software programs and data structures associated with the embodiments described herein. The processor 220 may comprise necessary elements or logic adapted to execute the software programs and manipulate the data structures 245. An operating system 242 (e.g., the Internetworking Operating System, or IOS®, of Cisco Systems, Inc., another operating system, etc.), portions of which are typically resident in memory 240 and executed by the processor(s), functionally organizes the node by, inter alia, invoking network operations in support of software processors and/or services executing on the device. These software processors and/or services may comprise a chatbot process 248), or
a tangible machine readable medium comprising instructions for diagnosing errors in a communication system that, when executed, cause a machine (see Ramakrishna, Fig. 2, par. [0029]: Device 200 comprises one or more network interfaces 210, one or more processors 220, and a memory 240 interconnected by a system bus 250, and see par. [0031]: The memory 240 comprises a plurality of storage locations that are addressable by the processor(s) 220 and the network interfaces 210 for storing software programs and data structures associated with the embodiments described herein. The processor 220 may comprise necessary elements or logic adapted to execute the software programs and manipulate the data structures 245. An operating system 242 (e.g., the Internetworking Operating System, or IOS®, of Cisco Systems, Inc., another operating system, etc.), portions of which are typically resident in memory 240 and executed by the processor(s), functionally organizes the node by, inter alia, invoking network operations in support of software processors and/or services executing on the device. These software processors and/or services may comprise a chatbot process 248) to at least:
receive, from the communication system, an error indication representing an error associated with a communication system component (see Ramakrishna, Figs. 3 and 4A, par. [0044]: monitoring agent 308 may send symptom alerts 310 to chatbot server 304. In some embodiments, monitoring agent 308 may use any number of predefined thresholds, to trigger the sending of a symptom alert 310 to chatbot server 304. For example, if the CPU usage of monitored device 302 exceeds a reporting threshold, monitoring agent 308 may generate and send a symptom alert 310 to chatbot server 304. In further embodiments, symptom alerts 310 may include only raw measurements from monitored device 302 and chatbot server 304 may determine whether or not the measurements are symptomatic of a malfunction. In yet another embodiment, symptom alerts 310 may include an explicit set of one or more symptoms entered by the user of monitored device 302 to monitoring agent 308. For example, the user of monitored device 302 may enter a bug report that is then passed to chatbot server 304 as a symptom alert 310, and see par. [0052]: chatbot server 304 may initially operate in a mode referred to herein as a symptom(s) collection phase. In this phase, chatbot server 304 may receive and process incoming chat message notifications or alerts sent by external agents integrated into the chatbot platform (e.g., agent 308 of monitored device 302, etc.). These messages will have key characteristics that identify the situation, the impacted entity (e.g., host device, application service, etc.), the state of the entity, the severity of the situation, and any additional metadata; in this case, a symptom corresponds to an error indication),
generate, by executing a machine learning (ML) chatbot, an error diagnosis corresponding to the error indication (see Ramakrishna, Fig. 2, par. [0033]: Chatbot process 248 includes computer executable instructions that, when executed by processor(s) 220, cause device 200 to operate as part of a monitoring and diagnostic infrastructure within the network. In various embodiments, chatbot process 248 may utilize machine learning techniques, to perform diagnostic and recommendation functions as part of the infrastructure, and see Fig. 3, par. [0048]: triaging process 312 may use a machine learning model to predict a corrective action/step that should be taken in view of the symptoms reported in the chatbot session to administrator device 306; in this case, a corrective action corresponds to an error diagnosis), wherein the ML chatbot is trained with a plurality of training error indications as inputs to generate a plurality of training error diagnoses as outputs (see Ramakrishna, Fig. 3, par. [0048]: Such a model may be trained (e.g., periodically, on demand, continuously, etc.), for example, based on a history of previously reported symptoms in symptom storage 316, a history of actions previously taken to address the previously reported symptoms in action storage 318, and a history of triage session messages in message storage 320. In other words, triaging process 312 may model the associations between reported symptoms, actions taken to address the reported symptoms, and feedback regarding the taken actions (e.g., from message storage 320), to predict a suggested corrective action given an input set of one or more symptoms),
However, Ramakrishna does not teach:
generating, by one or more processors, one or more embeddings associated with the error indication, the one or more embeddings being vector representations of a text string included in the error indication;
wherein generating the error diagnosis corresponding to the error indication based upon the one or more embeddings,
display the error diagnosis on the user interface for viewing by a user.
Bhalla, in the same field of endeavor, teaches:
display the error diagnosis on the user interface for viewing by a user (see Bhalla, Fig. 5, par. [0079]: FIG. 5 is an operator view screenshot that is brought up by clicking on Operator View in FIG. 4. This brings up a list of current issues. The list can include whether the issue is predicted or ongoing, a ticket ID, a time, a description, and a severity level, and see Fig. 8, par. [0082]: FIG. 8 is a screenshot of the restoration action recommended for the ongoing issue. Here, the user selects the star next to the root cause analysis to bring up a list of restoration actions. This includes step-by-step actions in detail for an operator to troubleshoot to resolve the ongoing issue).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the computer-implemented method, system, or tangible machine-readable medium of Ramakrishna with the displaying of the error diagnosis of Bhalla with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of reducing the need for subject matter experts in network management (see Bhalla, par. [0026]).
However, the combination of Ramakrishna in view of Bhalla does not teach:
generating, by one or more processors, one or more embeddings associated with the error indication, the one or more embeddings being vector representations of a text string included in the error indication;
wherein generating the error diagnosis corresponding to the error indication based upon the one or more embeddings,
Yogerst, in the same field of endeavor, teaches:
generating, by one or more processors, one or more embeddings associated with the error indication, the one or more embeddings being vector representations of a text string included in the error indication (see Yogerst, par. [0040]: The system may utilize, generate, or handle feature inputs. In disclosed embodiments, a feature input may include any individual measurable property or characteristic of a phenomenon. For example, a feature input may include numeric data, such as statistical data, or may include structural features, such as strings or graphs. In some embodiments, a feature input may include a dataset that includes data prepared for processing through a machine learning model. For example, a feature input may include a plurality of text strings of sentences within chatbot interactions that are related to an error type. In some embodiments, a feature input may be vectorized within a vector space known as a feature space. A feature input may directly refer to specific datasets (such as those listed in data structure 100 under dataset identifiers 120), and may have a respective label (such as those listed in data structure 100 under dataset labels 122). By generating and processing feature inputs through machine learning models, the system enables evaluation of datasets for the purpose of machine learning training and inputs. For example, processing feature inputs through a machine learning model enables calculation of model error indicators for a given dataset through a machine learning model, enabling more accurate and efficient labeling decisions for future labels, and see par. [0055]: The system may use vector representations of data. In disclosed embodiments, a vector representation may include a representation of data in vector space. For example, vector representations may include word embedding of textual data into vectors; in this case, the text strings as feature input correspond to embeddings);
wherein generating the error diagnosis corresponding to the error indication based upon the one or more embeddings (see Yogerst, par. [0041]: The system may generate an output for the machine learning model. In disclosed embodiments, an output may include a dataset or a datum that has been produced from a machine learning model. In some embodiments, outputs may include predictions to phenomena based on feature inputs. For example, a machine learning model may receive, as input, chatbot text data from a user regarding satisfaction with the chatbot service. The system may categorize this input data as being related to user satisfaction and may further process this input data to predict a satisfaction level for the user. In some embodiments, an output may be used for training purposes. For example, an output may include a prediction that is compared against a training dataset to determine an error for the prediction. Based on this error, the system may, alternatively or additionally, determine a model error indicator, and see par. [0055]: The system may use vector representations of data. In disclosed embodiments, a vector representation may include a representation of data in vector space. For example, vector representations may include word embedding of textual data into vectors; in this case, errors are predicted (i.e. errors diagnosed) based on the input which includes text strings (i.e. embeddings)),
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the computer-implemented method, system, or tangible machine-readable medium of the combination of Ramakrishna in view of Bhalla with the embeddings of Yogerst with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of improved labeling and improved machine learning outcomes (see Yogerst, par. [0052]).
Regarding claims 2 and 11, the combination of Ramakrishna in view of Bhalla, and further in view of Yogerst, teaches the computer-implemented method or system. Ramakrishna further teaches:
wherein the error diagnosis includes (ii) a predicted solution to the error (see Ramakrishna, Fig. 2, par. [0033]: Chatbot process 248 includes computer executable instructions that, when executed by processor(s) 220, cause device 200 to operate as part of a monitoring and diagnostic infrastructure within the network. In various embodiments, chatbot process 248 may utilize machine learning techniques, to perform diagnostic and recommendation functions as part of the infrastructure, and see Fig. 3, par. [0048]: triaging process 312 may use a machine learning model to predict a corrective action/step that should be taken in view of the symptoms reported in the chatbot session to administrator device 306; in this case, a predicted corrective action corresponds to a predicted solution).
Ramakrishna does not teach, but Bhalla teaches:
wherein the error diagnosis includes (i) a predicted source of the error associated with the communication system (see Bhalla, Figs. 6 and 7, pars. [0080-0081]: FIG. 6 is a detailed view of the ongoing issue from FIG. 5. Here, the user selects the ongoing issue in the list. FIG. 6 includes details on the ongoing issue, including an overall chart, data collection, root cause analysis, device details, and a topology view. The chart illustrates a PM distribution over time, and a user can select different PM values. The values are shown for detected values, predicted values, a threshold, etc. The device details provide details of the network device affected by the ongoing issue. FIG. 7 is a detailed view of the root cause analysis from FIG. 6. Here, the user selects the root cause analysis. In FIG. 7, the PNO software application 50 determines what root cause best fits the ongoing issue. Here, the diagnostic is that there is a fiber break or an intermediate connector disconnected between neighboring sites. Also, the PNO software application 50 shows a 90% predictability factor meaning there is high confidence this is the root cause. Note, if there were an equipment failure, the root cause would not be designated as a fiber break or connector; in this case, a predicted root cause corresponds to a predicted source of the error).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the error diagnosis of Ramakrishna with the error diagnosis including a predicted source of the error of Bhalla with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of reducing the need for subject matter experts in network management (see Bhalla, par. [0026]).
Regarding claims 3 and 12, the combination of Ramakrishna in view of Bhalla, and further in view of Yogerst, teaches the computer-implemented method or system. Ramakrishna further teaches:
wherein the instructions, when executed, further cause the one or more processors (see Ramakrishna, Fig. 2, par. [0031]: The memory 240 comprises a plurality of storage locations that are addressable by the processor(s) 220 and the network interfaces 210 for storing software programs and data structures associated with the embodiments described herein. The processor 220 may comprise necessary elements or logic adapted to execute the software programs and manipulate the data structures 245. An operating system 242 (e.g., the Internetworking Operating System, or IOS®, of Cisco Systems, Inc., another operating system, etc.), portions of which are typically resident in memory 240 and executed by the processor(s), functionally organizes the node by, inter alia, invoking network operations in support of software processors and/or services executing on the device. These software processors and/or services may comprise a chatbot process 248) to:
receive a user input including (ii) a second indication corresponding to an implementation of the predicted solution to the error (see Ramakrishna, Fig. 4E, pars. [0074-0076]: in FIG. 4E, the user of administrator device 306 may also request performance of recommended action 408. For example, assume that chatbot server 304 suggests rebooting the malfunctioning host device. In turn, the user of administrator device 306 may issue an action request 410 as follows: User: Host fluentd13.zeus.io RESTART. In turn, chatbot server 304 may issue action command 412 to monitored device 302, to cause monitored device 302 to restart); and
re-train the ML chatbot based upon the user input (see Ramakrishna, Fig. 4F, pars. [0080-0083]: the user of administrator device 306 may provide feedback 416 to chatbot server304 regarding the taken action. For example, if the user believes that the issue has been resolved by the action, the user may simply provide a summary of the triage session and end the session as follows: User: Triage session Summary—Host fluentd13.ciscozeus.io—HIGH Memory, SSH not accessible, PING not reachable, Host RESTART. User: Triage session END. However, if the actions did not satisfactorily address the issue, the user of administrator device 306 may request performance of additional actions, as needed (e.g., via feedback 416). In turn, chatbot server 304 may use the contents of the triage session for the reported symptoms to train or retrain the machine learning model, as needed. For example, chatbot server 304 may update the model based on the assessed symptom(s) reported to the user, the steps/actions taken during the triage session for the symptom(s), and potentially any feedback from the user regarding the steps/actions (e.g., indicating whether a given action addressed the underlying issue)).
Ramakrishna does not teach, but Bhalla teaches:
receive a user input including (i) a first indication corresponding to the predicted source of the error associated with the communication system (see Bhalla, Figs. 6 and 7, pars. [0080-0081]: FIG. 6 is a detailed view of the ongoing issue from FIG. 5. Here, the user selects the ongoing issue in the list. FIG. 6 includes details on the ongoing issue, including an overall chart, data collection, root cause analysis, device details, and a topology view. The chart illustrates a PM distribution over time, and a user can select different PM values. The values are shown for detected values, predicted values, a threshold, etc. The device details provide details of the network device affected by the ongoing issue. FIG. 7 is a detailed view of the root cause analysis from FIG. 6. Here, the user selects the root cause analysis. In FIG. 7, the PNO software application 50 determines what root cause best fits the ongoing issue).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the user input of Ramakrishna with the user input including an indication corresponding to the predicted source of error of Bhalla with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of reducing the need for subject matter experts in network management (see Bhalla, par. [0026]).
Regarding claims 4, 13, and 20, the combination of Ramakrishna in view of Bhalla, and further in view of Yogerst, teaches the computer-implemented method, system, or tangible machine-readable medium. Ramakrishna further teaches:
wherein the error is a first error type of a plurality of error types (see Ramakrishna, par. [0043]: monitored device 302 may execute a monitoring agent 308 that is configured to monitor the state of device 302 for symptoms of malfunctions (e.g., misconfigurations, misbehaving applications or hardware, network problems, user-reported issues, or the like). Such state information may include, but is not limited to, device resource usage (e.g., CPU usage, memory usage, etc.), application-specific information (e.g., response times, device driver signaling, etc.), network-related information (e.g., upstream or downstream bandwidth, losses, jitter, etc.), or any other information that may be indicative of the health of monitored device 302; in this case, there may be many types of symptoms (i.e. a plurality of error types)), the error is a first instance of the first error type (see Ramakrishna, par. [0044]: monitoring agent 308 may send symptom alerts 310 to chatbot server 304. In some embodiments, monitoring agent 308 may use any number of predefined thresholds, to trigger the sending of a symptom alert 310 to chatbot server 304. For example, if the CPU usage of monitored device 302 exceeds a reporting threshold, monitoring agent 308 may generate and send a symptom alert 310 to chatbot server 304; in this case, a current symptom (i.e. a first instance) may be reported), and the predicted solution to the error includes (i) a predicted solution to the first instance of the first error type (see Ramakrishna, par. [0050]: chatbot server 304 may automatically initiate a corrective action, in response to receiving a symptom alert 310. For example, the user of administrator device 306 may establish one or more rules on chatbot server 304 that cause server 304 to send out a corresponding action command 326 based on the received symptom(s); in this case, an action command (i.e. predicted solution) is determined based on a current symptom) and (ii) a predicted solution to the first error type (see Ramakrishna, Figs. 4C and 4D, pars. [0064-0065]: chatbot server 304 may use the current symptom(s) under review as input to its machine learning model that tracks previously reported symptoms, triage actions taken, and their effectiveness to resolve the symptoms. In some cases, chatbot server 304 may do so in response to receiving request 404 to enter the triage mode. In other embodiments, chatbot server 304 may do so beforehand (e.g., in response to first receiving the symptoms from the monitored device). As shown in FIG. 4D, based on the analysis of the reported symptoms, chatbot 404 may send the predicted action to administrator device 306 as a recommended action 408 via the chatbot session and/or provide advice to administrator device 306; in this case, using previously reported symptoms as part of the machine learning for predicted actions corresponds to the predicted solution including a predicted solution to an error type in general).
Regarding claims 5 and 14, the combination of Ramakrishna in view of Bhalla, and further in view of Yogerst, teaches the computer-implemented method or system.
Ramakrishna does not teach, but Bhalla teaches:
wherein the instructions, when executed, further cause the one or more processors to generate the error diagnosis by:
generating one or more embeddings associated with the error indication (see Bhalla, Fig. 3, par. [0049]: The workflow generally includes an incident occurring or predicted to occur in the network 12. The PNO software application 50 assigns an issue file to an incident. The issue file is used for the remediation of the issue. Feedback is gathered through the process, and the issue file can be updated based on the feedback. Generally, operations personnel are guided by the issue file to resolve an incident whereas experienced personnel can edit, add, revise issue files as well as rate and assign issue files to quarantined incidents (ones that the PNO software application 50 has trouble assigning issue files to); in this case, the issue file corresponds to one or more embeddings);
comparing the one or more embeddings to a dictionary of embeddings (see Bhalla, par. [0069]: The workflow management module 58 can create new issue files either manually, based on an existing incident, or automatic. The manual creation can be based on user prompting. For either the manual creation or editing existing incidents, logs, PMs, and/or KPIs are selected, AI/ML, can evaluate existing issue files for similarities, and see par. [0071]: The automatic creation can be based on analytics using AI/ML to determine pattern evaluation found in the network automatically. If it is determined to be closely related to a known issue, it can select the known issue; in this case, the existing issue files correspond to a dictionary of embeddings); and
generating the error diagnosis based upon the comparing (see Bhalla, Fig. 17, par. [0089]: responsive to a selection of an issue that is one of the ongoing issues and the predicted issues in the list, presenting a root cause analysis of the issue including one or more diagnosis step S3).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the generating the error diagnosis of Ramakrishna with the generating the error diagnosis based upon comparing embeddings of Bhalla with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of reducing the need for subject matter experts in network management (see Bhalla, par. [0026]).
Regarding claims 6 and 15, the combination of Ramakrishna in view of Bhalla, and further in view of Yogerst, teaches the computer-implemented method or system. Ramakrishna further teaches:
wherein the ML chatbot is further trained using a plurality of user inputs corresponding to the plurality of training error indications (see Ramakrishna, Fig. 3, par. [0048]: Such a model may be trained (e.g., periodically, on demand, continuously, etc.), for example, based on a history of previously reported symptoms in symptom storage 316, a history of actions previously taken to address the previously reported symptoms in action storage 318, and a history of triage session messages in message storage 320. In other words, triaging process 312 may model the associations between reported symptoms, actions taken to address the reported symptoms, and feedback regarding the taken actions (e.g., from message storage 320), to predict a suggested corrective action given an input set of one or more symptoms, and see par. [0044]: symptom alerts 310 may include an explicit set of one or more symptoms entered by the user of monitored device 302 to monitoring agent 308. For example, the user of monitored device 302 may enter a bug report that is then passed to chatbot server 304 as a symptom alert 310; in this case, symptoms used for training may be symptoms entered by a user (i.e. user inputs)), and the instructions, when executed, further cause the one or more processors (see Ramakrishna, Fig. 2, par. [0031]: The memory 240 comprises a plurality of storage locations that are addressable by the processor(s) 220 and the network interfaces 210 for storing software programs and data structures associated with the embodiments described herein. The processor 220 may comprise necessary elements or logic adapted to execute the software programs and manipulate the data structures 245. An operating system 242 (e.g., the Internetworking Operating System, or IOS®, of Cisco Systems, Inc., another operating system, etc.), portions of which are typically resident in memory 240 and executed by the processor(s), functionally organizes the node by, inter alia, invoking network operations in support of software processors and/or services executing on the device. These software processors and/or services may comprise a chatbot process 248) to:
generate, by executing the ML chatbot, the error diagnosis based upon the error indication and a user input (see Ramakrishna, Fig. 2, par. [0033]: Chatbot process 248 includes computer executable instructions that, when executed by processor(s) 220, cause device 200 to operate as part of a monitoring and diagnostic infrastructure within the network. In various embodiments, chatbot process 248 may utilize machine learning techniques, to perform diagnostic and recommendation functions as part of the infrastructure, and see Fig. 3, par. [0048]: triaging process 312 may use a machine learning model to predict a corrective action/step that should be taken in view of the symptoms reported in the chatbot session to administrator device 306, and see par. [0044]: symptom alerts 310 may include an explicit set of one or more symptoms entered by the user of monitored device 302 to monitoring agent 308. For example, the user of monitored device 302 may enter a bug report that is then passed to chatbot server 304 as a symptom alert 310; in this case, a corrective action corresponds to an error diagnosis).
Regarding claims 7 and 16, the combination of Ramakrishna in view of Bhalla, and further in view of Yogerst, teaches the computer-implemented method or system.
Ramakrishna does not teach, but Bhalla teaches:
wherein the user input includes at least one of (i) a verbal input or (ii) a textual input (see Bhalla, Fig. 24, par. [0133]: The I/O interfaces 604 may be used to receive user input from and/or for providing system output to one or more devices or components. User input may be provided via, for example, a keyboard, touchpad, and/or a mouse), and the instructions, when executed, further cause the one or more processors to:
generate, by executing the ML chatbot, the error diagnosis in a style representative of the user input (see Bhalla, Fig. 23, par. [0131]: The process 400 can further include obtaining operator input before the one of allow, block, and modify the actions; and providing the operator input to the ML algorithms for feedback therein, and see par. [0065]: The feedback can be used to receive a specific Issue description, a specific root cause, new/different root causes, a selection on the helpfulness of the prescription, an impact factor, etc. The idea of the feedback is to obtain data from the user and feed this back to impact the issue prescription and root cause identification, and see par. [0090]: The proactive network operations process 80 can include, responsive to feedback on the root cause analysis and the prescriptive actions, updating the one or more diagnosis in the corresponding issue file; in this case, operator input in the form of feedback may be used for generating diagnoses, corresponding to the error diagnosis being generated in a style representative of the user input).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the user input and error diagnosis generation of Ramakrishna with the user input being a text input and error diagnosis generation in a style representative of the user input of Bhalla with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of reducing the need for subject matter experts in network management (see Bhalla, par. [0026]).
Regarding claims 8, 17, the combination of Ramakrishna in view of Bhalla, and further in view of Yogerst, teaches the computer-implemented method or system. Ramakrishna further teaches:
wherein the instructions, when executed, further cause the one or more processors to (see Ramakrishna, Fig. 2, par. [0031]: The memory 240 comprises a plurality of storage locations that are addressable by the processor(s) 220 and the network interfaces 210 for storing software programs and data structures associated with the embodiments described herein. The processor 220 may comprise necessary elements or logic adapted to execute the software programs and manipulate the data structures 245. An operating system 242 (e.g., the Internetworking Operating System, or IOS®, of Cisco Systems, Inc., another operating system, etc.), portions of which are typically resident in memory 240 and executed by the processor(s), functionally organizes the node by, inter alia, invoking network operations in support of software processors and/or services executing on the device. These software processors and/or services may comprise a chatbot process 248) generate the error diagnosis by:
inputting a plurality of documentation corresponding to the communication system into the ML chatbot (see Ramakrishna, par. [0048]: triaging process 312 may use a machine learning model to predict a corrective action/step that should be taken in view of the symptoms reported in the chatbot session to administrator device 306. Such a model may be trained (e.g., periodically, on demand, continuously, etc.), for example, based on a history of previously reported symptoms in symptom storage 316, a history of actions previously taken to address the previously reported symptoms in action storage 318, and a history of triage session messages in message storage 320; in this case, a history of previously reported symptoms being used for training corresponds to inputting documentation into the ML chatbot); and
generating, by executing the ML chatbot, the error diagnosis based upon the error indication and the plurality of documentation (see Ramakrishna, Fig. 2, par. [0033]: Chatbot process 248 includes computer executable instructions that, when executed by processor(s) 220, cause device 200 to operate as part of a monitoring and diagnostic infrastructure within the network. In various embodiments, chatbot process 248 may utilize machine learning techniques, to perform diagnostic and recommendation functions as part of the infrastructure, and see Fig. 3, par. [0048]: triaging process 312 may use a machine learning model to predict a corrective action/step that should be taken in view of the symptoms reported in the chatbot session to administrator device 306. Such a model may be trained (e.g., periodically, on demand, continuously, etc.), for example, based on a history of previously reported symptoms in symptom storage 316, a history of actions previously taken to address the previously reported symptoms in action storage 318, and a history of triage session messages in message storage 320. In other words, triaging process 312 may model the associations between reported symptoms, actions taken to address the reported symptoms, and feedback regarding the taken actions (e.g., from message storage 320), to predict a suggested corrective action given an input set of one or more symptoms).
Regarding claim 9, the combination of Ramakrishna in view of Bhalla, and further in view of Yogerst, teaches the computer-implemented method. Ramakrishna further teaches:
wherein the communication system component is a chatbot configured to provide a user with automated responses (see Ramakrishna, par. [0038]: assume that a user is engaged in a chat session, such as an Internet Relay Chat (IRC) session or any other form of chat session. At the other end of the session may be a Chabot application that is configured to respond to communications sent by the user (e.g., the chatbot may respond with an answer to the user's question, etc.). Such a technology holds promise in the field of network monitoring and troubleshooting, as chatbots can be built to send automated alert event notifications from the monitored system to specific chat channels, and see par. [0050]: chatbot server 304 may automatically initiate a corrective action, in response to receiving a symptom alert 310. For example, the user of administrator device 306 may establish one or more rules on chatbot server 304 that cause server 304 to send out a corresponding action command 326 based on the received symptom(s)).
Ramakrishna does not teach, but Bhalla teaches:
wherein the automated responses are provided to the user in response to at least one of: (i) verbal queries or (ii) textual queries (see Bhalla, Fig. 23, par. [0131]: The process 400 can further include obtaining operator input before the one of allow, block, and modify the actions; and providing the operator input to the ML algorithms for feedback therein, and see Fig. 24, par. [0133]: The I/O interfaces 604 may be used to receive user input from and/or for providing system output to one or more devices or components. User input may be provided via, for example, a keyboard, touchpad, and/or a mouse, and see par. [0045]: The PNO software application can track each ticket and determine probable root causes based on previous training. The probable root causes can be provided to the operator for selection, and the selection can provide feedback by editing existing root causes or adding new ones. Based on the root cause selection, the correct remediation actions can be presented for execution, either manually or automatically, such as using the controller 22).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the automated responses of Ramakrishna with the responses being in response to a textual query of Bhalla with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of reducing the need for subject matter experts in network management (see Bhalla, par. [0026]).
Regarding claim 19, the combination of Ramakrishna in view of Bhalla, and further in view of Yogerst, teaches the tangible machine-readable medium. Ramakrishna further teaches:
wherein the error diagnosis includes (ii) a predicted solution to the error (see Ramakrishna, Fig. 2, par. [0033]: Chatbot process 248 includes computer executable instructions that, when executed by processor(s) 220, cause device 200 to operate as part of a monitoring and diagnostic infrastructure within the network. In various embodiments, chatbot process 248 may utilize machine learning techniques, to perform diagnostic and recommendation functions as part of the infrastructure, and see Fig. 3, par. [0048]: triaging process 312 may use a machine learning model to predict a corrective action/step that should be taken in view of the symptoms reported in the chatbot session to administrator device 306; in this case, a predicted corrective action corresponds to a predicted solution), and the instructions, when executed, further cause the machine to (see Ramakrishna, Fig. 2, par. [0031]: The memory 240 comprises a plurality of storage locations that are addressable by the processor(s) 220 and the network interfaces 210 for storing software programs and data structures associated with the embodiments described herein. The processor 220 may comprise necessary elements or logic adapted to execute the software programs and manipulate the data structures 245. An operating system 242 (e.g., the Internetworking Operating System, or IOS®, of Cisco Systems, Inc., another operating system, etc.), portions of which are typically resident in memory 240 and executed by the processor(s), functionally organizes the node by, inter alia, invoking network operations in support of software processors and/or services executing on the device. These software processors and/or services may comprise a chatbot process 248) at least:
receive a user input including (ii) a second indication corresponding to an implementation of the predicted solution to the error (see Ramakrishna, Fig. 4E, pars. [0074-0076]: in FIG. 4E, the user of administrator device 306 may also request performance of recommended action 408. For example, assume that chatbot server 304 suggests rebooting the malfunctioning host device. In turn, the user of administrator device 306 may issue an action request 410 as follows: User: Host fluentd13.zeus.io RESTART. In turn, chatbot server 304 may issue action command 412 to monitored device 302, to cause monitored device 302 to restart); and
re-train the ML chatbot based upon the user input (see Ramakrishna, Fig. 4F, pars. [0080-0083]: the user of administrator device 306 may provide feedback 416 to chatbot server304 regarding the taken action. For example, if the user believes that the issue has been resolved by the action, the user may simply provide a summary of the triage session and end the session as follows: User: Triage session Summary—Host fluentd13.ciscozeus.io—HIGH Memory, SSH not accessible, PING not reachable, Host RESTART. User: Triage session END. However, if the actions did not satisfactorily address the issue, the user of administrator device 306 may request performance of additional actions, as needed (e.g., via feedback 416). In turn, chatbot server 304 may use the contents of the triage session for the reported symptoms to train or retrain the machine learning model, as needed. For example, chatbot server 304 may update the model based on the assessed symptom(s) reported to the user, the steps/actions taken during the triage session for the symptom(s), and potentially any feedback from the user regarding the steps/actions (e.g., indicating whether a given action addressed the underlying issue)).
Ramakrishna does not teach, but Bhalla teaches:
wherein the error diagnosis includes (i) a predicted source of the error associated with the communication system (see Bhalla, Figs. 6 and 7, pars. [0080-0081]: FIG. 6 is a detailed view of the ongoing issue from FIG. 5. Here, the user selects the ongoing issue in the list. FIG. 6 includes details on the ongoing issue, including an overall chart, data collection, root cause analysis, device details, and a topology view. The chart illustrates a PM distribution over time, and a user can select different PM values. The values are shown for detected values, predicted values, a threshold, etc. The device details provide details of the network device affected by the ongoing issue. FIG. 7 is a detailed view of the root cause analysis from FIG. 6. Here, the user selects the root cause analysis. In FIG. 7, the PNO software application 50 determines what root cause best fits the ongoing issue. Here, the diagnostic is that there is a fiber break or an intermediate connector disconnected between neighboring sites. Also, the PNO software application 50 shows a 90% predictability factor meaning there is high confidence this is the root cause. Note, if there were an equipment failure, the root cause would not be designated as a fiber break or connector; in this case, a predicted root cause corresponds to a predicted source of the error),
receive a user input including (i) a first indication corresponding to the predicted source of the error associated with the communication system (see Bhalla, Figs. 6 and 7, pars. [0080-0081]: FIG. 6 is a detailed view of the ongoing issue from FIG. 5. Here, the user selects the ongoing issue in the list. FIG. 6 includes details on the ongoing issue, including an overall chart, data collection, root cause analysis, device details, and a topology view. The chart illustrates a PM distribution over time, and a user can select different PM values. The values are shown for detected values, predicted values, a threshold, etc. The device details provide details of the network device affected by the ongoing issue. FIG. 7 is a detailed view of the root cause analysis from FIG. 6. Here, the user selects the root cause analysis. In FIG. 7, the PNO software application 50 determines what root cause best fits the ongoing issue).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the error diagnosis and the user input of Ramakrishna with the error diagnosis including a predicted source of the error and the user input including an indication corresponding to the predicted source of error of Bhalla with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of reducing the need for subject matter experts in network management (see Bhalla, par. [0026]).
Response to Arguments
Applicant’s arguments with respect to claims 1, 10, and 18 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Basam (US 2021/0166244) teaches a method for a proactive customer support system including monitoring messages, determining error type, generating an issue ticket, and sending instructions.
Chandran et al. (US 2024/0430702) teaches a method for building a machine learning model for detecting when a distributed unit will shut down and performing remedial action.
Lozano et al. (US 2022/0019496) teaches an error documentation system including tools to collect and analyze application error data for individual development teams and tools to share documented defects and solutions across development teams during any stage of the development cycle.
Lucioni et al. (US 2024/0256429) teaches techniques that use machine learning to triage issues by classifying the issues into impact levels and obtaining natural language descriptions of the issues.
Sun et al. (US 2020/0133756) teaches a method of application error diagnosis that may include obtaining information related to an error in an application, generating a code of the information related to the error in the application, and determining a similarity between the code and at least one predetermined code.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CALEB J BALLOWE whose telephone number is (571)270-0410. The examiner can normally be reached MON-FRI 7:30-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nishant B. Divecha can be reached at (571) 270-3125. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C.J.B./Examiner, Art Unit 2419
/Nishant Divecha/Supervisory Patent Examiner, Art Unit 2419