DETAILED ACTION
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. This action is in response to the following communication: Amendment to application No. 18/402,479 filed on 12/15/2025.
3. Claims 1, 8 and 14 have been amended.
Claims 1-20 now remain pending.
Claims 1, 8 and 14 are independent claims.
Specification Objection
4. The prior objection to the specification is withdrawn in view of the corrections.
Claim Rejections - 35 USC § 102
5. The prior rejection under 35 U.S.C. § 102 is withdrawn in view of the amendments.
Response to Arguments
6. Applicant’s arguments with respect to newly amended independent claims 1, 8 and 14 and claims 2-7, 9-13 and 15-20 on pages 10-14 of the response have been fully considered, but they are moot in view of the new ground(s) of rejection (see Balasubramanian, art newly made of record) as applied below, which further teaches such use.
Claim Rejections - 35 USC § 103
7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
8. Claims 1, 2 and 4-13 are rejected under 35 U.S.C. 103 as being unpatentable over Harutyunyan et al., U.S. Patent No. 12,056,002 (hereinafter Harutyunyan) in view of Balasubramanian et al., U.S. Patent No. 12,197,912 (hereinafter Balasubramanian).
In regards to claim 1, Harutyunyan teaches:
A system for analyzing software platform health, the system comprising: one or more memories; and one or more processors, communicatively coupled to the one or more memories, configured to: (Abstract, see the automated methods use machine learning to obtain rules defining relationships between probabilities of event types in log messages and performance problems identified by a key performance indicator (“KPI”) of the object. When a KPI violates a corresponding threshold, the rules are used to evaluate run time log messages that describe the probable root cause of the performance problem).
receive at least one documentation file associated with a software platform (column 8, lines 25-30, see the example log write instruction 602 also includes text strings and natural-language words and phrases that identify the level of importance of the log message 610 and type of event that triggered the log write instruction, such as “Repair session” argument 612).
receive a set of property indications associated with the software platform; provide the set of property indications to a clustering model to receive a second health indicator (column 7, line 66 – column 8, line 4, see in practice, a log write instruction may also include the name of the source of the log message (e.g., name of the application program, operating system and version, server computer, and network device) and may include the name of the log file to which the log message is recorded).
receive at least one log file associated with the software platform; provide the at least one log file to a machine learning model to receive a third health indicator (Abstract, see the automated methods use machine learning to obtain rules defining relationships between probabilities of event types in log messages and performance problems identified by a key performance indicator (“KPI”) of the object), (column 8, lines 25-30, see the example log write instruction 602 also includes text strings and natural-language words and phrases that identify the level of importance of the log message 610 and type of event that triggered the log write instruction, such as “Repair session” argument 612) and (column 6, lines 44-54, see the analytics engine 312 performs system health assessments by monitoring key performance indicators (“KPIs”) for problems with applications or other data center objects, maintains dynamic thresholds of metrics, and generates alerts in response to KPIs that violate corresponding thresholds. The analytics engine 312 uses machine learning (“ML”) as described below to generate models that are used to generate rules for interpreting degradation of a KPI or indicate the most influential dimensions features for a long-term explanation of those degradations).
receive a set of notifications associated with failed builds, manual changes, software incidents, or a combination thereof (column 24, lines 44-52, see in response to a run-time KPI value violating a corresponding KPI threshold, the analytics engine 312 sends an alert notification to the controller 310 that a KPI threshold violation has occurred and the controller 310 directs the user interface 302 to display an alert in GUI of a system administrators console or monitor).
apply a set of rules to the set of notifications to generate a suggested change to the software platform (column 24, lines 44-52, see in response to a run-time KPI value violating a corresponding KPI threshold, the analytics engine 312 sends an alert notification to the controller 310 that a KPI threshold violation has occurred and the controller 310 directs the user interface 302 to display an alert in GUI of a system administrators console or monitor. The analytics engine 312 determines log messages that describe the probable root cause of performance problem of a data center object).
output instructions for a user interface (UI) that includes the first health indicator, the second health indicator, the third health indicator, and the suggested change (column 4, lines 22-24, see FIGS. 26A-26B show an example graphical user interface that displays a list of objects, KPI threshold violations, log messages, and remedial measures) and (Fig. 26B, List of Objects in Data Center, Object 01-07, Alert, Remedial measures 2624, Object 03- Key Performance Indicators, Threshold violation) and (column 24, lines 44-52, see in response to a run-time KPI value violating a corresponding KPI threshold, the analytics engine 312 sends an alert notification to the controller 310 that a KPI threshold violation has occurred and the controller 310 directs the user interface 302 to display an alert in GUI of a system administrators console or monitor. The analytics engine 312 determines log messages that describe the probable root cause of performance problem of a data center object).
Harutyunyan doesn’t explicitly teach:
apply natural language processing to the at least one documentation file to generate a first health indicator that reflects readability or understandability associated with the documentation file.
However, Balasubramanian teaches such use: (column 8, lines 49-61, see the documentation assessor 107 also determine the consistency of usage of terminologies and readability using natural language processing techniques. Based on the determination the document assessor 107 may allot terminologies and readability score and a flow score for the documentation. Further, the completeness of expected sections in the documentation is also evaluated based on the system define standard structure model. Based on the evaluation, a completeness of expected sections score may be determined. The above determined terminologies and readability score, the flow score, and the completeness of expected sections scores are saved in the database).
Harutyunyan and Balasubramanian are analogous art because they are from the same field of endeavor, analysis of program execution.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Harutyunyan and Balasubramanian before him or her, to modify the system of Harutyunyan to include the teachings of Balasubramanian, a system for scoring software documentation. Doing so would enhance the system of Harutyunyan, which is focused on resolving performance issues using machine learning, by providing the ability to score software documentation in terms of its relevance to software usage, as suggested by Balasubramanian (column 8, lines 49-61; column 14, lines 43-48).
In regards to claim 2, Harutyunyan teaches:
the at least one documentation file includes a webpage, a portable document format file, or a word processor file (column 8, lines 25-30, see the example log write instruction 602 also includes text strings and natural-language words and phrases that identify the level of importance of the log message 610 and type of event that triggered the log write instruction, such as “Repair session” argument 612).
In regards to claim 4, Harutyunyan teaches:
the one or more processors are configured to: (Abstract, see the automated methods use machine learning to obtain rules defining relationships between probabilities of event types in log messages and performance problems identified by a key performance indicator (“KPI”) of the object. When a KPI violates a corresponding threshold, the rules are used to evaluate run time log messages that describe the probable root cause of the performance problem).
generate synthetic monitoring setups using a generative adversarial network, wherein the machine learning model is trained using the synthetic monitoring setups (column 29, claim 3, see using hyperparameter tuning with machine learning to train a random forest model that defines relationships between the event types and the class labels… using rule learning to form the rules).
In regards to claim 5, Harutyunyan teaches:
the one or more processors are configured to: (Abstract, see the automated methods use machine learning to obtain rules defining relationships between probabilities of event types in log messages and performance problems identified by a key performance indicator (“KPI”) of the object. When a KPI violates a corresponding threshold, the rules are used to evaluate run time log messages that describe the probable root cause of the performance problem).
transmit an indication of the suggested change using a communication channel selected based on a user preference (column 24, lines 44-52, see in response to a run-time KPI value violating a corresponding KPI threshold, the analytics engine 312 sends an alert notification to the controller 310 that a KPI threshold violation has occurred and the controller 310 directs the user interface 302 to display an alert in GUI of a system administrators console or monitor. The analytics engine 312 determines log messages that describe the probable root cause of performance problem of a data center object).
In regards to claim 6, Harutyunyan teaches:
the one or more processors are configured to: (Abstract, see the automated methods use machine learning to obtain rules defining relationships between probabilities of event types in log messages and performance problems identified by a key performance indicator (“KPI”) of the object. When a KPI violates a corresponding threshold, the rules are used to evaluate run time log messages that describe the probable root cause of the performance problem).
generate a prediction, associated with the software platform, based on the set of property indications and the at least one log file; the UI further includes the prediction (column 15, lines 49-57, see cross-validation tests the ability of a trained random forest model to correctly predict the classification of a probability distribution that was not used to train the random forest model, in order to identify problems and to obtain insight on how well the random forest model performs for a particular combination of parameters (K.sub.j, D.sub.j, E.sub.j). Each round of cross-validation includes partitioning the data frame 1206 into a training and a validation dataset of probability distributions), (Abstract, see the automated methods use machine learning to obtain rules defining relationships between probabilities of event types in log messages and performance problems identified by a key performance indicator (“KPI”) of the object), (column 28, lines 34-40, see FIG. 30 is a flow diagram illustrating an example implementation of the “determine run-time event-type probabilities of log messages recorded in a run-time interval” procedure performed in block 2804 of FIG. 28. In block 3001, log messages of a log file with time stamps in a run-time interval that ends at a time stamp of the KPI value that violates the KPI threshold are identified) and (Fig. 26A, Event-type probabilities, Remedial measures 2624) and (column 2, lines 51-54, see in block 2808, remedial measures to resolve the performance problem associated with the KPI threshold violation detected in block 2803 are executed).
In regards to claim 7, Harutyunyan teaches:
the suggested change is further based on the prediction (column 15, lines 49-57, see cross-validation tests the ability of a trained random forest model to correctly predict the classification of a probability distribution that was not used to train the random forest model, in order to identify problems and to obtain insight on how well the random forest model performs for a particular combination of parameters (K.sub.j, D.sub.j, E.sub.j). Each round of cross-validation includes partitioning the data frame 1206 into a training and a validation dataset of probability distributions), (Abstract, see the automated methods use machine learning to obtain rules defining relationships between probabilities of event types in log messages and performance problems identified by a key performance indicator (“KPI”) of the object), (column 28, lines 34-40, see FIG. 30 is a flow diagram illustrating an example implementation of the “determine run-time event-type probabilities of log messages recorded in a run-time interval” procedure performed in block 2804 of FIG. 28. In block 3001, log messages of a log file with time stamps in a run-time interval that ends at a time stamp of the KPI value that violates the KPI threshold are identified) and (Fig. 26A, Event-type probabilities, Remedial measures 2624) and (column 2, lines 51-54, see in block 2808, remedial measures to resolve the performance problem associated with the KPI threshold violation detected in block 2803 are executed).
In regards to claim 8, Harutyunyan teaches:
A method of analyzing software platform health, comprising: (Abstract, see the automated methods use machine learning to obtain rules defining relationships between probabilities of event types in log messages and performance problems identified by a key performance indicator (“KPI”) of the object. When a KPI violates a corresponding threshold, the rules are used to evaluate run time log messages that describe the probable root cause of the performance problem).
receiving, at a health system, a set of property indications associated with a software platform (column 7, line 66 – column 8, line 4, see in practice, a log write instruction may also include the name of the source of the log message (e.g., name of the application program, operating system and version, server computer, and network device) and may include the name of the log file to which the log message is recorded).
receiving, at the health system, a set of notifications associated with failed builds, manual changes, software incidents, or a combination thereof (column 24, lines 44-52, see in response to a run-time KPI value violating a corresponding KPI threshold, the analytics engine 312 sends an alert notification to the controller 310 that a KPI threshold violation has occurred and the controller 310 directs the user interface 302 to display an alert in GUI of a system administrators console or monitor).
applying, by the health system, a set of rules to the set of notifications to generate a suggested change to the software platform (column 24, lines 44-52, see in response to a run-time KPI value violating a corresponding KPI threshold, the analytics engine 312 sends an alert notification to the controller 310 that a KPI threshold violation has occurred and the controller 310 directs the user interface 302 to display an alert in GUI of a system administrators console or monitor. The analytics engine 312 determines log messages that describe the probable root cause of performance problem of a data center object).
outputting, from the health system, instructions for a user interface (UI) that includes the health indicator and the suggested change (column 4, lines 22-24, see FIGS. 26A-26B show an example graphical user interface that displays a list of objects, KPI threshold violations, log messages, and remedial measures) and (Fig. 26B, List of Objects in Data Center, Object 01-07, Alert, Remedial measures 2624, Object 03- Key Performance Indicators, Threshold violation) and (column 24, lines 44-52, see in response to a run-time KPI value violating a corresponding KPI threshold, the analytics engine 312 sends an alert notification to the controller 310 that a KPI threshold violation has occurred and the controller 310 directs the user interface 302 to display an alert in GUI of a system administrators console or monitor. The analytics engine 312 determines log messages that describe the probable root cause of performance problem of a data center object).
Harutyunyan doesn’t explicitly teach:
providing, by the health system, the set of property indications to a clustering model to receive a health indicator that reflects readability or understandability associated with the software platform.
However, Balasubramanian teaches such use: (column 8, lines 49-61, see the documentation assessor 107 also determine the consistency of usage of terminologies and readability using natural language processing techniques. Based on the determination the document assessor 107 may allot terminologies and readability score and a flow score for the documentation. Further, the completeness of expected sections in the documentation is also evaluated based on the system define standard structure model. Based on the evaluation, a completeness of expected sections score may be determined. The above determined terminologies and readability score, the flow score, and the completeness of expected sections scores are saved in the database) and (column 9, lines 3-16, see the project metrics doc builder 109 creates machine learning models trained with pre-selected project's documentation and its source code. The training data is from the projects which are validated for their good documentation quality. In an example, the project metrics builder 109 extracts details from various documentation having good quality scores. The details may be about the structure of the documentation, language used in the documentation, clarity of the documentation, ease of the understanding of the documentation etc. Such details may comprises the training data for generating machine learning models. The training data is prepared with the parsed documentation to the system defined standard structure model for each technology stack and the open source software projects source code metrics).
Harutyunyan and Balasubramanian are analogous art because they are from the same field of endeavor, analysis of program execution.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Harutyunyan and Balasubramanian before him or her, to modify the system of Harutyunyan to include the teachings of Balasubramanian, a system for scoring software documentation. Doing so would enhance the system of Harutyunyan, which is focused on resolving performance issues using machine learning, by providing the ability to score software documentation in terms of its relevance to software usage, as suggested by Balasubramanian (column 8, lines 49-61; column 14, lines 43-48).
In regards to claim 9, Harutyunyan teaches:
the health indicator includes a plurality of ratings corresponding to a plurality of categories (column 1, line 60 – column 2, line 6, see management tools collect metrics, such as CPU usage, memory usage, disk space available, and network throughput of applications. Data center tenants and system administrators rely on key performance indicators (“KPIs”) to monitor the overall health and performance of applications executing in a data center. A KPI can be constructed from one or more metrics. KPIs that do not depend on metrics can also be used to monitor performance of applications. For example, a KPI for an online shopping application could be the number of shopping carts successfully closed per unit time. A KPI for a website may be response times to user requests. Other KPIs can be used to monitor performance of various services provided by different microservices of a distributed application).
In regards to claim 10, Harutyunyan teaches:
the suggested change indicates a remediation for an incident associated with a notification in the set of notifications (column 24, lines 44-52, see in response to a run-time KPI value violating a corresponding KPI threshold, the analytics engine 312 sends an alert notification to the controller 310 that a KPI threshold violation has occurred and the controller 310 directs the user interface 302 to display an alert in GUI of a system administrators console or monitor. The analytics engine 312 determines log messages that describe the probable root cause of performance problem of a data center object).
In regards to claim 11, Harutyunyan teaches:
the suggested change indicates an additional monitoring software to deploy (column 5, lines 53-58, see the operations manager 132 is an automated computer implemented tool that aids IT administrators with monitoring, troubleshooting, and managing the health and capacity of the data center virtual environment).
In regards to claim 12, Harutyunyan teaches:
receiving, at the health system, a confirmation of the suggested change; and applying, by the health system, the suggested change in response to the confirmation (column 24, lines 44-52, see in response to a run-time KPI value violating a corresponding KPI threshold, the analytics engine 312 sends an alert notification to the controller 310 that a KPI threshold violation has occurred and the controller 310 directs the user interface 302 to display an alert in GUI of a system administrators console or monitor. The analytics engine 312 determines log messages that describe the probable root cause of performance problem of a data center object) and (column 9, lines 62-67, see the analytics engine 312 detects performance problems by comparing a values of a KPI to a corresponding KPI threshold, denoted by ThKPI. The corresponding KPI threshold ThKPI can be a dynamic threshold that is automatically adjusted by the analytics engine 312 to changes in the application behavior over time).
In regards to claim 13, Harutyunyan teaches:
outputting the instructions for the UI comprises: transmitting the instructions for the UI to an administrator device (column 4, lines 22-24, see FIGS. 26A-26B show an example graphical user interface that displays a list of objects, KPI threshold violations, log messages, and remedial measures) and (Fig. 26B, List of Objects in Data Center, Object 01-07, Alert, Remedial measures 2624, Object 03- Key Performance Indicators, Threshold violation) and (column 24, lines 44-52, see in response to a run-time KPI value violating a corresponding KPI threshold, the analytics engine 312 sends an alert notification to the controller 310 that a KPI threshold violation has occurred and the controller 310 directs the user interface 302 to display an alert in GUI of a system administrators console or monitor. The analytics engine 312 determines log messages that describe the probable root cause of performance problem of a data center object).
9. Claims 14-16 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Harutyunyan in view of Karlsson et al., U.S. Patent Application Publication No. 2023/0092819 (hereinafter Karlsson), and further in view of Balasubramanian.
In regards to claim 14, Harutyunyan teaches:
A non-transitory computer-readable medium storing a set of instructions for analyzing software platform health, the set of instructions comprising: one or more instructions that, when executed by one or more processors of a device, cause the device to: (Abstract, see the automated methods use machine learning to obtain rules defining relationships between probabilities of event types in log messages and performance problems identified by a key performance indicator (“KPI”) of the object. When a KPI violates a corresponding threshold, the rules are used to evaluate run time log messages that describe the probable root cause of the performance problem).
receive a set of notifications associated with failed builds, manual changes, software incidents, or a combination thereof, for the software platform (column 24, lines 44-52, see in response to a run-time KPI value violating a corresponding KPI threshold, the analytics engine 312 sends an alert notification to the controller 310 that a KPI threshold violation has occurred and the controller 310 directs the user interface 302 to display an alert in GUI of a system administrators console or monitor).
generate, using a machine learning model, a suggested change to the software platform based on the set of notifications (column 24, lines 44-52, see in response to a run-time KPI value violating a corresponding KPI threshold, the analytics engine 312 sends an alert notification to the controller 310 that a KPI threshold violation has occurred and the controller 310 directs the user interface 302 to display an alert in GUI of a system administrators console or monitor. The analytics engine 312 determines log messages that describe the probable root cause of performance problem of a data center object).
output instructions for a user interface (UI) that includes the health indicator and the suggested change (column 4, lines 22-24, see FIGS. 26A-26B show an example graphical user interface that displays a list of objects, KPI threshold violations, log messages, and remedial measures) and (Fig. 26B, List of Objects in Data Center, Object 01-07, Alert, Remedial measures 2624, Object 03- Key Performance Indicators, Threshold violation) and (column 24, lines 44-52, see in response to a run-time KPI value violating a corresponding KPI threshold, the analytics engine 312 sends an alert notification to the controller 310 that a KPI threshold violation has occurred and the controller 310 directs the user interface 302 to display an alert in GUI of a system administrators console or monitor. The analytics engine 312 determines log messages that describe the probable root cause of performance problem of a data center object).
determine a possible solution to an incident associated with a notification in the set of notifications (column 24, lines 44-52, see in response to a run-time KPI value violating a corresponding KPI threshold, the analytics engine 312 sends an alert notification to the controller 310 that a KPI threshold violation has occurred and the controller 310 directs the user interface 302 to display an alert in GUI of a system administrators console or monitor. The analytics engine 312 determines log messages that describe the probable root cause of performance problem of a data center object).
transmit an indication of the possible solution using a communication channel selected based on a user preference (column 24, lines 44-52, see in response to a run-time KPI value violating a corresponding KPI threshold, the analytics engine 312 sends an alert notification to the controller 310 that a KPI threshold violation has occurred and the controller 310 directs the user interface 302 to display an alert in GUI of a system administrators console or monitor. The analytics engine 312 determines log messages that describe the probable root cause of performance problem of a data center object).
receive feedback associated with the possible solution; update the machine learning model based on the feedback (column 29, claim 3, see using hyperparameter tuning with machine learning to train a random forest model that defines relationships between the event types and the class labels… using rule learning to form the rules).
output instructions to update the UI with the additional suggested change (column 4, lines 22-24, see FIGS. 26A-26B show an example graphical user interface that displays a list of objects, KPI threshold violations, log messages, and remedial measures) and (Fig. 26B, List of Objects in Data Center, Object 01-07, Alert, Remedial measures 2624, Object 03- Key Performance Indicators, Threshold violation) and (column 24, lines 44-52, see in response to a run-time KPI value violating a corresponding KPI threshold, the analytics engine 312 sends an alert notification to the controller 310 that a KPI threshold violation has occurred and the controller 310 directs the user interface 302 to display an alert in GUI of a system administrators console or monitor. The analytics engine 312 determines log messages that describe the probable root cause of performance problem of a data center object).
Harutyunyan doesn’t explicitly teach:
generate, using the updated machine learning model, an additional suggested change to the software platform based on the set of notifications.
However, Karlsson teaches such use: (Abstract, see a method for training and using a machine-learning based model to reduce and troubleshoot incidents in a system may include receiving first metadata regarding a previous modification, extracting a first feature from the received first metadata, receiving second metadata regarding a previous incident, extracting a second feature from the received second metadata, training the machine-learning based model to learn an association between the previous modification and the previous incident, based on the extracted first feature and the extracted second feature, and using the machine-learning based model to determine a modification to a system causing a change in a performance of the system), (p. 2, [0011], see the disclosed systems and methods may be used in test-automation. The disclosed systems and methods may be used with incident management to alert incident handlers about potentially code-related or change-related incidents and provide valuable information to improve speed of resolution) and (p. 6, Claim 3, see providing an alert identifying the determined modification causing the change in the performance of the system).
Harutyunyan and Karlsson are analogous art because they are from the same field of endeavor, analysis of program execution.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Harutyunyan and Karlsson before him or her, to modify the system of Harutyunyan to include the teachings of Karlsson, a system for determining the cause of performance issues. Doing so would enhance the system of Harutyunyan, which is focused on resolving performance issues using machine learning, by providing the ability to quickly determine the cause of a change in the performance of a system, as suggested by Karlsson (p. 2, [0011]; p. 6, [0061]).
Harutyunyan and Karlsson, and in particular Harutyunyan, do not explicitly teach:
receive a health indicator that reflects readability or understandability associated with a software platform.
However, Balasubramanian teaches such use: (column 8, lines 49-61, see the documentation assessor 107 also determine the consistency of usage of terminologies and readability using natural language processing techniques. Based on the determination the document assessor 107 may allot terminologies and readability score and a flow score for the documentation. Further, the completeness of expected sections in the documentation is also evaluated based on the system define standard structure model. Based on the evaluation, a completeness of expected sections score may be determined. The above determined terminologies and readability score, the flow score, and the completeness of expected sections scores are saved in the database) and (column 2, lines 13-20, see in an embodiment, validating the document sections with project or stack metrics further comprises fetching the system defined standard structure model for the open source software project's identified technology stack, comparing the parsed document sections to a standard list and identifying mapping compliance, and scoring the open source project documentation for compliance to expected sections as per the system defined standard structure model).
Harutyunyan, Karlsson, and Balasubramanian are analogous art because they are from the same field of endeavor, analysis of program execution.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Harutyunyan, Karlsson, and Balasubramanian before him or her, to modify the system of Harutyunyan and Karlsson, in particular Harutyunyan, to include the teachings of Balasubramanian, as a system for scoring software documentation; doing so would enhance the system of Harutyunyan, which is focused on resolving performance issues using machine learning, by providing Harutyunyan with the ability to score software documentation in terms of its relevance to the software's usage, as suggested by Balasubramanian (column 8, lines 49-61, column 14, lines 43-48).
In regards to claim 15, Harutyunyan teaches:
the one or more instructions, to update the machine learning model, cause the device to: re-train the machine learning model using the feedback (column 29, claim 3, see using hyperparameter tuning with machine learning to train a random forest model that defines relationships between the event types and the class labels… using rule learning to form the rules).
In regards to claim 16, Harutyunyan teaches:
the one or more instructions, to update the machine learning model, cause the device to: refine the machine learning model using the feedback (column 29, claim 3, see using hyperparameter tuning with machine learning to train a random forest model that defines relationships between the event types and the class labels… using rule learning to form the rules).
In regards to claim 19, Harutyunyan teaches:
the one or more instructions, when executed by the one or more processors, cause the device to: (Abstract, see the automated methods use machine learning to obtain rules defining relationships between probabilities of event types in log messages and performance problems identified by a key performance indicator (“KPI”) of the object. When a KPI violates a corresponding threshold, the rules are used to evaluate run time log messages that describe the probable root cause of the performance problem).
generate synthetic monitoring setups using a generative adversarial network (column 29, claim 3, see using hyperparameter tuning with machine learning to train a random forest model that defines relationships between the event types and the class labels… using rule learning to form the rules).
the suggested change is further based on the synthetic monitoring setups (column 29, claim 3, see using hyperparameter tuning with machine learning to train a random forest model that defines relationships between the event types and the class labels… using rule learning to form the rules).
In regards to claim 20, Harutyunyan teaches:
the one or more instructions, to output the instructions for the UI, cause the device to: transmit the instructions for the UI to an administrator device (column 4, lines 22-24, see FIGS. 26A-26B show an example graphical user interface that displays a list of objects, KPI threshold violations, log messages, and remedial measures) and (Fig. 26B, List of Objects in Data Center, Object 01-07, Alert, Remedial measures 2624, Object 03- Key Performance Indicators, Threshold violation) and (column 24, lines 44-52, see in response to a run-time KPI value violating a corresponding KPI threshold, the analytics engine 312 sends an alert notification to the controller 310 that a KPI threshold violation has occurred and the controller 310 directs the user interface 302 to display an alert in a GUI of a system administrator's console or monitor. The analytics engine 312 determines log messages that describe the probable root cause of the performance problem of a data center object).
10. Claims 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Harutyunyan in view of Karlsson, further in view of Balasubramanian, and further in view of Azad et al., US Patent No. 11,645,188 (hereinafter Azad).
In regards to claim 14, the rejections above are incorporated herein.
In regards to claim 17, Harutyunyan teaches:
the one or more instructions, to determine the possible solution, cause the device to: (column 24, lines 44-52, see in response to a run-time KPI value violating a corresponding KPI threshold, the analytics engine 312 sends an alert notification to the controller 310 that a KPI threshold violation has occurred and the controller 310 directs the user interface 302 to display an alert in GUI of a system administrators console or monitor. The analytics engine 312 determines log messages that describe the probable root cause of performance problem of a data center object).
determine at least one rule using the natural language processing; and apply the at least one rule to the notification, in the set of notifications, to determine the possible solution (column 24, lines 44-52, see in response to a run-time KPI value violating a corresponding KPI threshold, the analytics engine 312 sends an alert notification to the controller 310 that a KPI threshold violation has occurred and the controller 310 directs the user interface 302 to display an alert in GUI of a system administrators console or monitor. The analytics engine 312 determines log messages that describe the probable root cause of performance problem of a data center object).
Harutyunyan, Karlsson, and Balasubramanian, in particular Harutyunyan, do not explicitly teach:
apply natural language processing to a set of pull requests associated with the software platform.
However, Azad teaches such use: (column 10, lines 48-58, see pull request risk prediction program 106 uses one or more natural language processing (NLP) techniques to extract the features. Pull request risk prediction program 106 combines the file risk assessment with the extracted pull request features (step 228). In an embodiment, pull request risk prediction program 106 combines the file risk assessment, as discussed with respect to step 222, and the extracted pull request features, as discussed with respect to step 226, to generate a labelled training set and train pull request risk prediction model 110).
Harutyunyan, Karlsson, Balasubramanian and Azad are analogous art because they are from the same field of endeavor, analysis of program execution.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Harutyunyan, Karlsson, Balasubramanian, and Azad before him or her, to modify the system of Harutyunyan, Karlsson, and Balasubramanian, in particular Harutyunyan, to include the teachings of Azad, as a system for pull request risk prediction; doing so would enhance the system of Harutyunyan, which is focused on resolving performance issues using machine learning, by providing Harutyunyan with the ability to quickly and proactively identify whether incorporating file changes into production may introduce a risk of a bug, as suggested by Azad (column 10, lines 48-58, column 15, lines 31-42).
In regards to claim 18, Harutyunyan teaches:
the one or more instructions, to determine the possible solution, cause the device to: (column 24, lines 44-52, see in response to a run-time KPI value violating a corresponding KPI threshold, the analytics engine 312 sends an alert notification to the controller 310 that a KPI threshold violation has occurred and the controller 310 directs the user interface 302 to display an alert in a GUI of a system administrator's console or monitor. The analytics engine 312 determines log messages that describe the probable root cause of the performance problem of a data center object).
determine the possible solution based on output from the additional machine learning model (column 24, lines 44-52, see in response to a run-time KPI value violating a corresponding KPI threshold, the analytics engine 312 sends an alert notification to the controller 310 that a KPI threshold violation has occurred and the controller 310 directs the user interface 302 to display an alert in GUI of a system administrators console or monitor. The analytics engine 312 determines log messages that describe the probable root cause of performance problem of a data center object).
Harutyunyan, Karlsson, and Balasubramanian, in particular Harutyunyan, do not explicitly teach:
apply an additional machine learning model to a set of pull requests associated with the software platform.
However, Azad teaches such use: (column 10, lines 48-58, see pull request risk prediction program 106 uses one or more natural language processing (NLP) techniques to extract the features. Pull request risk prediction program 106 combines the file risk assessment with the extracted pull request features (step 228). In an embodiment, pull request risk prediction program 106 combines the file risk assessment, as discussed with respect to step 222, and the extracted pull request features, as discussed with respect to step 226, to generate a labelled training set and train pull request risk prediction model 110).
Harutyunyan, Karlsson, Balasubramanian and Azad are analogous art because they are from the same field of endeavor, analysis of program execution.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Harutyunyan, Karlsson, Balasubramanian, and Azad before him or her, to modify the system of Harutyunyan, Karlsson, and Balasubramanian, in particular Harutyunyan, to include the teachings of Azad, as a system for pull request risk prediction; doing so would enhance the system of Harutyunyan, which is focused on resolving performance issues using machine learning, by providing Harutyunyan with the ability to quickly and proactively identify whether incorporating file changes into production may introduce a risk of a bug, as suggested by Azad (column 10, lines 48-58, column 15, lines 31-42).
Allowable Subject Matter
11. Claim 3 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: As per claim 3, the prior art of record does not teach and/or fairly suggest that “a clustering model is trained using property indications associated with software platforms labeled as well-established”. The art of record does not expressly disclose such features.
Conclusion
12. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US Patents
Power, Patent No. 11,132,371, teaches a computer-implemented method comprising the steps of: generating an index of electronic objects, comprising the steps of: performing, upon each electronic object, natural language processing, using a natural language processor, and readability analysis, using a readability analyser, to generate natural language processing data and a readability score for the respective object; and storing in the index, for each electronic object, electronic object data indicative of the electronic object and associated meta-data indicative of the natural language processing data and the readability score.
Byron, Patent No. 10,956,471, teaches electronic natural language processing in a natural language processing (NLP) system, such as a Question-Answering (QA) system. A computer receives electronic text input, in question form, and determines a readability level indicator in the question. The readability level indicator includes at least one of a grammatical error, a slang term, and a misspelling type. The computer determines a readability level for the electronic text input based on the readability level indicator, and retrieves candidate answers based on the readability level.
13. Examiner, in light of the above submission, maintains the previous rejections, and any new ground(s) of rejection is necessitated by Applicant's amendment. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
14. A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Correspondence Information
15. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Evral Bodden whose telephone number is 571-272-3455. The examiner can normally be reached on Monday to Friday from 9am to 5pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chat Do, can be reached at telephone number 571-272-3721. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automatedinterview-request-air-form.
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EVRAL E BODDEN/Primary Examiner, Art Unit 2193