DETAILED ACTION
This action is in response to the original filing on 11/22/2023. Claims 1-20 are pending and have been considered below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 1, 4, 12, 15, 17, and 20 are objected to because of the following informalities:
Claims 1, 12, and 17 recite ‘the determination in which the operation of the data processing system is to be updated’; however, they should recite - - a determination in which the operation of the data processing system is to be updated - -.
Claims 4, 15, and 20 recite ‘cost of the operation of the data processing system’; however, they should recite - - a cost of the operation of the data processing system - -.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 1, claim 1 recites “obtaining, via a graphical user interface (GUI), a request for a portion of system data for the data processing system from a user, the portion of the system data being associated with an aspect of multiple aspects of operation of the data processing system”. It is unclear whether the limitation “for the data processing system” is intended to modify the request or the portion. It is further unclear whether “from a user” is intended to modify the request, the portion, or the data processing system. It is unclear how “a portion of system data for the data processing system from a user” relates to “the portion of the system data being associated with an aspect”. It is unclear whether “of operation” is intended to modify the aspect or the multiple aspects. It is unclear whether “of the data processing system” is intended to modify the aspect, the multiple aspects, or the operation. For the purposes of examination, this limitation is interpreted as:
obtaining, via a graphical user interface (GUI), a request for a portion of system data, wherein the request is for the data processing system and the request is from a user, wherein the portion of the system data is associated with an aspect of multiple data processing system operation aspects
Claim 1 further recites “receiving, via the updated GUI, user feedback from the user based on the portion of the system data”. It is unclear whether “based on the portion of the system data” is intended to modify the receiving, the user feedback, or the user. For the purposes of examination, this limitation is interpreted as:
receiving, via the updated GUI, user feedback from the user, wherein the receiving is based on the portion of the system data
Claim 1 further recites “a determination, based at least on the user feedback,” and “the determination in which the operation of the data processing system is to be updated”. It is unclear whether these are intended to be the same or different determinations. For the purposes of examination, this limitation is interpreted as: “a first determination, based at least on the user feedback,” and “a second determination in which the operation of the data processing system is to be updated”
Regarding claims 12 and 17, claims 12 and 17 contain substantially similar limitations to those found in claim 1. Consequently, claims 12 and 17 are rejected for the same reasons.
Regarding claims 2-11, 13-16, and 18-20, claims 2-11, 13-16, and 18-20 are also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for depending on an indefinite parent claim. Limitations in claims 2-11, 13-16, and 18-20 regarding the system data, portions, aspects, and determinations are likewise rejected and are interpreted consistently with the interpretations set forth above.
Regarding claim 2, claim 2 recites “performing a lookup process using a system data lookup table and an indicator of the aspect from the request as a key for the system data lookup table to obtain the portion of the system data”. It is unclear what each of the limitations “using a system data lookup table”, “an indicator of the aspect”, “from the request”, “as a key”, “for the system data lookup table”, and “obtain the portion of the system data” is intended to modify. The claims do not previously recite “the aspect from the request”. For the purposes of examination, this limitation is interpreted as:
performing a lookup process, wherein, to obtain the portion of the system data, the lookup process uses a system data lookup table and uses, as a key for the system data lookup table, an indicator of an aspect from the request
Regarding claims 13 and 18, claims 13 and 18 contain substantially similar limitations to those found in claim 2. Consequently, claims 13 and 18 are rejected for the same reasons.
Regarding claim 4, claim 4 recites “the first aspect of the operation of the data processing system”. It is unclear how this limitation is intended to relate to the previously recited “first aspect of the multiple aspects of the operation of the data processing system”. For the purposes of examination, this limitation is interpreted as:
an additional first aspect of the operation of the data processing system
Regarding claims 15 and 20, claims 15 and 20 contain substantially similar limitations to those found in claim 4. Consequently, claims 15 and 20 are rejected for the same reasons.
Regarding claim 6, claim 6 recites “the first avatar associated with the first aspect”. It is unclear how this limitation is intended to relate to the previously recited “a first avatar representing a first aspect”. For the purposes of examination, this limitation is interpreted as:
an additional first avatar associated with the first aspect
Regarding claim 8, claim 8 recites “simulating, using a digital twin of the data processing system, the operation of the data processing system based on the user feedback to obtain updated simulated operation of the data processing system”. It is unclear which previous limitations the phrases “based on the user feedback” and “to obtain updated simulated operation of the data processing system” are intended to modify. For the purposes of examination, this limitation is interpreted as:
simulating, using a digital twin of the data processing system, the operation of the data processing system, wherein the simulating is based on the user feedback, and wherein an updated simulated operation of the data processing system is obtained
Regarding claim 11, claim 11 recites “presenting, via the updated GUI, a preference selection tool to allow the user to establish relative preferences between at least two of the aspects of the multiple aspects”. It is unclear whether “to allow the user to establish relative preferences between at least two of the aspects of the multiple aspects” is intended to modify the presenting or the preference selection tool. For the purposes of examination, this limitation is interpreted as:
presenting, via the updated GUI, a preference selection tool, wherein the preference selection tool allows the user to establish relative preferences between at least two of the aspects of the multiple aspects
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-7 and 12-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Harutyunyan et al. (US 20240028955 A1, published 01/25/2024), hereinafter Harutyunyan.
Regarding claim 17, Harutyunyan teaches the claimed invention, comprising:
A data processing system, comprising: a processor; and a memory coupled to the processor to store instructions, which when executed by the processor, cause the processor to perform operations for managing a data processing system, the operations comprising (Harutyunyan Figs. 1-28; [0146], FIG. 23 shows an example architecture of a computer system that may be used to host the operations manager 132 and perform the automated processes for troubleshooting and resolving performance problems with objects executing in a data center. The computer system contains one or multiple central processing units (“CPUs”) 2302-2305, one or more electronic memories 2308 interconnected with the CPUs):
obtaining, via a graphical user interface (GUI), a request for a portion of system data for the data processing system from a user, the portion of the system data being associated with an aspect of multiple aspects of operation of the data processing system; providing, via an updated GUI, the portion of the system data to the user (Harutyunyan Figs. 1-28; [0044], Automated methods described below are performed by an operations manager 132 that is executed in one or more VMs or containers on the administration computer system 108. The operations manager 132 is an automated computer implemented tool that aids IT administrators with monitoring, troubleshooting, and managing the health and capacity of the data center virtual environment. The operations manager 132 provides management across physical, virtual, and cloud environments; [0142], The analytics engine 312 maintains in a data-storage device lists of ranked even types associated with CPU usage 2008, memory usage 2010, data stores 2012, network throughput 2014, and other lists of ranked even types represented by ellipsis 2016, such as traffic rate, traffic drop rate, and flow rate. For example, CPU usage performance problems has lists of ranked event types 2018-2021. Each listing contains a different combination of event types and associated ranks and is associated with a particular application performance problem; [0144], FIG. 22A shows an example GUI 2200 that displays a list of objects executing in a data center in a left-hand pane 2202. Each object may have numerous KPIs that are used to monitor different aspects of the performance of the object as described above. In this example, the object identified as “Object 03” has an associated alert 2204. A user may click on the highlighted area around Object 03, creating plots of the KPIs associated with Object 03 in a right-hand pane 2206);
receiving, via the updated GUI, user feedback from the user based on the portion of the system data; making a determination, based at least on the user feedback, regarding whether to update the operation of the data processing system (Harutyunyan Figs. 1-28; [0144], In this example, an alert 2216 is displayed in the plot 2210. Each KPI has an associated troubleshoot button. In this example, a user clicks on the troubleshoot button 2218 to start the troubleshooting process of the KPI. In response to receiving the troubleshoot command from the user interface 302, the analytics engine 312 executes the operations to obtain log messages that describe the probable root cause of the performance problem indicated by the KPI threshold violation. The GUI 2200 includes a pane 2220 that displays a plot of the ten largest importance scores of the event types of log messages produced in a time interval as described above with reference to Equations (24) . . . (29). Pane 2222 displays the most recent log messages of the ten largest importance scores. A user can scroll through the log messages and identify the log messages that describe the root cause of the problem. The user clicks on the “remedial measures” button 2224 to display remedial measures associated with the root cause of the problem; [0145], FIG. 22B shows a remedial measures pane 2226 displayed in the GUI 2200 in response to a user clicking on the “remedial measures” button 2224. Remedial measures include restarting the host 2228, restarting the VM 2230, and increasing memory allocation to the VM 2232, where UUID denotes the universal unique identity of the object. A user can automatically execute a selected remedial measure by clicking on the corresponding “Execute” button);
in an instance of the determination in which the operation of the data processing system is to be updated: modifying the operation of the data processing system, based at least in part on the user feedback, to obtain an updated data processing system; and providing computer-implemented services using the updated data processing system (Harutyunyan Figs. 1-28; [0145], FIG. 22B shows a remedial measures pane 2226 displayed in the GUI 2200 in response to a user clicking on the “remedial measures” button 2224. Remedial measures include restarting the host 2228, restarting the VM 2230, and increasing memory allocation to the VM 2232, where UUID denotes the universal unique identity of the object. A user can automatically execute a selected remedial measure by clicking on the corresponding “Execute” button. Other remedial measures that may be executed to correct the problem with the object include, but are not limited to, powering down hosts, replacing VMs disabled by physical hardware problems and failures, spinning up cloned VMs on additional hosts to ensure that the microservices provided by the VMs are accessible to increasing demand for services. When an alert is generated indicating inadequate virtual processor capacity, remedial measures that increase the virtual processor capacity of the virtual object may be executed, the virtual object may be deleted, or the virtual object may be migrated to a different server computer with more processor capacity; [0150], In block 2406, an alert identifying the violation of the KPI threshold and the log messages are displayed in a graphical user interface of an electronic display device as described above with reference to FIGS. 22A-22B. The log messages describe the probable root cause of the performance problem. In block 2407, remedial measures to resolve the performance problem associated with the KPI threshold violation detected in block 2403 are executed. 
The remedial measures include, but are not limited to, restarting a host that runs the object, restarting the object, increasing memory or CPU allocation to the object. Other remedial measures include deleting the object and migrating the object to a different host)
Regarding claims 1 and 12, claims 1 and 12 contain substantially similar limitations to those found in claim 17. Consequently, claims 1 and 12 are rejected for the same reasons.
Regarding claim 2, Harutyunyan teaches all the limitations of claim 1, further comprising:
wherein providing the portion of the system data to the user comprises: performing a lookup process using a system data lookup table and an indicator of the aspect from the request as a key for the system data lookup table to obtain the portion of the system data; updating, based on the portion of the system data, the GUI to obtain the updated GUI; and displaying, via the updated GUI, at least the portion of the system data to the user (Harutyunyan Figs. 1-28; [0047], FIG. 4 shows an example of logging log messages in log files; The operations manager 132 records the log messages in log files 420-424 of the log database 315; [0052], As log messages are received from various event sources, the log messages are stored in corresponding log files of the log database 314 in the order in which the log messages are received; [0053], The analytics engine 312 constructs certain key performance indicators (“KPIs”) of application performance and stores the KPIs in the KPI database 316. An application can have numerous associated KPIs. Each KPI of an application measures a different feature of application performance and is used by the analytics engine 312 to detect a particular performance problem; [0133], The analytics engine 312 retrieves log messages generated in a run-time interval denoted by [t.sub.b, t.sub.c] from the log database 315, where t.sub.b denotes the beginning of the run-time interval, and t.sub.c (i.e., current time) denotes the end of the run-time interval, For example, the run-time interval [t.sub.b, t.sub.c] may have a duration of 30 seconds, 1 minute, 2 minutes, or 10 minutes. The event type engine 306 determines event types of the log messages in the run-time interval. The analytics engine 312 computes run-time event-type probability distributions for KPI values of the KPI; [0140], FIGS. 19A-19B show examples of highest ranked event types associated with different types of performance problems; [0142], FIG. 20 shows an example of highest ranked run-time event types 2002, such as one of the highest ranked run-time event types shown in FIGS. 19A-19B. The highest ranked event types 2002 include a column 2004 of event types and a column 2006 of associated ranks, such as the columns of event types and associated ranks described above with reference to FIGS. 19A-19B; [0144], FIG. 22A shows an example GUI 2200 that displays a list of objects executing in a data center in a left-hand pane 2202. Each object may have numerous KPIs that are used to monitor different aspects of the performance of the object as described above. In this example, the object identified as “Object 03” has an associated alert 2204. A user may click on the highlighted area around Object 03, creating plots of the KPIs associated with Object 03 in a right-hand pane 2206)
Regarding claims 13 and 18, claims 13 and 18 contain substantially similar limitations to those found in claim 2. Consequently, claims 13 and 18 are rejected for the same reasons.
Regarding claim 3, Harutyunyan teaches all the limitations of claim 2, further comprising:
wherein the GUI comprises a first graphical representation comprising: a first avatar representing a first aspect of the multiple aspects of the operation of the data processing system; a second avatar representing a second aspect of the multiple aspects of the operation of the data processing system; a first indicator of a first status of the first aspect; and a second indicator of a second status of the second aspect (Harutyunyan Figs. 1-28; [0046], a user interface 302 that provides graphical user interfaces for data center management, system administrators, and application owners to receive alerts, view metrics, log messages, and KPIs, and execute user-selected remedial measures to correct performance problems; [0132], KPI values of Applications 1, 2, and 4 are below a threshold 1812, which indicates the applications are performing normally as represented by normal icons, such as normal icon 1814. On the other hand, KPI values of the Application 3 exceed the threshold 1812, such as KPI value 1814, triggering a warning alert 1816. Threshold 1816 indicates the application exhibits critical behavior that triggers a critical alert icon that is not shown. A user may select “run troubleshooting” by clicking on the button 1818; [0144], In this example, an alert 2216 is displayed in the plot 2210. Each KPI has an associated troubleshoot button. In this example, a user clicks on the troubleshoot button 2218 to start the troubleshooting process of the KPI. In response to receiving the troubleshoot command from the user interface 302, the analytics engine 312 executes the operations to obtain log messages that describe the probable root cause of the performance problem indicated by the KPI threshold violation. The GUI 2200 includes a pane 2220 that displays a plot of the ten largest importance scores of the event types of log messages produced in a time interval as described above with reference to Equations (24) . . . (29). 
Pane 2222 displays the most recent log messages of the ten largest importance scores. A user can scroll through the log messages and identify the log messages that describe the root cause of the problem. The user clicks on the “remedial measures” button 2224 to display remedial measures associated with the root cause of the problem; [0145], FIG. 22B shows a remedial measures pane 2226 displayed in the GUI 2200 in response to a user clicking on the “remedial measures” button 2224. Remedial measures include restarting the host 2228, restarting the VM 2230, and increasing memory allocation to the VM 2232, where UUID denotes the universal unique identity of the object. A user can automatically execute a selected remedial measure by clicking on the corresponding “Execute” button; examiner note: per the instant specification an avatar includes any symbol or shape (see [0065], [0075], [0079]))
Regarding claims 14 and 19, claims 14 and 19 contain substantially similar limitations to those found in claim 3. Consequently, claims 14 and 19 are rejected for the same reasons.
Regarding claim 4, Harutyunyan teaches all the limitations of claim 3, further comprising:
wherein the first aspect of the operation of the data processing system is one selected from a list consisting of: cost of the operation of the data processing system; a recoverability of the data processing system; a performance capability of the data processing system; and a capacity of the data processing system (Harutyunyan Figs. 1-28; [0143], FIG. 21 shows a table of example rules stored in a data storage device and is accessed by the analytics engine 312 to report performance problems and recommend remedial measures for correcting the performance problem; [0145], FIG. 22B shows a remedial measures pane 2226 displayed in the GUI 2200 in response to a user clicking on the “remedial measures” button 2224. Remedial measures include restarting the host 2228, restarting the VM 2230, and increasing memory allocation to the VM 2232, where UUID denotes the universal unique identity of the object. A user can automatically execute a selected remedial measure by clicking on the corresponding “Execute” button. Other remedial measures that may be executed to correct the problem with the object include, but are not limited to, powering down hosts, replacing VMs disabled by physical hardware problems and failures, spinning up cloned VMs on additional hosts to ensure that the microservices provided by the VMs are accessible to increasing demand for services. When an alert is generated indicating inadequate virtual processor capacity, remedial measures that increase the virtual processor capacity of the virtual object may be executed, the virtual object may be deleted, or the virtual object may be migrated to a different server computer with more processor capacity)
Regarding claims 15 and 20, claims 15 and 20 contain substantially similar limitations to those found in claim 4. Consequently, claims 15 and 20 are rejected for the same reasons.
Regarding claim 5, Harutyunyan teaches all the limitations of claim 3, further comprising:
wherein each of the multiple aspects is uniquely associated with a corresponding portion of the system data (Harutyunyan Figs. 1-28; [0046], a user interface 302 that provides graphical user interfaces for data center management, system administrators, and application owners to receive alerts, view metrics, log messages, and KPIs, and execute user-selected remedial measures to correct performance problems; [0132], KPI values of Applications 1, 2, and 4 are below a threshold 1812, which indicates the applications are performing normally as represented by normal icons, such as normal icon 1814. On the other hand, KPI values of the Application 3 exceed the threshold 1812, such as KPI value 1814, triggering a warning alert 1816. Threshold 1816 indicates the application exhibits critical behavior that triggers a critical alert icon that is not shown. A user may select “run troubleshooting” by clicking on the button 1818; [0142], The analytics engine 312 maintains in a data-storage device lists of ranked even types associated with CPU usage 2008, memory usage 2010, data stores 2012, network throughput 2014, and other lists of ranked even types represented by ellipsis 2016, such as traffic rate, traffic drop rate, and flow rate. For example, CPU usage performance problems has lists of ranked event types 2018-2021. Each listing contains a different combination of event types and associated ranks and is associated with a particular application performance problem; [0144], In this example, an alert 2216 is displayed in the plot 2210. Each KPI has an associated troubleshoot button; A user can scroll through the log messages and identify the log messages that describe the root cause of the problem. The user clicks on the “remedial measures” button 2224 to display remedial measures associated with the root cause of the problem; [0145], FIG. 22B shows a remedial measures pane 2226 displayed in the GUI 2200 in response to a user clicking on the “remedial measures” button 2224. 
Remedial measures include restarting the host 2228, restarting the VM 2230, and increasing memory allocation to the VM 2232, where UUID denotes the universal unique identity of the object; (see [0065], [0075], [0079]))
Regarding claim 16, claim 16 contains substantially similar limitations to those found in claim 5. Consequently, claim 16 is rejected for the same reasons.
Regarding claim 6, Harutyunyan teaches all the limitations of claim 4, further comprising:
wherein the updated GUI comprises: a second graphical representation comprising: the first avatar associated with the first aspect; the first indicator of the first status of the first aspect; and the portion of the system data (Harutyunyan Figs. 1-28; [0046], a user interface 302 that provides graphical user interfaces for data center management, system administrators, and application owners to receive alerts, view metrics, log messages, and KPIs, and execute user-selected remedial measures to correct performance problems; [0132], KPI values of Applications 1, 2, and 4 are below a threshold 1812, which indicates the applications are performing normally as represented by normal icons, such as normal icon 1814. On the other hand, KPI values of the Application 3 exceed the threshold 1812, such as KPI value 1814, triggering a warning alert 1816. Threshold 1816 indicates the application exhibits critical behavior that triggers a critical alert icon that is not shown. A user may select “run troubleshooting” by clicking on the button 1818; [0144], In this example, an alert 2216 is displayed in the plot 2210. Each KPI has an associated troubleshoot button. In this example, a user clicks on the troubleshoot button 2218 to start the troubleshooting process of the KPI. In response to receiving the troubleshoot command from the user interface 302, the analytics engine 312 executes the operations to obtain log messages that describe the probable root cause of the performance problem indicated by the KPI threshold violation. The GUI 2200 includes a pane 2220 that displays a plot of the ten largest importance scores of the event types of log messages produced in a time interval as described above with reference to Equations (24) . . . (29). Pane 2222 displays the most recent log messages of the ten largest importance scores. A user can scroll through the log messages and identify the log messages that describe the root cause of the problem. 
The user clicks on the “remedial measures” button 2224 to display remedial measures associated with the root cause of the problem; [0145], FIG. 22B shows a remedial measures pane 2226 displayed in the GUI 2200 in response to a user clicking on the “remedial measures” button 2224. Remedial measures include restarting the host 2228, restarting the VM 2230, and increasing memory allocation to the VM 2232, where UUID denotes the universal unique identity of the object. A user can automatically execute a selected remedial measure by clicking on the corresponding “Execute” button; examiner note: per the instant specification an avatar includes any symbol or shape (see [0065], [0075], [0079]))
Regarding claim 7, Harutyunyan teaches all the limitations of claim 6, further comprising:
wherein the first indicator of the first status of the first aspect is one selected from a list consisting of: a quantification; and a color-coded element (Harutyunyan Figs. 1-28; [0046], a user interface 302 that provides graphical user interfaces for data center management, system administrators, and application owners to receive alerts, view metrics, log messages, and KPIs, and execute user-selected remedial measures to correct performance problems; [0132], KPI values of Applications 1, 2, and 4 are below a threshold 1812, which indicates the applications are performing normally as represented by normal icons, such as normal icon 1814. On the other hand, KPI values of the Application 3 exceed the threshold 1812, such as KPI value 1814, triggering a warning alert 1816. Threshold 1816 indicates the application exhibits critical behavior that triggers a critical alert icon that is not shown. A user may select “run troubleshooting” by clicking on the button 1818; [0144], In this example, an alert 2216 is displayed in the plot 2210. Each KPI has an associated troubleshoot button. In this example, a user clicks on the troubleshoot button 2218 to start the troubleshooting process of the KPI. In response to receiving the troubleshoot command from the user interface 302, the analytics engine 312 executes the operations to obtain log messages that describe the probable root cause of the performance problem indicated by the KPI threshold violation. The GUI 2200 includes a pane 2220 that displays a plot of the ten largest importance scores of the event types of log messages produced in a time interval as described above with reference to Equations (24) . . . (29). Pane 2222 displays the most recent log messages of the ten largest importance scores. A user can scroll through the log messages and identify the log messages that describe the root cause of the problem. 
The user clicks on the “remedial measures” button 2224 to display remedial measures associated with the root cause of the problem; [0145], FIG. 22B shows a remedial measures pane 2226 displayed in the GUI 2200 in response to a user clicking on the “remedial measures” button 2224. Remedial measures include restarting the host 2228, restarting the VM 2230, and increasing memory allocation to the VM 2232, where UUID denotes the universal unique identity of the object. A user can automatically execute a selected remedial measure by clicking on the corresponding “Execute” button)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 8-11 are rejected under 35 U.S.C. 103 as being unpatentable over Harutyunyan in view of Mulwad et al. (US 10430517 B1, published 10/01/2019), hereinafter Mulwad.
Regarding claim 8, Harutyunyan teaches all the limitations of claim 6, further comprising:
the operation of the data processing system based on the user feedback to obtain updated simulated operation of the data processing system; performing a comparison process to determine whether to update the operation of the data processing system (Harutyunyan Figs. 1-28; [0144], In this example, an alert 2216 is displayed in the plot 2210. Each KPI has an associated troubleshoot button. In this example, a user clicks on the troubleshoot button 2218 to start the troubleshooting process of the KPI. In response to receiving the troubleshoot command from the user interface 302, the analytics engine 312 executes the operations to obtain log messages that describe the probable root cause of the performance problem indicated by the KPI threshold violation. The GUI 2200 includes a pane 2220 that displays a plot of the ten largest importance scores of the event types of log messages produced in a time interval as described above with reference to Equations (24) . . . (29). Pane 2222 displays the most recent log messages of the ten largest importance scores. A user can scroll through the log messages and identify the log messages that describe the root cause of the problem. The user clicks on the “remedial measures” button 2224 to display remedial measures associated with the root cause of the problem; [0145], FIG. 22B shows a remedial measures pane 2226 displayed in the GUI 2200 in response to a user clicking on the “remedial measures” button 2224. Remedial measures include restarting the host 2228, restarting the VM 2230, and increasing memory allocation to the VM 2232, where UUID denotes the universal unique identity of the object. A user can automatically execute a selected remedial measure by clicking on the corresponding “Execute” button)
However, Harutyunyan fails to expressly disclose wherein making the determination comprises: simulating, using a digital twin of the data processing system, the operation of the data processing system based on the user feedback to obtain updated simulated operation of the data processing system; and performing a comparison process, using the updated simulated operation of the data processing system and operation criteria for the data processing system to determine whether to update the operation of the data processing system. In the same field of endeavor, Mulwad teaches:
wherein making the determination comprises: simulating, using a digital twin of the data processing system, the operation of the data processing system based on the user feedback to obtain updated simulated operation of the data processing system; and performing a comparison process, using the updated simulated operation of the data processing system and operation criteria for the data processing system to determine whether to update the operation of the data processing system (Mulwad Figs. 1-8; col. 4 [line 47], embodiments of the apparatus, system and method may automatically classify issues into one or more of a series of available appropriate issue categories, and may recommend possible solutions and actions based, in part, upon that categorization which a user can manually implement or have automatically implemented, such through a user interface dedicated to resolving IT issues; col. 13 [line 19], The presented solution set 118 may be presented via interface 130, as discussed throughout. Of note, the interface 130 may, in certain embodiments, further include actuatable components 138 in the visualization presented to the user, which may be provided in order to implement ones of the reconfigurations and/or solutions presented in the solution set 118. By way of non-limiting example, one of the solutions 118 may be actuated via a simple actuation/click/touch by the user on a visual component 138; col. 13 [line 32], as part of the interface 130, an isolated solution environment 142 may be provided in system 110. For example, the isolated environment 142 aspect of interface 130 may provide a “safe” environment in which a solution 118 may be “tested”, such as in the form of a simulation, against the present data state of a node or network to assess an impact of implementation of that solution 118 from solution set 118, apart from and thus without access to the “live” network 150, before ultimate implementation of that solution 118 by the user. 
Such an impact analysis may be a “what if′ scenario” posed by the user. In short, an aspect of interface 130 may thus be configured to engage in a runtime simulation and perform an impact analysis of a selected or prospective solution 118 proposed by solutions engine 120, and this aspect may be embodied as an isolated environment 142, by way of non-limiting example. Correspondingly, interface 130 and/or environment 142 may be or include an analytics subsystem 127, which may include one or more rulesets that, given access to a data snapshot of nodes 150(1), 150(2) and/or network 150, may provide an executable simulation of what effect undertaking an action would likely have on that snapshot data; col. 17 [line 56], FIG. 8 illustrates an embodiment of an interface 602 in accordance with disclosure. More particularly, illustratively shown is an embodiment of the visualization provided by interface 602, which may be an additional or alternative embodiment to the interface 130 discussed above. In the illustration, a user may enter an inquiry 604 into dialog 606. As is also shown, one or more categories 608 of prospective solutions 618a, 618b, 618c may be provided to the user)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated wherein making the determination comprises: simulating, using a digital twin of the data processing system, the operation of the data processing system based on the user feedback to obtain updated simulated operation of the data processing system; and performing a comparison process, using the updated simulated operation of the data processing system and operation criteria for the data processing system to determine whether to update the operation of the data processing system as suggested in Mulwad into Harutyunyan. Doing so would be desirable because the known art affords no opportunity to override a category selected previously, whether that category was selected by the current IT personnel using the electronic interface or by another IT personnel. Nor does the known art allow for IT personnel to make judgments about the favorability of prior attempted solutions to an IT issue, but rather allows only for the entry of issue notes by IT personnel on the presently-attempted solution. Thus, as a particular IT person learns in her position, or as multiple IT personnel at an enterprise enhance their collective knowledge base over time, that learning cannot be readily reflected for addressing future issues within the typical IT ticket system (see Mulwad col. 1 [line 48]). The known art provides little in the way of guidance to IT personnel who do not have significant experience and/or preconceived ideas of how to address an open IT ticket. That is, the historic accumulation of knowledge regarding the addressing of IT issues, either within the enterprise or outside of it, is not typically available to the IT personnel through typical known electronic IT ticket systems (see Mulwad col. 1 [line 60]). Therefore, the need exists for an apparatus, system, and method for an agent that intelligently addresses IT issues (see Mulwad col. 2 [line 1]). The system of Mulwad would improve the system of Harutyunyan by enabling the user to engage in a runtime simulation and perform an impact analysis of a selected or prospective solution to better understand what effect undertaking an action would likely have (see Mulwad col. 13 [line 32]), thereby allowing the user to make more informed decisions regarding how to resolve issues and, consequently, improve the performance of the system.
Regarding claim 9, Harutyunyan in view of Mulwad teaches all the limitations of claim 8, further comprising:
obtaining a quantification of the first status of the first aspect; making a second determination regarding whether the quantification of the first status of the first aspect falls within an approved range for the quantification of the first status of the first aspect (Harutyunyan Figs. 1-28; [0046], a user interface 302 that provides graphical user interfaces for data center management, system administrators, and application owners to receive alerts, view metrics, log messages, and KPIs, and execute user-selected remedial measures to correct performance problems; [0132], KPI values of Applications 1, 2, and 4 are below a threshold 1812, which indicates the applications are performing normally as represented by normal icons, such as normal icon 1814. On the other hand, KPI values of the Application 3 exceed the threshold 1812, such as KPI value 1814, triggering a warning alert 1816. Threshold 1816 indicates the application exhibits critical behavior that triggers a critical alert icon that is not shown. A user may select “run troubleshooting” by clicking on the button 1818; [0144], In this example, an alert 2216 is displayed in the plot 2210. Each KPI has an associated troubleshoot button. In this example, a user clicks on the troubleshoot button 2218 to start the troubleshooting process of the KPI. In response to receiving the troubleshoot command from the user interface 302, the analytics engine 312 executes the operations to obtain log messages that describe the probable root cause of the performance problem indicated by the KPI threshold violation. The GUI 2200 includes a pane 2220 that displays a plot of the ten largest importance scores of the event types of log messages produced in a time interval as described above with reference to Equations (24) . . . (29). Pane 2222 displays the most recent log messages of the ten largest importance scores. 
A user can scroll through the log messages and identify the log messages that describe the root cause of the problem. The user clicks on the “remedial measures” button 2224 to display remedial measures associated with the root cause of the problem; [0145], FIG. 22B shows a remedial measures pane 2226 displayed in the GUI 2200 in response to a user clicking on the “remedial measures” button 2224. Remedial measures include restarting the host 2228, restarting the VM 2230, and increasing memory allocation to the VM 2232, where UUID denotes the universal unique identity of the object. A user can automatically execute a selected remedial measure by clicking on the corresponding “Execute” button)
Mulwad further teaches:
wherein performing the comparison process comprises (Mulwad Figs. 1-8; col. 4 [line 47], embodiments of the apparatus, system and method may automatically classify issues into one or more of a series of available appropriate issue categories, and may recommend possible solutions and actions based, in part, upon that categorization which a user can manually implement or have automatically implemented, such through a user interface dedicated to resolving IT issues; col. 13 [line 19], The presented solution set 118 may be presented via interface 130, as discussed throughout. Of note, the interface 130 may, in certain embodiments, further include actuatable components 138 in the visualization presented to the user, which may be provided in order to implement ones of the reconfigurations and/or solutions presented in the solution set 118. By way of non-limiting example, one of the solutions 118 may be actuated via a simple actuation/click/touch by the user on a visual component 138; col. 13 [line 32], as part of the interface 130, an isolated solution environment 142 may be provided in system 110. For example, the isolated environment 142 aspect of interface 130 may provide a “safe” environment in which a solution 118 may be “tested”, such as in the form of a simulation, against the present data state of a node or network to assess an impact of implementation of that solution 118 from solution set 118, apart from and thus without access to the “live” network 150, before ultimate implementation of that solution 118 by the user. Such an impact analysis may be a “what if′ scenario” posed by the user. In short, an aspect of interface 130 may thus be configured to engage in a runtime simulation and perform an impact analysis of a selected or prospective solution 118 proposed by solutions engine 120, and this aspect may be embodied as an isolated environment 142, by way of non-limiting example. 
Correspondingly, interface 130 and/or environment 142 may be or include an analytics subsystem 127, which may include one or more rulesets that, given access to a data snapshot of nodes 150(1), 150(2) and/or network 150, may provide an executable simulation of what effect undertaking an action would likely have on that snapshot data; col. 17 [line 56], FIG. 8 illustrates an embodiment of an interface 602 in accordance with disclosure. More particularly, illustratively shown is an embodiment of the visualization provided by interface 602, which may be an additional or alternative embodiment to the interface 130 discussed above. In the illustration, a user may enter an inquiry 604 into dialog 606. As is also shown, one or more categories 608 of prospective solutions 618a, 618b, 618c may be provided to the user)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated wherein performing the comparison process comprises as suggested in Mulwad into Harutyunyan. Doing so would be desirable because the known art affords no opportunity to override a category selected previously, whether that category was selected by the current IT personnel using the electronic interface or by another IT personnel. Nor does the known art allow for IT personnel to make judgments about the favorability of prior attempted solutions to an IT issue, but rather allows only for the entry of issue notes by IT personnel on the presently-attempted solution. Thus, as a particular IT person learns in her position, or as multiple IT personnel at an enterprise enhance their collective knowledge base over time, that learning cannot be readily reflected for addressing future issues within the typical IT ticket system (see Mulwad col. 1 [line 48]). The known art provides little in the way of guidance to IT personnel who do not have significant experience and/or preconceived ideas of how to address an open IT ticket. That is, the historic accumulation of knowledge regarding the addressing of IT issues, either within the enterprise or outside of it, is not typically available to the IT personnel through typical known electronic IT ticket systems (see Mulwad col. 1 [line 60]). Therefore, the need exists for an apparatus, system, and method for an agent that intelligently addresses IT issues (see Mulwad col. 2 [line 1]). The system of Mulwad would improve the system of Harutyunyan by enabling the user to engage in a runtime simulation and perform an impact analysis of a selected or prospective solution to better understand what effect undertaking an action would likely have (see Mulwad col. 13 [line 32]), thereby allowing the user to make more informed decisions regarding how to resolve issues and, consequently, improve the performance of the system.
Regarding claim 10, Harutyunyan in view of Mulwad teaches all the limitations of claim 9, further comprising:
wherein the approved range for the quantification of the first status of the first aspect is indicated by an administrator for the data processing system (Harutyunyan Figs. 1-28; [0042], an operations management interface may be displayed in a graphical user interface to system administrators and other users; [0044], The operations manager 132 is an automated computer implemented tool that aids IT administrators with monitoring, troubleshooting, and managing the health and capacity of the data center virtual environment; [0046], a user interface 302 that provides graphical user interfaces for data center management, system administrators, and application owners to receive alerts, view metrics, log messages, and KPIs, and execute user-selected remedial measures to correct performance problems; maintains dynamic thresholds of metrics, and generates alerts in response to KPIs that violate corresponding thresholds; [0132], KPI values of Applications 1, 2, and 4 are below a threshold 1812, which indicates the applications are performing normally as represented by normal icons, such as normal icon 1814. On the other hand, KPI values of the Application 3 exceed the threshold 1812, such as KPI value 1814, triggering a warning alert 1816. Threshold 1816 indicates the application exhibits critical behavior that triggers a critical alert icon that is not shown. A user may select “run troubleshooting” by clicking on the button 1818; [0138], the user-defined threshold may be set to 70%, 60%, 50% or 40%. The importance score computed in Equation (28) is assigned to each corresponding event type. The event types are rank ordered based on the corresponding importance scores to identify the highest ranked event types that affect the KPI. For example, the highest ranked event types have importance scores above the user-defined threshold Th.sub.score. 
The combination of highest ranked event types associated with a KPI that indicates a performance problem with an application identify the root cause of the performance problem with the application; [0144], In this example, an alert 2216 is displayed in the plot 2210. Each KPI has an associated troubleshoot button. In this example, a user clicks on the troubleshoot button 2218 to start the troubleshooting process of the KPI. In response to receiving the troubleshoot command from the user interface 302, the analytics engine 312 executes the operations to obtain log messages that describe the probable root cause of the performance problem indicated by the KPI threshold violation. The GUI 2200 includes a pane 2220 that displays a plot of the ten largest importance scores of the event types of log messages produced in a time interval as described above with reference to Equations (24) . . . (29). Pane 2222 displays the most recent log messages of the ten largest importance scores. A user can scroll through the log messages and identify the log messages that describe the root cause of the problem. The user clicks on the “remedial measures” button 2224 to display remedial measures associated with the root cause of the problem; [0145], FIG. 22B shows a remedial measures pane 2226 displayed in the GUI 2200 in response to a user clicking on the “remedial measures” button 2224. Remedial measures include restarting the host 2228, restarting the VM 2230, and increasing memory allocation to the VM 2232, where UUID denotes the universal unique identity of the object. A user can automatically execute a selected remedial measure by clicking on the corresponding “Execute” button; [0148] In another implementation, the inference models can be used to identify log messages of event types that impact performance of data center objects in order to optimize planning and avoid performance problems with objects. 
For example, a system administrator may observe via the graphical user interface that a KPI has not violated a KPI threshold, but KPI values have not stayed in a desired range of values. For example, the KPI may be latency metric of an object, such as Object 02 in the GUI 2200. Suppose a systems administrator observes that the KPI has not violated a corresponding latency threshold in pane 2206, but the KPI often indicates an increase in network latency for periods that are longer than expected. The KPI has an associated inference model as described above. Even though the KPI has not violate a KPI threshold, the systems administrator may click on the troubleshoot button 2218 to view event types with the largest importance scores and log messages associated with the event types in panes 2220 and 2222, respectively)
Regarding claim 11, Harutyunyan in view of Mulwad teaches all the limitations of claim 8, further comprising:
presenting, via the updated GUI, a preference selection tool to allow the user to establish relative preferences between at least two of the aspects of the multiple aspects (Harutyunyan Figs. 1-28; [0138], the user-defined threshold may be set to 70%, 60%, 50% or 40%. The importance score computed in Equation (28) is assigned to each corresponding event type. The event types are rank ordered based on the corresponding importance scores to identify the highest ranked event types that affect the KPI. For example, the highest ranked event types have importance scores above the user-defined threshold Th.sub.score. The combination of highest ranked event types associated with a KPI that indicates a performance problem with an application identify the root cause of the performance problem with the application; [0144], In this example, an alert 2216 is displayed in the plot 2210. Each KPI has an associated troubleshoot button. In this example, a user clicks on the troubleshoot button 2218 to start the troubleshooting process of the KPI. In response to receiving the troubleshoot command from the user interface 302, the analytics engine 312 executes the operations to obtain log messages that describe the probable root cause of the performance problem indicated by the KPI threshold violation. The GUI 2200 includes a pane 2220 that displays a plot of the ten largest importance scores of the event types of log messages produced in a time interval as described above with reference to Equations (24) . . . (29). Pane 2222 displays the most recent log messages of the ten largest importance scores. A user can scroll through the log messages and identify the log messages that describe the root cause of the problem. The user clicks on the “remedial measures” button 2224 to display remedial measures associated with the root cause of the problem; [0145], FIG. 22B shows a remedial measures pane 2226 displayed in the GUI 2200 in response to a user clicking on the “remedial measures” button 2224. Remedial measures include restarting the host 2228, restarting the VM 2230, and increasing memory allocation to the VM 2232, where UUID denotes the universal unique identity of the object. A user can automatically execute a selected remedial measure by clicking on the corresponding “Execute” button; [0148], Even though the KPI has not violate a KPI threshold, the systems administrator may click on the troubleshoot button 2218 to view event types with the largest importance scores and log messages associated with the event types in panes 2220 and 2222, respectively)
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Clark (US 20070061385 A1), see Fig. 6 and [0096].
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN T REPSHER III whose telephone number is (571)272-7487. The examiner can normally be reached Monday - Friday, 8AM-5PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Welch can be reached at (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOHN T REPSHER III/ Primary Examiner, Art Unit 2143