DETAILED ACTION
This communication is in response to Application No. 18/322,341 filed on 5/23/2023. The amendment presented on 1/22/2026, which cancels claims 12-20 and adds new claims 21-27, is hereby acknowledged. Claims 1-11 and 21-27 have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant’s election without traverse of claims 1-11 in the reply filed on 1/22/2026 is acknowledged.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-11 and 21-27 are rejected under 35 U.S.C. 103 as being unpatentable over Harutyunyan et al. (hereinafter Harutyunyan) (US 2023/0281070) in view of Bailey et al. (hereinafter Bailey) (US 2021/0219460).
Regarding claims 1 and 21, Harutyunyan teaches as follows:
A data center monitoring system (interpreted as the operations management server 132 in figure 1) for a data center (the operations management server 132 automatically monitors in real time KPIs of data center objects for corresponding threshold violations. A KPI is a metric constructed from other metrics and is used as a real time indicator of the health of an object within the data center, see, ¶ [0167]), the data center including Information Technology (IT) equipment and Operational Technology (OT) equipment, the data center monitoring system comprising:
an input (the electronic displays, including visual display screen, audio speakers, and other output interfaces, and the input devices, including mice, keyboards, touch screens, and other such input interfaces, together constitute input and output interfaces that allow the computer system to interact with human users, see, ¶ [0203] and figure 40) for receiving signals from the IT equipment and signals from the OT equipment (the operations management server 132 receiving object information from various physical and virtual objects. Directional arrows represent object information sent from physical and virtual resources to the operations manager 132, see, ¶ [0053] and figure 2A);
a display (FIG. 40 shows an example architecture of a computer system that may be used to host the operations management server 132 and perform automated methods and system for identifying and resolving root causes of performance problems in objects of a data center… The electronic displays, including visual display screen, see, ¶ [0203] and figure 40); and
a controller operably coupled to the input and the display (the busses or serial interconnections, in turn, connect the CPUs and memory with specialized processors, such as a graphics processor 4018, and with one or more additional bridges 4020, which are interconnected with high-speed serial links or with multiple controllers 4022-4027, such as controller 4027, that provide access to various different types of computer-readable media, such as computer-readable medium 4028, electronic displays, input devices, and other such components, subcomponents, and computational resources, see, ¶ [0203] and figure 40), the controller configured to:
receive one or more current performance parameters of the OT system and/or one or more current performance parameters of the IT system (the operations management server 132 automatically monitors in real time KPIs (equivalent to applicant’s current performance parameters) of data center objects for corresponding threshold violations. A KPI is a metric constructed from other metrics and is used as a real time indicator of the health of an object within the data center, see, ¶ [0167]);
receive an alarm (when any one of the rules Rule 1, Rule 2 and Rule 3 is satisfied, the operations management server 132 displays the corresponding alert in a data center dashboard of a GUI of the data center and identifies the object as an outlier object, see, ¶ [0183]);
utilize one or more current performance parameters (the operations management server 132 automatically monitors in real time KPIs of data center objects for corresponding threshold violations. A KPI is a metric constructed from other metrics and is used as a real time indicator of the health of an object within the data center, see, ¶ [0167]) of the OT system and/or one or more current performance parameters of the IT system, in conjunction with the stored correlation, to determine whether one or more of the current performance parameters of the non-source system correlate to a possible root cause of the alarm of the source system (a KPI that violates a corresponding threshold is an indication that the corresponding object has entered an abnormal operational state (i.e., performance problem). The operations management server 132 monitors the KPIs associated with objects and uses the threshold violations to tag outlier objects as being in abnormal or normal operational states, see, ¶ [0168])(the operations management server 132 determines a root cause based on real time KPIs of the data center objects, see, ¶ [0192]);
provide, on the display, a dashboard that includes:
one or more alarm details of the alarm issued by the source system (when any one of the rules Rule 1, Rule 2 and Rule 3 is satisfied, the operations management server 132 displays the corresponding alert in a data center dashboard of a GUI of the data center and identifies the object as an outlier object, see, ¶ [0183]); and
a listing of one or more possible root causes of the alarm that correlate to the non-source system (the operations management server 132 determines a root cause and recommendation for resolving a performance problem with an object by determining a closest list of event ranks in the root causes and recommendations database to the runtime list of event ranks of the object, see, ¶ [0192])(in block 3307, an alert is displayed in a GUI, the alert identifying the object, the root cause of the performance problem, and the recommendation for resolving the performance problem, see, ¶ [0196] and figure 33).
Harutyunyan identifies alerts for objects in the data center (the objects include virtual objects, such as VMs, containers, applications, programs, and software, and physical objects, such as server computers, data storage devices, network devices, and other physical components of the data center, see, ¶ [0121] and figure 13) as presented above, but does not explicitly teach that the objects are differentiated between an IT system and an OT system.
Bailey teaches as follows:
FIG. 1 depicts a simplified functional block diagram of a data center, and in particular a modular data center (MDC) 100 having IT compartment 103 within which information technology (IT) components 101 (equivalent to applicant’s IT system) and operation technology (OT) components 102 (equivalent to applicant’s OT system) that are efficiently maintained within specified environmental operating conditions by environmental subsystem 104. Certain OT components 102 can be installed within a separate compartment such as utility room 105. Utility room 105 provides a higher degree of access to infrastructure support functions, such as management system 106, network subsystem 107, security subsystem 108, fire suppression subsystem 109, and power distribution subsystem 110 (see, ¶ [0022] and figure 1).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Harutyunyan with Bailey to include a data center separated into an IT system and an OT system as taught by Bailey in order to efficiently identify correlated alerts between the IT system and the OT system.
Regarding claims 2 and 22, Harutyunyan teaches as follows:
When the status label identifies abnormal behavior (i.e., the object has a performance problem), an alert is triggered and displayed in a data center dashboard of a GUI of the data center, such as the dashboard of a systems administrator or a data center tenant, and identifies the object as an abnormally behaving object (see, ¶ [0182]); and
FIGS. 32A-32C show an example of using the root causes and recommendations database to identify a root cause of a performance problem with the object Oj and provide a recommendation for resolving the problem… The GUI includes a button that enables a user to execute the operation of determining the root cause and the recommendation for resolving the performance problem (see, ¶ [0193]).
Therefore, Harutyunyan’s GUI interface inherently teaches the claimed limitations.
Regarding claims 3 and 23, Harutyunyan teaches as follows:
Wherein the correlation comprises a look-up table (interpreted as the table 2314 in figure 23B) that provides the correlation between one or more performance parameters of the IT system on one or more alarms of the OT system and/or one or more performance parameters of the OT system on one or more alarms of the IT system (FIG. 23B shows example contents of an operational status database 2312, the contents of which are represented as a table 2314. Column 2316 lists the runtime distributions. Column 2318 lists the significant events. Column 2320 lists the status labels assigned to the runtime distributions and/or significant events, see, ¶ [0164] and figure 23B)(in an alternative implementation, state labeling is automated and performed using key performance indicator (“KPI”) metrics that correspond to the outlier objects. The operations management server 132 automatically monitors in real time KPIs of data center objects for corresponding threshold violations. A KPI is a metric constructed from other metrics and is used as a real time indicator of the health of an object within the data center, see, ¶ [0167]).
Regarding claims 4 and 24, Harutyunyan teaches as follows:
Wherein the correlation is represented at least in part in a machine learning model, wherein teaching the machine learning model includes: monitoring one or more performance parameters of the IT system and identifying resulting impacts on one or more performance parameters of the OT system; and/or monitoring one or more performance parameters of the OT system and identifying resulting impacts on one or more performance parameters of the IT system (the operations management server 132 uses machine learning and the operational status database, such as databases 2302, 2312, and 2402, to construct a model that is used to identify the operational state (i.e., normal or abnormal operational state) of the objects Oj (i.e., for j=1, 2, . . . , J) in a runtime window denoted by TWr. In other words, the model is used to determine whether an object is experiencing a performance problem (see, ¶ [0172]).
Regarding claims 5 and 25, Harutyunyan in view of Bailey teaches similar limitations as presented above.
Harutyunyan further teaches modulating various equipment as follows:
Each list of event ranks includes a previously identified root cause of a performance problem associated with the list of event ranks denoted by RCh, and a recommendation of a remedial measure previously used to correct the performance problem denoted by Rech. Each list of event ranks serves as a fingerprint for a particular kind of performance problem (equivalent to applicant’s resulting impacts), root cause of the problem, and a recommendation (equivalent to applicant’s modulating equipment) for resolving the problem… For example, if the performance problem is a long delay in response time to client requests, the root cause RC1 may be that a particular VM used to run an application component is not responding, and a previously used remedial measure to correct the problem, denoted by the recommendation Rec1, is restarting the server computer that hosts the VM (see, ¶ [0191] and figure 31).
Therefore, they are rejected for similar reasons as presented above.
Regarding claims 6 and 9, Harutyunyan teaches as follows:
Wherein: the alarm comprises an IT alarm, where the source system is the IT system; and the listing of one or more possible root causes of the alarm comprises suspected problems with one or more pieces of OT equipment of the OT system (each list of event ranks serves as a fingerprint for a particular kind of performance problem, root cause of the problem, and a recommendation for resolving the problem, see, ¶ [0191] and figure 31).
Regarding claims 7 and 11, Harutyunyan teaches as follows:
The operations management server 132 displays an alert in a GUI. FIG. 32B shows an example GUI that displays an alert associated with the object Oj. The alert identifies the object Oj as having a performance problem. The GUI includes a button that enables a user to execute the operation of determining the root cause and the recommendation for resolving the performance problem (see, ¶ [0193]);
the objects include virtual objects, such as VMs, containers, applications, programs, and software, and physical objects, such as server computers, data storage devices, network devices, and other physical components of the data center (see, ¶ [0121] and figure 13); and
if the object is a network device, such as a switch, router, or a network interface controller, KPIs include latency, throughput, number of packets dropped per unit time (equivalent to applicant’s network error), or number of packets transmitted per unit time (see, ¶ [0167]).
Regarding claims 8 and 10, Harutyunyan teaches as follows:
The operations management server 132 automatically monitors in real time KPIs of data center objects for corresponding threshold violations. A KPI is a metric constructed from other metrics and is used as a real time indicator of the health of an object within the data center (see, ¶ [0167]).
Harutyunyan does not explicitly teach the claimed performance parameters.
Bailey teaches as follows:
Management controller 117 can determine cooling requirements based in part on the received cooling requirements of the IHSs 120 and based on values provided by power consumption monitor 160, rack pressure sensor 161a, rack humidity sensor 162a, and rack temperature sensor 163a… Management controller 117 can determine respective cooling requirements for each of separate zones defined within IT compartment, based in part on cold aisle (CA) environmental sensors depicted as CA pressure sensor 161c, CA temperature sensor 162c, and CA humidity sensor 163c and/or based in part on hot aisle (HA) environmental sensors depicted as HA pressure sensor 161d, HA temperature sensor 162d, and HA humidity sensor 163d (see, ¶ [0030] and figure 1).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Harutyunyan with Bailey to include the multiple sensors as taught by Bailey in order to efficiently monitor specific performance parameters of monitored objects.
Regarding claim 26, Harutyunyan in view of Bailey teaches the same limitations as presented in the rejection of claims 6 and 7. Therefore, it is rejected for similar reasons as presented above.
Regarding claim 27, Harutyunyan in view of Bailey teaches the same limitations as presented in the rejection of claims 9 and 10. Therefore, it is rejected for similar reasons as presented above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jeong S Park whose telephone number is (571)270-1597. The examiner can normally be reached Monday through Friday 8:00-4:30 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Glenton B Burgess can be reached at 571-272-3949. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JEONG S PARK/Primary Examiner, Art Unit 2454
March 7, 2026