Prosecution Insights
Last updated: April 19, 2026
Application No. 18/313,993

SYSTEMS, APPARATUS, AND METHODS FOR MANAGING COOLING OF COMPUTE COMPONENTS

Non-Final OA: §102, §103
Filed: May 08, 2023
Examiner: EVERETT, CHRISTOPHER E
Art Unit: 2117
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intel Corporation
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83%, above average (692 granted / 830 resolved; +28.4% vs TC avg)
Interview Lift: +23.6% for resolved cases with an interview (strong)
Avg Prosecution: 2y 9m typical timeline; 37 applications currently pending
Total Applications: 867 across all art units (career history)
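
For readers who want to reproduce the figures above, here is a minimal sketch (not the vendor's actual methodology) of how an allow rate and interview lift can be derived from resolved-case counts. The 692/830 split is taken from the panel; the without-interview rate is backed out from the 99% and +23.6% figures shown, and the function and field names are illustrative.

```python
# Minimal sketch (not the vendor's methodology) of how the figures above can be
# reproduced from resolved-case counts. The 692/830 split comes from the panel;
# the without-interview rate is backed out from the 99% and +23.6% figures.

def allow_rate(granted: int, resolved: int) -> float:
    """Share of resolved applications that issued as patents."""
    return granted / resolved

def interview_lift(rate_with_interview: float, rate_without_interview: float) -> float:
    """Percentage-point gain in allowance rate when an interview was held."""
    return rate_with_interview - rate_without_interview

career = allow_rate(692, 830)                               # ~0.834 -> "83%"
lift = interview_lift(rate_with_interview=0.99,
                      rate_without_interview=0.99 - 0.236)  # backed-out split
print(f"Career allow rate: {career:.1%}")                   # 83.4%
print(f"Interview lift: {lift:+.1%}")                       # +23.6%
```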

Statute-Specific Performance

§101: 8.3% (-31.7% vs TC avg)
§102: 25.7% (-14.3% vs TC avg)
§103: 53.4% (+13.4% vs TC avg)
§112: 7.6% (-32.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 830 resolved cases.
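
A short sketch of the comparison this panel charts. The panel does not define what the percentages measure, so treating them as per-statute rejection rates is an assumption on my part, and the Tech Center averages are back-calculated from the "vs TC avg" deltas shown, so they are estimates.

```python
# Sketch of the statute comparison above. Treating the percentages as
# per-statute rejection rates is an assumption; the Tech Center averages are
# back-calculated from the "vs TC avg" deltas and are therefore estimates.

examiner_rate = {"§101": 0.083, "§102": 0.257, "§103": 0.534, "§112": 0.076}
delta_vs_tc   = {"§101": -0.317, "§102": -0.143, "§103": +0.134, "§112": -0.324}

for statute, rate in examiner_rate.items():
    tc_estimate = rate - delta_vs_tc[statute]  # back out the TC average
    trend = "above" if delta_vs_tc[statute] > 0 else "below"
    print(f"{statute}: examiner {rate:.1%} vs TC ~{tc_estimate:.1%} ({trend} average)")
```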

Office Action

§102 §103
DETAILED ACTION In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. Claim Rejections - 35 USC § 102 The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claims 1, 5-11, and 13-16 are rejected under 35 U.S.C. 102(a)(1) as being unpatentable by U.S. Patent No. 9,003,003 (Hyser). Claim 1: The cited prior art describes an apparatus comprising: (Hyser: see the workload/cooling management hardware module 100 as illustrated in figure 1) interface circuitry; (Hyser: “In one embodiment, WCMHM 100 includes a workload state information accessor 105, a cooling state information accessor 145, a state information comparor 165, and a workload re-positioning instruction generator 170.” Col. 3, lines 56-59) machine readable instructions; and (Hyser: “The computer readable and computer executable instructions reside, for example, in data storage features such as computer usable volatile and non-volatile memory. However, the computer readable and computer executable instructions may reside in any type of computer readable medium. In one embodiment, process 400 is performed by WCMHM 100 of FIG. 1.” Col. 9, lines 2-8) programmable circuitry to at least one of instantiate or execute the machine readable instructions to: (Hyser: “FIG. 4 is a flowchart of an example method for managing computer resources, in accordance with embodiments of the present technology. In one embodiment, process 400 is carried out by processors and electrical components under the control of computer readable and computer executable instructions.” Col. 8, line 64 through col. 
9, line 2) identify a workload to be performed by a compute device; (Hyser: see the access computer workload state information 405 as illustrated in figure 4) identify a service level objective associated with the workload or the compute device; (Hyser: see the access performance requirements associated with the data center 405 as illustrated in figure 4; “said performance requirements including rules associated with a service level agreement to be met in performing the workloads” claim 1) determine a parameter of a coolant in an environment including the compute device; and (Hyser: see the access computer cooling state 410 as illustrated in figure 4) cause at least one of (a) a cooling distribution unit to cause the coolant parameter to be adjusted to enable the service level objective to be satisfied or (Hyser: see the computer cooling resources adjustment instructions to meet the performance requirements 425 and the sends cooling resource adjustment instructions 440 as illustrated in figure 4) (b) the workload to be adjusted to enable the service level objective to be satisfied. (Hyser: see the workload repositioning instructions to meet the performance requirements 423 as illustrated in figure 4) Claim 5: The cited prior art describes the apparatus of claim 1, wherein the workload is a first workload, (Hyser: see the workloads 130A-130E as illustrated in figure 1) the compute device is a first compute device, (Hyser: see the servers 115A-115J as illustrated in figure 1) the first compute device to perform a second workload, and (Hyser: see the workloads 130A-130E as illustrated in figure 1) the programmable circuitry is to cause the second workload to be re-distributed from the first compute device to a second compute device based on one or more of the service level objective, the coolant parameter, or the adjustment to the coolant parameter. (Hyser: “Further, FIG. 1 shows workload repositioning instructions 175 being generated by workload repositioning instruction generator 170. These workload repositioning instructions 175 may comprise, but are not limited to, instructions to reposition workload on the one or more servers 115A-115J to meet performance requirements 140 and powering on and/or off servers in response to the repositioning of workloads on the one or more servers 115A-115J.” col. 4, lines 54-61; “Based upon accessed information concerning workload placement, cooling conditions of the servers and the air conditioning units, and performance requirements of the data center, the WCMHM 100 may cause workload to be migrated to different servers of a data center and cause air conditioning units to be turned up and/or down to meet SLAs while also conserving overall resources.” Col. 5, lines 58-64) Claim 6: The cited prior art describes the apparatus of claim 1, wherein the programmable circuitry is to determine the adjustment to the coolant parameter based on an expected operating temperature range of the compute device during performance of the workload. (Hyser: “Furthermore, in FIG. 1, the workload state information accessor 105 accesses workload state information 110 in the form of cooling conditions 135 of one or more servers 115A-115J. Referring now to FIG. 2, in one embodiment, these cooling conditions 135 for each server are quantified as a local workload placement index (LWPI) 225. The LWPI 225 and its function is well known in the art. The WCMHM 100 accesses this LWPI 225 measurement for each server of the one or more servers 115A-115J. 
More particularly, the temperatures 215 and the air flow 220 with the zone of influence of each cooling resource 155A-155F (in essence, the cooling efficiency capability for that specific area) is measured by the LWPI 225. The LWPI 225 determines the best available location for placement of a workload based on that location's cooling efficiency capability.” Col. 6, lines 26-40) Claim 7: The cited prior art describes the apparatus of claim 6, wherein the coolant parameter includes a temperature of the coolant or a flow rate of the coolant. (Hyser: “FIG. 2 further shows temperatures 215, and air flow 220 within a zone of influence of each cooling resource 155A-155F associated with one or more servers 115A-115J. The phrase "zone of influence", in the context of embodiments of the present technology, refers to an area affected by (temperature is increased or decrease in this area due to the influence of a cooling resources) each cooling resource of cooling resources 155A-155F. In one embodiment, the temperatures 215 and air flow 220 within a zone of influence of each cooling resource 155A-155F associated with one or more servers 115A-115J is quantified as a local workload placement index (LWPI) 225.” Col. 5, lines 10-21) Claim 8: The cited prior art describes the apparatus of claim 6, wherein the compute device is a first compute device, and (Hyser: see the servers 115A-115J as illustrated in figure 1) the programmable circuitry is to cause the cooling distribution unit to adjust a flow of the coolant between the first compute device and a second compute device based on the workload. (Hyser: “In continuing with the example begun above, these cooling resource adjustment instructions 210 adjust cooling resources 155A and 155B to be powered off, while turning up cooling resource 155F to emit more cooling air.” Col. 8, lines 16-20; “Referring to FIG. 2, a block diagram of an example of a WCMHM 100 upon which embodiments of the present technology can be implemented is shown. In one embodiment, WCMHM 100 further includes a cooling resource adjustment instruction generator 205, cooling resource adjustment instructions 210 and instruction sender 230. In embodiments of the present technology, cooling resource adjustment instructions 210 include, but are not limited to, instructions to adjust cooling resources 155A-155F (e.g., adjusting set points or valves controlling movement of a cooling medium) and to power cooling resources 115A-155F on and/or off.” Col. 4, line 62 through col. 5, line 5) Claim 9: The cited prior art describes the apparatus of claim 1, wherein the service level objective includes (Hyser: see the access performance requirements associated with the data center 405 as illustrated in figure 4; “said performance requirements including rules associated with a service level agreement to be met in performing the workloads” claim 1) a target throughput, a target latency, (Hyser: “More specifically, the rules and polices govern the minimum acceptable performance of the services. One rule and policy may relate to response time of the data center. For example, a rule and policy may govern a minimum acceptable response time required for responding to a customer's click of a button during an on-line transaction. Once the customer clicks a button to select an item to purchase, the customer expects this selected item to be placed in a shopping cart. 
If this item is placed in the shopping cart too slowly, the customer may become frustrated with the wait time and terminate the transaction without purchasing anything. Owners of services that are to be transacted utilizing the data center desire that these minimum performance requirements be met, as described in an SLA.” Col. 4, lines 13-26) target instructions per second, or an operating temperature threshold of the compute device or of the workload. Claim 10: The cited prior art describes the apparatus of claim 1, wherein the programmable circuitry includes one or more of: at least one of a central processor unit, a graphics processor unit, or a digital signal processor, the at least one of the central processor unit, the graphics processor unit, or the digital signal processor having control circuitry to control data movement within the programmable circuitry, arithmetic and logic circuitry to perform one or more first operations corresponding to machine-readable data, and one or more registers to store a result of the one or more first operations, the machine-readable data in the apparatus; (Hyser: see the processors 506A-C and memories 510, 508, 512 as illustrated in figure 5; “FIG. 4 is a flowchart of an example method for managing computer resources, in accordance with embodiments of the present technology. In one embodiment, process 400 is carried out by processors and electrical components under the control of computer readable and computer executable instructions. The computer readable and computer executable instructions reside, for example, in data storage features such as computer usable volatile and non-volatile memory. However, the computer readable and computer executable instructions may reside in any type of computer readable medium. In one embodiment, process 400 is performed by WCMHM 100 of FIG. 1.” Col. 8, line 64 through col. 9, line 8) a Field Programmable Gate Array (FPGA), the FPGA including logic gate circuitry, a plurality of configurable interconnections, and storage circuitry, the logic gate circuitry and the plurality of the configurable interconnections to perform one or more second operations, the storage circuitry to store a result of the one or more second operations; or Application Specific Integrated Circuitry (ASIC) including logic gate circuitry to perform one or more third operations. Claim 11: The cited prior art describes a non-transitory machine readable storage medium comprising instructions to cause programmable circuitry to at least: (Hyser: ““FIG. 4 is a flowchart of an example method for managing computer resources, in accordance with embodiments of the present technology. In one embodiment, process 400 is carried out by processors and electrical components under the control of computer readable and computer executable instructions. The computer readable and computer executable instructions reside, for example, in data storage features such as computer usable volatile and non-volatile memory. However, the computer readable and computer executable instructions may reside in any type of computer readable medium. In one embodiment, process 400 is performed by WCMHM 100 of FIG. 1.” Col. 8,, line 64 through col. 9, line 8) identify a first coolant parameter associated with a first compute device based on outputs generated by a first sensor associated with the first compute device; (Hyser: see the thermal sensor 245A coupled to the server 115A as illustrated in figure 3; “Referring now to 445 of FIG. 
4 and as described herein, in one embodiment workload state information 110 comprising cooling condition 135 of one or more servers 115A-115J is received by a computer from one or more thermal sensors 245A and 245B coupled with one or more servers 115A and 115B.” col. 9, lines 60-65) identify a second coolant parameter associated with a second compute device based on outputs generated by a second sensor associated with the second compute device; (Hyser: see the thermal sensor 245B coupled to the server 115B as illustrated in figure 3; “Referring now to 445 of FIG. 4 and as described herein, in one embodiment workload state information 110 comprising cooling condition 135 of one or more servers 115A-115J is received by a computer from one or more thermal sensors 245A and 245B coupled with one or more servers 115A and 115B.” col. 9, lines 60-65) identify a first workload assigned to the first compute device, the first workload associated with a first service level objective for the first compute device; (Hyser: see the access performance requirements associated with the data center 405 as illustrated in figure 4; “said performance requirements including rules associated with a service level agreement to be met in performing the workloads” claim 1; “Referring to 405 of FIG. 4 and as described herein, in one embodiment, workload state information 110 associated with one or more servers 115A-115J in a data center 120 is accessed by a computer. The workload state information 110 comprises workload placement 125 of workloads 130A-130E on the one or more servers 115A-115J, cooling conditions 135 of said one or more servers 115A-115J, and performance requirements 140 associated with the data center 120.” Col. 9, lines 10-17) identify a second workload assigned to the second compute device, the second workload associated with a second service level objective for the second compute device; (Hyser: see the access performance requirements associated with the data center 405 as illustrated in figure 4; “said performance requirements including rules associated with a service level agreement to be met in performing the workloads” claim 1; “Referring to 405 of FIG. 4 and as described herein, in one embodiment, workload state information 110 associated with one or more servers 115A-115J in a data center 120 is accessed by a computer. The workload state information 110 comprises workload placement 125 of workloads 130A-130E on the one or more servers 115A-115J, cooling conditions 135 of said one or more servers 115A-115J, and performance requirements 140 associated with the data center 120.” Col. 9, lines 10-17) determine a cooling parameter for cooling the first compute device and the second compute device based on the first coolant parameter, the second coolant parameter, the first workload, and the second workload; and (Hyser: “Referring now to 425 of FIG. 4 and as described herein, in one embodiment, and based on the workload repositioning instructions 175, cooling resource adjustment instructions 210 are generated by the computer. The cooling resource adjustment instructions 210 instruct cooling resources 155A-155F to be adjusted to enable the data center 120 to meet the performance requirements 140. Referring to 430 of FIG. 4 and as described herein, in one embodiment, the cooling resource adjustment instruction 210 generated instructs one or more cooling resources of cooling resources 155A-155F to be powered down.” Col. 
9, lines 42-52) cause a cooling distribution unit to control flow of coolant with respect to the first compute device and the second compute device based on the cooling parameter. (Hyser: see the computer cooling resources adjustment instructions to meet the performance requirements 425 as illustrated in figure 4; “In one embodiment, a cooling resource is an air conditioning unit. It should be understood that a cooling resource is any component that has an effect on the cooling environment of data center 120. The term "cooling conditions" refers to the state of the cooling resource. For example, but not limited to, whether the cooling resource is powered on or off, and at what temperature the cooling resource is running and/or capable of running.” Col. 4, lines 43-50; “instructions to adjust cooling resources 155A-155F (e.g., adjusting set points or valves controlling movement of a cooling medium)” col. 5, lines 2-4) Claim 13: The cited prior art describes the non-transitory machine readable storage medium of claim 11, wherein the instructions cause the programmable circuitry to: determine a third coolant parameter of the coolant based on the first workload, the cooling parameter including the third coolant parameter; and (Hyser: see the access computer cooling state 410 as illustrated in figure 4; see the servers 115A-115J as illustrated in figure 1) cause the cooling distribution unit to provide the coolant having the third coolant parameter for cooling the first compute device at a first time. (Hyser: see the computer cooling resources adjustment instructions to meet the performance requirements 425 and the sends cooling resource adjustment instructions 440 as illustrated in figure 4; see the servers 115A-115J as illustrated in figure 1) Claim 14: The cited prior art describes the non-transitory machine readable storage medium of claim 13, wherein the instructions cause the programmable circuitry to: detect, based on outputs of the first sensor, a change in the third coolant parameter after exposure of the coolant the first compute device at a second time; and (Hyser: “Referring now to 445 of FIG. 4 and as described herein, in one embodiment workload state information 110 comprising cooling condition 135 of one or more servers 115A-115J is received by a computer from one or more thermal sensors 245A and 245B coupled with one or more servers 115A and 115B.” col. 9, lines 60-65; “Referring still to FIG. 2, in one embodiment, the WCMHM 100 receives workload state information 110 comprising cooling conditions 135 of the one or more servers 115A-115J from thermal sensors 245A-245C coupled with the one or more servers 115A-115J. In one embodiment, the thermal sensors 245A-245C are only positioned within the vicinity of the one or more servers 115A-115J and not directly attached to the one or more servers 115A-115J. However, in one embodiment, and referring to FIG. 3, the thermal sensors 245A and 245B are shown coupled with servers 115A and 115B, respectively.” Col. 8, lines 43-53) adjust the cooling parameter based on the change. 
(Hyser: see the computer cooling resources adjustment instructions to meet the performance requirements 425 and the sends cooling resource adjustment instructions 440 as illustrated in figure 4) Claim 15: The cited prior art describes the non-transitory machine readable storage medium of claim 11, wherein the cooling parameter includes a schedule for performance of the first workload by the first compute device and performance of the second workload by the second compute device and the instructions cause the programmable circuitry to transmit the schedule to the first compute device and the second compute device. (Hyser: see the workload repositioning instructions to meet the performance requirements 423 and the sending of the workload repositioning instructions 435 as illustrated in figure 4) Claim 16: The cited prior art describes the non-transitory machine readable storage medium of claim 11, wherein the first compute device includes a first compute component and a second compute component, and the instructions cause the programmable circuitry to cause the cooling distribution unit to control the flow of coolant with respect to the first compute component and the second compute component. (Hyser: see the servers 115A-115J as illustrated in figure 1; “Furthermore, in FIG. 1, the workload state information accessor 105 accesses workload state information 110 in the form of cooling conditions 135 of one or more servers 115A-115J. Referring now to FIG. 2, in one embodiment, these cooling conditions 135 for each server are quantified as a local workload placement index (LWPI) 225. The LWPI 225 and its function is well known in the art. The WCMHM 100 accesses this LWPI 225 measurement for each server of the one or more servers 115A-115J. More particularly, the temperatures 215 and the air flow 220 with the zone of influence of each cooling resource 155A-155F (in essence, the cooling efficiency capability for that specific area) is measured by the LWPI 225. The LWPI 225 determines the best available location for placement of a workload based on that location's cooling efficiency capability.” Col. 6, lines 26-40; “In continuing with the example begun above, these cooling resource adjustment instructions 210 adjust cooling resources 155A and 155B to be powered off, while turning up cooling resource 155F to emit more cooling air.” Col. 8, lines 16-20; “Referring to FIG. 2, a block diagram of an example of a WCMHM 100 upon which embodiments of the present technology can be implemented is shown. In one embodiment, WCMHM 100 further includes a cooling resource adjustment instruction generator 205, cooling resource adjustment instructions 210 and instruction sender 230. In embodiments of the present technology, cooling resource adjustment instructions 210 include, but are not limited to, instructions to adjust cooling resources 155A-155F (e.g., adjusting set points or valves controlling movement of a cooling medium) and to power cooling resources 115A-155F on and/or off.” Col. 4, line 62 through col. 5, line 5) Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 9,003,003 (Hyser) in view of U.S. Patent Application Publication No. 2019/0069433 (Balle). Claim 2: Hyser does not explicitly describe a heat influx as described below. However, Balle teaches the heat influx as described below. The cited prior art describes the apparatus of claim 1, wherein the programmable circuitry is to determine an expected heat influx associated with performance of the workload by the compute device based on one or more of (i) outputs of a sensor associated with the compute device or (ii) the workload. (Balle: “In some embodiments, the orchestrator server 1520 may generate a map of heat generation in the data center 100 using telemetry data (e.g., temperatures, fan speeds, etc.) reported from the sleds 400 and allocate resources to managed nodes as a function of the map of heat generation and predicted heat generation associated with different workloads, to maintain a target temperature and heat distribution in the data center 100.” Paragraph 0080) One of ordinary skill in the art would have recognized that applying the known technique of Hyser, namely, managing computer resources, with the known techniques of Balle, namely, server control based on workloads, would have yielded predictable results and resulted in an improved system. Accordingly, applying the teachings of Hyser to control computer resources based on various inputs to the teachings of Balle to use a heat map to control servers would have been recognized by those of ordinary skill in the art as resulting in an improved data center control system (i.e., the combination of references provides for computer resource control based on various inputs based on the teachings of computer resource control based on workload, service levels, and coolant data in Hyser and based on the teachings of controlling servers based on heat data in Balle). Claim 3: Hyser does not explicitly describe a heat influx as described below. However, Balle teaches the heat influx as described below. The cited prior art describes the apparatus of claim 2, wherein the programmable circuitry is to determine the adjustment to the coolant parameter based on the expected heat influx. (Balle: “In some embodiments, the orchestrator server 1520 may generate a map of heat generation in the data center 100 using telemetry data (e.g., temperatures, fan speeds, etc.) 
reported from the sleds 400 and allocate resources to managed nodes as a function of the map of heat generation and predicted heat generation associated with different workloads, to maintain a target temperature and heat distribution in the data center 100.” Paragraph 0080) (Hyser: see the computer cooling resources adjustment instructions to meet the performance requirements 425 and the sends cooling resource adjustment instructions 440 as illustrated in figure 4) Hyser and Balle are combinable for the same rationale as set forth above with respect to claim 2. Claim 4: The cited prior art describes the apparatus of claim 2, wherein the programmable circuitry is to: determine an operational parameter of the cooling distribution unit; and (Hyser: see the access computer cooling state 410 as illustrated in figure 4; “Referring now to FIG. 1, cooling state information accessor 145 accesses cooling state information 150 of cooling resources 155A-155F associated with the one or more servers 115A-115J. As described herein, the cooling state information includes, but is not limited to, cooling conditions 160 associated with the cooling resources 155A-155F. In one embodiment, the cooling state information accessor 145 accesses cooling conditions 160, garnering information as to which cooling resources 155A-155F are currently powered on and off. For example, cooling state information 150 in the form of cooling conditions 160 may show that all cooling resources 155A-155F are powered on and running at a medium level on a scale ranging from low level to high level.” Col. 6, lines 41-53) cause a schedule of performance of the workload to be adjusted based on one or more of the operational parameter of the cooling distribution unit, the coolant parameter, or the adjustment to the coolant parameter. (Hyser: “Further, FIG. 1 shows workload repositioning instructions 175 being generated by workload repositioning instruction generator 170. These workload repositioning instructions 175 may comprise, but are not limited to, instructions to reposition workload on the one or more servers 115A-115J to meet performance requirements 140 and powering on and/or off servers in response to the repositioning of workloads on the one or more servers 115A-115J.” col. 4, lines 54-61; “Based upon accessed information concerning workload placement, cooling conditions of the servers and the air conditioning units, and performance requirements of the data center, the WCMHM 100 may cause workload to be migrated to different servers of a data center and cause air conditioning units to be turned up and/or down to meet SLAs while also conserving overall resources.” Col. 5, lines 58-64) Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 9,003,003 (Hyser) in view of U.S. Patent Application Publication No. 2019/0069433 (Balle) and further in view of U.S. Patent Application Publication No. 2009/0265045 (Coxe). Claim 12: Hyser does not explicitly describe a heat influx or comparison as described below. However, Balle teaches the heat influx and Coxe teaches the comparison as described below. 
The cited prior art describes the non-transitory machine readable storage medium of claim 11, wherein the instructions cause the programmable circuitry to: determine heat influx associated with the first compute device based on the first workload; (Balle: “In some embodiments, the orchestrator server 1520 may generate a map of heat generation in the data center 100 using telemetry data (e.g., temperatures, fan speeds, etc.) reported from the sleds 400 and allocate resources to managed nodes as a function of the map of heat generation and predicted heat generation associated with different workloads, to maintain a target temperature and heat distribution in the data center 100.” Paragraph 0080) (Hyser: see the workloads 130A-130E and the servers 115A-115J as illustrated in figure 1; see the access computer workload state information 405 as illustrated in figure 4) determine heat influx associated with the second compute device based on the second workload; (Balle: “In some embodiments, the orchestrator server 1520 may generate a map of heat generation in the data center 100 using telemetry data (e.g., temperatures, fan speeds, etc.) reported from the sleds 400 and allocate resources to managed nodes as a function of the map of heat generation and predicted heat generation associated with different workloads, to maintain a target temperature and heat distribution in the data center 100.” Paragraph 0080) (Hyser: see the workloads 130A-130E and the servers 115A-115J as illustrated in figure 1; see the access computer workload state information 405 as illustrated in figure 4) perform a comparison of the heat influx associated with the first compute device to the heat influx associated with the second compute device; and (Coxe: “The element cooling demands for all elements in a particular thermal region are compared with each other, and the higher element cooling demand is determined in block 1208. The region cooling demand for the particular region is set to substantially equal the higher element cooling demand in block 1210.” Paragraph 0058) (Balle: “In some embodiments, the orchestrator server 1520 may generate a map of heat generation in the data center 100 using telemetry data (e.g., temperatures, fan speeds, etc.) reported from the sleds 400 and allocate resources to managed nodes as a function of the map of heat generation and predicted heat generation associated with different workloads, to maintain a target temperature and heat distribution in the data center 100.” Paragraph 0080) (Hyser: see the workloads 130A-130E and the servers 115A-115J as illustrated in figure 1; see the access computer workload state information 405 as illustrated in figure 4) determine the cooling parameter based on the comparison. (Coxe: “The element cooling demands for all elements in a particular thermal region are compared with each other, and the higher element cooling demand is determined in block 1208. The region cooling demand for the particular region is set to substantially equal the higher element cooling demand in block 1210.” Paragraph 0058) One of ordinary skill in the art would have recognized that applying the known technique of Hyser, namely, managing computer resources, with the known techniques of Balle, namely, server control based on workloads, and the known techniques of Coxe, namely, controlling an information handling system. would have yielded predictable results and resulted in an improved system. 
Accordingly, applying the teachings of Hyser to control computer resources based on various inputs to the teachings of Balle to use a heat map to control servers and the teachings of Coxe to control information handling system components based on various control mechanisms would have been recognized by those of ordinary skill in the art as resulting in an improved data center control system (i.e., the combination of references provides for computer resource control based on various inputs and control mechanisms based on the teachings of computer resource control based on workload, service levels, and coolant data in Hyser and based on the teachings of controlling servers based on heat data in Balle and the teachings of controlling servers based on data comparisons in Coxe). Claims 17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2024/0040753 (Khan) in view of U.S. Patent Application Publication No. 2009/0265045 (Coxe). Claim 17: The cited prior art describes an apparatus comprising: (Khan: see the computing system 100 as illustrated in figure 1) interface circuitry; (Khan: see the I/O interfaces 115 and communications unit 111 as illustrated in figure 1) computer readable instructions; and (Khan: see the programs 114 in the persistent storage 106 as illustrated in figure 1) programmable circuitry to instantiate: (Khan: see the processors 103 as illustrated in figure 1) telemetry analysis circuitry to (Khan: ) generate a heatmap associated with an environment based on (Khan: see the heat map as illustrated in figure 6C; “In step 803, using the thermal imaging data and temperature information collected in step 801, a heat map can be generated by mapping temperature data to corresponding hardware units observed by the imaging system(s) 515 to be emitting the heat captured as part of the thermal imaging data. The map can be color coded to the temperature information of the thermal images taken by the imaging system(s) 515.” Paragraph 0085) outputs of sensors in the environment, (Khan: “Embodiments of the computing environment 500 of a datacenter 600 may include a plurality of sensors 517 positioned throughout the datacenter. Sensors 517 may be incorporated as part of the imaging system 515 or may be a separate device that is separate from the imaging system 515. Sensors 517 may be described as a device that detects or measures a physical property and records, indicates or otherwise responds to a physical stimulus, including heat, light, sound, pressure, magnetism, or motion. As heat is generated by one or more hardware units (and observed by the imaging system 515), sensors 517 positioned throughout the datacenter 600 may obtain information describing the heat exhausted by the one or more hardware units. In an exemplary embodiments sensors 517 may be a temperature sensor and may be capable of measuring temperature data at a specific location within the datacenter 600.” Paragraph 0068) the environment including a first compute device and a second compute device, (Khan: see the blades 510a-510n in the server rack 519 as illustrated in figure 5) one or more of the sensors associated with the first compute device and (Khan: see the blades 510a-510n and sensors 517 as illustrated in figure 5; “Sensors 517 may record temperature readings of specific server blades 510 and/or server racks 519 within the datacenter 600 as well as the time the temperature reading was taken by the sensor 517. 
Embodiments of sensors 517 may report the collected temperature data measured by the sensors 517 to a temperature management system 501.” Paragraph 0068) one or more of the sensors associated with the second compute device; and (Khan: see the blades 510a-510n and sensors 517 as illustrated in figure 5; “Sensors 517 may record temperature readings of specific server blades 510 and/or server racks 519 within the datacenter 600 as well as the time the temperature reading was taken by the sensor 517. Embodiments of sensors 517 may report the collected temperature data measured by the sensors 517 to a temperature management system 501.” Paragraph 0068) orchestration circuitry to: (Khan: see the scoring module 511 as illustrated in figure 5) determine a heat influx associated with the first compute device based on the heatmap; (Khan: see the final score of the rack and blades in the table 705 as illustrated in figure 7C; “In step 809, the thermal imaging data and/or temperature information collected by imaging system 515 and/or sensors 517 can be inputted in to a scoring module 511, along data collected by a resource monitor 509 describing frequency of access by each hardware unit of the datacenter 600 and the number of mission critical applications being deployed by each hardware unit.” Paragraph 0087; “FIG. 7C provides an example of a scoring algorithm output table 705 generated by a scoring module 511 using the weighting table 701 of FIG. 7A. In the example depicted, rack 20 of a datacenter may comprise 4 server blades 510 referred to as R20B0 to R20B3. At time T1 a temperature for rack 20 blade 0 is identified by the imaging module 507 as being 32° C. If the frequency of access of R20B0 is 650 times within a selected time period and R20B0 hosts 6 mission critical applications, then the risk assessment of R20B0 can be calculated in accordance with the weighting table 701 as having a final weighted score of 3 (1+1+1). Final weighted scores can be calculated for each of the remaining server blades of server rack 20.” Paragraph 0082) determine heat influx associated with the second compute device based on the heatmap; (Khan: see the final score of the rack and blades in the table 705 as illustrated in figure 7C; “In step 809, the thermal imaging data and/or temperature information collected by imaging system 515 and/or sensors 517 can be inputted in to a scoring module 511, along data collected by a resource monitor 509 describing frequency of access by each hardware unit of the datacenter 600 and the number of mission critical applications being deployed by each hardware unit.” Paragraph 0087; “FIG. 7C provides an example of a scoring algorithm output table 705 generated by a scoring module 511 using the weighting table 701 of FIG. 7A. In the example depicted, rack 20 of a datacenter may comprise 4 server blades 510 referred to as R20B0 to R20B3. At time T1 a temperature for rack 20 blade 0 is identified by the imaging module 507 as being 32° C. If the frequency of access of R20B0 is 650 times within a selected time period and R20B0 hosts 6 mission critical applications, then the risk assessment of R20B0 can be calculated in accordance with the weighting table 701 as having a final weighted score of 3 (1+1+1). Final weighted scores can be calculated for each of the remaining server blades of server rack 20.” Paragraph 0082) Khan does not explicitly describe coolant control as described below. However, Coxe teaches the coolant control as described below. 
determine a first coolant parameter for fluid in the environment based on the respective heat influxes; (Coxe: see the set cooling device cooling level to the higher zone cooling level 1222 and the set the region cooling demand to the higher element cooling demand 1210 as illustrated in figure 12; “Each element in the information handling system 100 can physically reside in a thermal region within the information handling system 100 that is cooled by one or more zones of the cooling devices 132 through 134. The management controller 120 can logically associate each particular thermal region with the zone or zones that most effectively cool that particular thermal region. The management controller 120 can determine an element cooling demand for each element (e.g., as communicated by the element over the system communication bus 116), and set a region cooling demand for each particular region that corresponds to the higher element cooling demand from among the elements that are associated with each thermal region” paragraph 0051) cause a cooling distribution unit to provide the fluid having the first coolant parameter for cooling the first compute device and the second compute device. (Coxe: see the drive cooling device cooling level to all cooling devices 1226 as illustrated in figure 12) One of ordinary skill in the art would have recognized that applying the known technique of Khan, namely, data center temperature control and management, with the known techniques of Coxe, namely, controlling an information handling system. would have yielded predictable results and resulted in an improved system. Accordingly, applying the teachings of Khan to control data center temperature based on various inputs to the teachings of Coxe to control information handling system components based on various control mechanisms would have been recognized by those of ordinary skill in the art as resulting in an improved data center control system (i.e., the combination of references provides for data center temperature control based on various inputs based on the teachings of data center temperature control based on heat data in Khan and based on the teachings controlling servers based on data comparisons in Coxe). Claim 19: Khan does not explicitly describe coolant control as described below. However, Coxe teaches the coolant control as described below. The cited prior art describes the apparatus of claim 17, further including sensor circuitry to: transmit a first output indicative of a second coolant parameter of the fluid at a first location, the first location including the first compute device; and (Coxe: “The system management controller 120 can receive temperature data, cooling demand data, other suitable data, or any combination thereof, from the data processing systems 102 through 104, and the I/O controller 110 over the system communication bus 116.” Paragraph 0033; “In a particular embodiment of the second aspect, the information handling system can include a blade server, and in a further embodiment, the blade server can include a particular thermal sensor of the thermal sensors.” Paragraph 0066) (Khan: see the blades 510a-510n and sensors 517 as illustrated in figure 5; “Sensors 517 may record temperature readings of specific server blades 510 and/or server racks 519 within the datacenter 600 as well as the time the temperature reading was taken by the sensor 517. 
Embodiments of sensors 517 may report the collected temperature data measured by the sensors 517 to a temperature management system 501.” Paragraph 0068) transmit a second output indicative of a third coolant parameter of the fluid at a second location, the second location including the second compute device, (Coxe: “The system management controller 120 can receive temperature data, cooling demand data, other suitable data, or any combination thereof, from the data processing systems 102 through 104, and the I/O controller 110 over the system communication bus 116.” Paragraph 0033; “In a particular embodiment of the second aspect, the information handling system can include a blade server, and in a further embodiment, the blade server can include a particular thermal sensor of the thermal sensors.” Paragraph 0066) (Khan: see the blades 510a-510n and sensors 517 as illustrated in figure 5; “Sensors 517 may record temperature readings of specific server blades 510 and/or server racks 519 within the datacenter 600 as well as the time the temperature reading was taken by the sensor 517. Embodiments of sensors 517 may report the collected temperature data measured by the sensors 517 to a temperature management system 501.” Paragraph 0068) the orchestration circuitry to determine the first coolant parameter based on the second coolant parameter and the third coolant parameter. (Coxe: see the set cooling device cooling level to the higher zone cooling level 1222 and the set the region cooling demand to the higher element cooling demand 1210 as illustrated in figure 12; “Each element in the information handling system 100 can physically reside in a thermal region within the information handling system 100 that is cooled by one or more zones of the cooling devices 132 through 134. The management controller 120 can logically associate each particular thermal region with the zone or zones that most effectively cool that particular thermal region. The management controller 120 can determine an element cooling demand for each element (e.g., as communicated by the element over the system communication bus 116), and set a region cooling demand for each particular region that corresponds to the higher element cooling demand from among the elements that are associated with each thermal region” paragraph 0051) Khan and Coxe are combinable for the same rationale as set forth above with respect to claim 17. Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2024/0040753 (Khan) in view of U.S. Patent Application Publication No. 2009/0265045 (Coxe) and further in view of U.S. Patent Application Publication No. 2022/0117121 (Heydari). Claim 18: Khan and Coxe do not explicitly describe a tank as described below. However, Heydari teaches the tank as described below. The cited prior art describes the apparatus of claim 17, wherein the environment includes a tank including the first compute device and the second compute device. 
(Heydari: see the computing devices 220A-220D in the server box 202 as illustrated in figure 2; “In at least one embodiment, the server tray 202 is an immersive-cooled server tray that may be flooded by the fluid from a cooling manifold.” Paragraph 0065) One of ordinary skill in the art would have recognized that applying the known technique of Khan, namely, data center temperature control and management, with the known techniques of Coxe, namely, controlling an information handling system, and the known techniques of Heydari, namely, data center cooling control, would have yielded predictable results and resulted in an improved system. Accordingly, applying the teachings of Khan to control data center temperature based on various inputs to the teachings of Coxe to control information handling system components based on various control mechanisms and the teachings of Heydari to cool a data center by immersing the servers would have been recognized by those of ordinary skill in the art as resulting in an improved data center control system (i.e., the combination of references provides for data center temperature control based on various inputs and using various mechanisms based on the teachings of data center temperature control based on heat data in Khan and based on the teachings controlling servers based on data comparisons in Coxe and the teachings of cooling servers based on tank immersion in Heydari). Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2024/0040753 (Khan) in view of U.S. Patent Application Publication No. 2009/0265045 (Coxe) and further in view of U.S. Patent No. 9,003,003 (Hyser). Claim 20: Khan and Coxe do not explicitly describe service level objective as described below. However, Hyser teaches the service level objective as described below. The cited prior art describes the apparatus of claim 17, wherein the orchestration circuitry is to determine the first coolant parameter based on a service level objective associated with the first compute device and a service level objective associated with the second compute device. (Hyser: see the access performance requirements associated with the data center 405 as illustrated in figure 4; “said performance requirements including rules associated with a service level agreement to be met in performing the workloads” claim 1; “Further, FIG. 1 shows workload repositioning instructions 175 being generated by workload repositioning instruction generator 170. These workload repositioning instructions 175 may comprise, but are not limited to, instructions to reposition workload on the one or more servers 115A-115J to meet performance requirements 140 and powering on and/or off servers in response to the repositioning of workloads on the one or more servers 115A-115J.” col. 4, lines 54-61; “Based upon accessed information concerning workload placement, cooling conditions of the servers and the air conditioning units, and performance requirements of the data center, the WCMHM 100 may cause workload to be migrated to different servers of a data center and cause air conditioning units to be turned up and/or down to meet SLAs while also conserving overall resources.” Col. 
5, lines 58-64) One of ordinary skill in the art would have recognized that applying the known technique of Khan, namely, data center temperature control and management, with the known techniques of Coxe, namely, controlling an information handling system, and the known techniques of Hyser, namely, managing computer resources, would have yielded predictable results and resulted in an improved system. Accordingly, applying the teachings of Khan to control data center temperature based on various inputs to the teachings of Coxe to control information handling system components based on various control mechanisms and the teachings of Hyser to control computer resources based on various inputs would have been recognized by those of ordinary skill in the art as resulting in an improved data center control system (i.e., the combination of references provides for data center temperature control based on various inputs and using various mechanisms based on the teachings of data center temperature control based on heat data in Khan and based on the teachings controlling servers based on data comparisons in Coxe and the teachings of computer resource control based on service levels in Hyser). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. U.S. Patent Application Publication No. 2012/0216065 describes thermal relationship based workload planning. U.S. Patent Application Publication No. 2007/0260417 describes data center environmental control based on sensed data. U.S. Patent Application Publication No. 2010/0217454 describes dynamic thermal load balancing. Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER E EVERETT whose telephone number is (571)272-2851. The examiner can normally be reached Monday-Friday 8:00 am to 5:00 pm (Pacific). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Fennema can be reached at 571-272-2748. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Christopher E. Everett/Primary Examiner, Art Unit 2117
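
To make the claim-mapping above easier to follow, here is a minimal, hypothetical sketch of the control flow recited in claim 1 and mapped onto Hyser: identify a workload and its service level objective, determine a coolant parameter, then either (a) have the cooling distribution unit adjust the coolant parameter or (b) adjust (e.g., redistribute) the workload so the objective can be met. All names, thresholds, and values are illustrative; they come from neither the application nor the cited references.

```python
# Illustrative sketch only (all names, thresholds, and values are hypothetical):
# the decision recited in claim 1 -- meet the service level objective either by
# adjusting the coolant parameter via the cooling distribution unit (CDU) or by
# adjusting/redistributing the workload.

def manage_cooling(workload, slo_max_temp_c, coolant_temp_c, cdu_headroom_c):
    """Return the action claim 1 recites: adjust cooling, or adjust the workload."""
    if coolant_temp_c <= slo_max_temp_c:
        return {"action": "none", "workload": workload}
    needed_drop_c = coolant_temp_c - slo_max_temp_c
    if needed_drop_c <= cdu_headroom_c:
        # (a) cause the cooling distribution unit to adjust the coolant parameter
        return {"action": "adjust_cooling", "set_coolant_temp_c": slo_max_temp_c}
    # (b) cause the workload to be adjusted (e.g., redistributed to another device)
    return {"action": "redistribute_workload", "workload": workload}

print(manage_cooling("inference-job-42", slo_max_temp_c=45.0,
                     coolant_temp_c=52.0, cdu_headroom_c=10.0))
# {'action': 'adjust_cooling', 'set_coolant_temp_c': 45.0}
```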

Prosecution Timeline

May 08, 2023
Application Filed
Jun 27, 2023
Response after Non-Final Action
Jan 26, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603509
MICROGRID WITH AUTOMATIC LOAD SHARING CONTROL DURING OFF-GRID STANDALONE OPERATION
2y 5m to grant • Granted Apr 14, 2026
Patent 12602032
METHOD AND SYSTEM FOR MANAGING ENTERPRISE DIGITAL AUTOMATION PROCESSES
2y 5m to grant • Granted Apr 14, 2026
Patent 12596352
System and method for controlling a production plant consisting of a plurality of plant parts, in particular a production plant for producing industrial goods such as metallic semi-finished products
2y 5m to grant • Granted Apr 07, 2026
Patent 12596338
METHOD AND APPARATUS FOR PERFORMING OPTIMAL CONTROL
2y 5m to grant • Granted Apr 07, 2026
Patent 12585251
METHOD FOR THE DISTRIBUTED CALCULATION OF COMPUTATIONAL TASKS
2y 5m to grant • Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 99% (+23.6%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 830 resolved cases by this examiner. Grant probability derived from career allow rate.
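
A sketch of one plausible way projection tiles like these could be combined into a single estimate. The additive interview adjustment and the 99% cap reproduce the numbers shown above, but they are assumptions about the model, not a documented formula.

```python
# Sketch only: one plausible way a dashboard could combine these tiles into a
# single projection. The additive interview adjustment and the 99% cap
# reproduce the numbers shown above but are assumptions, not a documented model.

def projected_grant_probability(career_allow_rate: float,
                                interview_lift: float,
                                with_interview: bool) -> float:
    p = career_allow_rate + (interview_lift if with_interview else 0.0)
    return min(p, 0.99)  # cap near-certain outcomes rather than report 100%+

baseline = projected_grant_probability(0.834, 0.236, with_interview=False)
boosted = projected_grant_probability(0.834, 0.236, with_interview=True)
print(f"Baseline: {baseline:.0%}, with interview: {boosted:.0%}")  # 83%, 99%
```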
