Prosecution Insights
Last updated: April 19, 2026
Application No. 18/739,867

MANAGEMENT SYSTEM FOR PROVISIONING SERVER RESOURCES OF A DATA CENTER

Non-Final OA §103
Filed: Jun 11, 2024
Examiner: ZAMAN, FAISAL M
Art Unit: 2175
Tech Center: 2100 — Computer Architecture & Software
Assignee: Super Micro Computer, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
Grant Probability With Interview: 81%

Examiner Intelligence

Career Allow Rate: 67%, above average (614 granted / 917 resolved; +12.0% vs TC avg)
Interview Lift: +14.3% (moderate) for resolved cases with interview
Typical Timeline: 2y 10m average prosecution; 43 applications currently pending
Career History: 960 total applications across all art units

Statute-Specific Performance

§101: 1.9% (-38.1% vs TC avg)
§103: 63.4% (+23.4% vs TC avg)
§102: 17.5% (-22.5% vs TC avg)
§112: 11.6% (-28.4% vs TC avg)
Tech Center averages are estimates; based on career data from 917 resolved cases.
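Read as percentage-point differences, the per-statute figures above all imply the same Tech Center baseline. A quick consistency check (an interpretation of the dashboard's numbers, not an official USPTO figure):

```python
# Examiner allow rate after each rejection type, paired with the stated
# delta vs the Tech Center average (both assumed to be percentage points).
examiner = {"101": (1.9, -38.1), "103": (63.4, +23.4),
            "102": (17.5, -22.5), "112": (11.6, -28.4)}

# Implied Tech Center average = examiner rate minus delta.
tc_avg = {statute: round(rate - delta, 1)
          for statute, (rate, delta) in examiner.items()}
# Every statute resolves to the same 40.0% baseline, so the four deltas
# are internally consistent with a single Tech Center average.
```

Under that reading, this examiner's §103 allow rate of 63.4% sits well above a roughly 40% Tech Center norm, which supports the favorable outlook for an obviousness-only rejection.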

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Applicant’s election without traverse of Species I (Claims 1-5 and 11-15) in the reply filed on 1/16/26 is acknowledged.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5 and 11-15 are rejected under 35 U.S.C. 103 as being unpatentable over Tantawi et al. (U.S. Patent Application Publication Number 2023/0418687), Jia et al. (U.S. Patent Application Publication Number 2024/0249045), and Misra et al. (U.S. Patent Application Publication Number 2023/0409393).
Regarding Claim 1, Tantawi discloses a method of providing a server resource in a data center, the method comprising: training a machine learning model using an initial training dataset comprising temperature information, server positions, and power consumption information of server computers of different data centers (paragraphs 0025-0028; i.e., the machine learning model may be trained using data from multiple other data centers; this training data may include information about the temperature of each server, the power consumption of each server, and the server position; specifically with regard to server position, an energy profile is created for each data center that includes information about the temperature in various areas of the data center building, which can include the location of individual servers); fine-tuning the machine learning model using fine-tuning data comprising temperature information, server positions, and power consumption information of a plurality of server computers of the data center (paragraph 0028; i.e., the same information [e.g., temperature, positioning, and power consumption] is learned for servers in the particular data center); for each of the predictions (paragraph 0034; i.e., the efficiency rank calculations may be performed repeatedly to update the ranks), using the machine learning model to generate a predicted difference in power consumption of the data center (paragraphs 0030 and 0032; i.e., generating an efficiency rank, which includes a predicted power consumption, for each of the available servers; this would contribute to the overall power consumption of the data center); comparing predicted differences in power consumption of the data center to identify a selected server computer among the plurality of server computers that is powered OFF but when powered ON will result in a lowest power consumption of the data center relative to powering ON other server computers of the plurality of server computers (paragraphs 0017 and 0034-0035; i.e., the efficiency ranks of each of the servers are calculated and then compared; the new workload is then allocated to the server with the greatest efficiency rank; with respect to the feature of powering on the selected server from a group of servers that were previously powered off, certain of the servers may have previously been powered off [paragraph 0017]; therefore, the allocation of the new workload to the selected server may include powering it on first); and starting the selected server computer by powering ON the selected server computer (paragraphs 0017 and 0035; i.e., assuming the selected server was previously powered off, then allocating the new workload to it would include powering it on first).

Tantawi does not expressly disclose after the machine-learning model has been fine-tuned, sending prediction requests to the machine learning model, each of the prediction requests including at least a position in the data center of a server computer of the plurality of server computers that is powered OFF.

In the same field of endeavor (e.g., data center machine learning models), Jia teaches after the machine-learning model (Figure 1, item 106) has been fine-tuned (paragraph 0039; i.e., the machine learning model 106 can be fine-tuned [it could have already generated the prediction models 114] before it receives the power prediction requests 112), sending prediction requests (Figure 1, item 112, paragraph 0039) to the machine learning model, each of the prediction requests including at least an identification of a server computer (Figure 1, item 104A) of the plurality of server computers that is powered OFF (paragraphs 0043 and 0101; i.e., the prediction request 112 can include information identifying a particular server 104A for which it wants the power consumption predicted; further, as stated above, Tantawi discloses that the servers may be powered off).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Jia’s teachings of data center machine learning models with the teachings of Tantawi, for the purpose of saving resources of the machine learning model. More specifically, by only predicting the power consumption of the servers when a request is received, the machine learning model will only use the necessary power to make the prediction at those times, rather than continuously.

Also in the same field of endeavor (e.g., server control techniques in a data center), Misra teaches an identification of a server computer (Figure 1, item 104) of a plurality of server computers in a data center (Figure 1, item 100) includes at least a position in the data center (paragraph 0024). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Misra’s teachings of server control techniques in a data center with the teachings of Tantawi, for the purpose of being able to identify areas of the data center that are highly loaded (and thereby generating heat) and route new workload requests to areas that have less load.

Regarding Claim 2, Tantawi discloses wherein starting the selected server computer includes provisioning an operating system to the selected server computer (paragraph 0018).

Regarding Claim 3, Tantawi discloses wherein starting the selected server computer includes sending a signal to a Baseboard Management Controller (BMC) of the selected server computer (paragraph 0016; the reference states that each server may contain a controller but does not expressly state that it is a Baseboard Management Controller; the examiner takes Official Notice that transmitting a signal to a Baseboard Management Controller in a server to start it is well known in the art and is commonly used for the purpose of providing critical "lights-out" management capabilities, allowing administrators to remotely power on, reboot, and troubleshoot servers from any location).

Regarding Claim 4, Tantawi discloses wherein the power consumption information of the plurality of server computers includes power consumption of corresponding racks that contain the plurality of server computers (paragraph 0025; i.e., the energy efficiency profile includes power consumption of a plurality of servers located in a rack).

Regarding Claim 5, Tantawi discloses wherein the predicted differences in power consumption of the data center are received from a regressor of the machine learning model (paragraph 0028).
Regarding Claim 11, Tantawi discloses a computer system comprising at least one processor (Figure 1, item 102) and a memory (Figure 1, item 104), the memory storing instructions (paragraph 0060) that when executed by the at least one processor cause the computer system to: train a machine learning model to predict power consumption of a data center using an initial training dataset comprising temperature information, server positions, and power consumption information of server computers of a plurality of different data centers (paragraphs 0025-0028; i.e., the machine learning model may be trained using data from multiple other data centers; this training data may include information about the temperature of each server, the power consumption of each server, and the server position; specifically with regard to server position, an energy profile is created for each data center that includes information about the temperature in various areas of the data center building, which can include the location of individual servers); fine-tune the machine learning model using fine-tuning data comprising temperature information, server positions, and power consumption information of a plurality of server computers of the data center (paragraph 0028; i.e., the same information [e.g., temperature, positioning, and power consumption] is learned for servers in the particular data center); for each of the predictions (paragraph 0034; i.e., the efficiency rank calculations may be performed repeatedly to update the ranks), use the machine learning model to generate a predicted difference in power consumption of the data center (paragraphs 0030 and 0032; i.e., generating an efficiency rank, which includes a predicted power consumption, for each of the available servers; this would contribute to the overall power consumption of the data center); compare predicted differences in power consumption of the data center to identify a selected server computer among the plurality of server computers that is powered OFF but when powered ON will result in a lowest power consumption of the data center relative to powering ON other server computers of the plurality of server computers (paragraphs 0017 and 0034-0035; i.e., the efficiency ranks of each of the servers are calculated and then compared; the new workload is then allocated to the server with the greatest efficiency rank; with respect to the feature of powering on the selected server from a group of servers that were previously powered off, certain of the servers may have previously been powered off [paragraph 0017]; therefore, the allocation of the new workload to the selected server may include powering it on first); and start the selected server computer by powering ON the selected server computer (paragraphs 0017 and 0035; i.e., assuming the selected server was previously powered off, then allocating the new workload to it would include powering it on first).

Tantawi does not expressly disclose after the machine learning model is fine-tuned, send prediction requests to the machine learning model, each of the prediction requests including at least a position in the data center of a server computer of the plurality of server computers that is powered OFF.
In the same field of endeavor, Jia teaches after the machine-learning model (Figure 1, item 106) is fine-tuned (paragraph 0039; i.e., the machine learning model 106 can be fine-tuned [it could have already generated the prediction models 114] before it receives the power prediction requests 112), send prediction requests (Figure 1, item 112, paragraph 0039) to the machine learning model, each of the prediction requests including at least an identification of a server computer (Figure 1, item 104A) of the plurality of server computers that is powered OFF (paragraphs 0043 and 0101; i.e., the prediction request 112 can include information identifying a particular server 104A for which it wants the power consumption predicted; further, as stated above, Tantawi discloses that the servers may be powered off).

Also in the same field of endeavor, Misra teaches an identification of a server computer (Figure 1, item 104) of a plurality of server computers in a data center (Figure 1, item 100) includes at least a position in the data center (paragraph 0024). The motivation discussed above with regard to Claim 1 applies equally well to Claim 11.

Regarding Claim 12, Tantawi discloses wherein the instructions stored in the memory of the computer system, when executed by the at least one processor of the computer system, cause the computer system to start the selected server computer by provisioning an operating system to the selected server computer (paragraph 0018).
Regarding Claim 13, Tantawi discloses wherein the instructions stored in the memory of the computer system, when executed by the at least one processor of the computer system, cause the computer system to start the selected server computer by sending a signal to a Baseboard Management Controller (BMC) of the selected server computer (paragraph 0016; the reference states that each server may contain a controller but does not expressly state that it is a Baseboard Management Controller; the examiner takes Official Notice that transmitting a signal to a Baseboard Management Controller in a server to start it is well known in the art and is commonly used for the purpose of providing critical "lights-out" management capabilities, allowing administrators to remotely power on, reboot, and troubleshoot servers from any location).

Regarding Claim 14, Tantawi discloses wherein the power consumption information of the plurality of server computers includes power consumption of corresponding racks that contain the plurality of server computers (paragraph 0025; i.e., the energy efficiency profile includes power consumption of a plurality of servers located in a rack).

Regarding Claim 15, Tantawi discloses wherein the predicted differences in power consumption of the data center are received from a regressor of the machine learning model (paragraph 0028).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure because each reference discloses a method for using a machine learning model to determine which of a plurality of servers to power on.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FAISAL M ZAMAN whose telephone number is (571)272-6495. The examiner can normally be reached Monday - Friday, 8 am - 5 pm, alternate Fridays. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew J. Jung, can be reached at 571-270-3779. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FAISAL M ZAMAN/
Primary Examiner, Art Unit 2175
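As characterized in the rejection, the claimed method trains and fine-tunes a model on temperature, position, and power data, queries it once per powered-OFF server, and powers on the server whose activation adds the least to data-center power draw. A minimal sketch of that selection loop (all names and the model interface are hypothetical, not taken from the application or the cited references):

```python
# Hypothetical sketch of the claimed selection loop: for each powered-OFF
# server, request a prediction of the change in data-center power draw if
# that server were powered ON, then start the server with the smallest
# predicted increase.

def select_server_to_power_on(servers, predict_power_delta):
    """servers: list of dicts with 'id', 'position', and 'powered_on' keys.
    predict_power_delta: callable standing in for the fine-tuned model; it
    takes a server's position in the data center and returns the predicted
    difference in data-center power consumption if that server is powered ON."""
    candidates = [s for s in servers if not s["powered_on"]]
    if not candidates:
        return None
    # One prediction request per powered-OFF server; each request carries
    # at least the server's position in the data center.
    predicted = {s["id"]: predict_power_delta(s["position"])
                 for s in candidates}
    # Compare the predicted differences and pick the lowest-impact server.
    best_id = min(predicted, key=predicted.get)
    selected = next(s for s in candidates if s["id"] == best_id)
    selected["powered_on"] = True  # stand-in for signaling the server's BMC
    return selected
```

With a toy model such as `lambda pos: pos[0] * 10 + pos[1]`, the powered-OFF server at the position yielding the smallest predicted delta is returned and marked powered on; the BMC signal of claims 3 and 13 is reduced here to a flag update.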

Prosecution Timeline

Jun 11, 2024: Application Filed
Feb 02, 2026: Non-Final Rejection (§103)
Mar 27, 2026: Interview Requested
Apr 08, 2026: Examiner Interview Summary
Apr 08, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12578780: CIRCUIT SLEEP METHOD AND SLEEP CIRCUIT (2y 5m to grant; granted Mar 17, 2026)
Patent 12572490: LINKS FOR PLANARIZED DEVICES (2y 5m to grant; granted Mar 10, 2026)
Patent 12560993: POWER MANAGEMENT OF DEVICES WITH DIFFERENTIATED POWER SCALING BASED ON RELATIVE POWER BENEFIT ESTIMATION (2y 5m to grant; granted Feb 24, 2026)
Patent 12561267: Multiple Independent On-chip Interconnect (2y 5m to grant; granted Feb 24, 2026)
Patent 12562599: Contactless Power Feeder (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 67%
With Interview: 81% (+14.3%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 917 resolved cases by this examiner. Grant probability derived from career allow rate.
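The 81% with-interview figure appears to be the 67% career allow rate plus the +14.3% interview lift treated as percentage points (an inference from the displayed numbers, not a documented formula):

```python
base_rate = 67.0        # career allow rate, percent
interview_lift = 14.3   # interview lift, percentage points
with_interview = base_rate + interview_lift  # 81.3, displayed as 81%
```

If the lift were instead multiplicative (67% x 1.143 = 76.6%), the displayed 81% would not be reproduced, which is why the additive reading is assumed here.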
