DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claims 1-20 are pending and rejected below.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 2, 5, 7, 8, 13, 14, and 16 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as anticipated by MacNamara, US 20180335824 (IDS filed 05/07/25).
Regarding claim 1, MacNamara teaches a method comprising:
determining, by a computing system (Fig. 5, controller 500), an expected scale of a network device (Fig. 6, network interface 574), wherein the network device comprises a plurality of hardware components (Figs. 1-2 and 4, abstract, par. 11 and 95 - controller performs demand scaling);
comparing, by the computing system, the expected scale to a maximum scale of the network device (Fig. 6 and Fig. 7, block 706, thresholds/policy for scale up/down; par. 102-103 and par. 136 - in block 606, the demand scaling engine calculates a current capability for the processor and determines whether to scale up or down based on performance headroom; the network demand data is compared to a scale-up criterion);
adjusting, by the computing system and based on the comparison, power consumption of the network device (Fig. 6, blocks 610 and 612, par. 103-105 - scale up or down based on the criterion; see also Fig. 8);
wherein adjusting power consumption includes off-lining one or more of the hardware components of the network device (par. 26 and 107 - the orchestrator delivers a general power management policy; the policy from the orchestrator may affect logical blocks, such as instructions to activate or deactivate resources 704, as well as thresholds and policies for when to scale up and scale down in block 706).
Claim 2. MacNamara teaches the method of claim 1, wherein the hardware components include a plurality of CPU cores (Fig. 4, cores 412A-D), and wherein adjusting power consumption of the network device further includes: reducing a frequency at which at least one of the CPU cores is clocked (par. 105 - scaling down processor speed, i.e., reducing frequency; see also par. 143 and 147).
Claim 3. MacNamara teaches the method of claim 1, wherein the hardware components include a plurality of CPU cores (Fig. 4, cores 412A-D), and wherein offlining one or more of the hardware components includes: offlining one of the CPU cores in the network device (see par. 24 - CPU enters energy-efficient P or C states).
Claim 4. MacNamara teaches the method of claim 1, wherein the hardware components include a plurality of memory modules (par. 36, processor having cores along with memory for the cores), and wherein offlining one or more of the hardware components includes: offlining one or more of the memory modules in the network device (see par. 24 - CPU enters energy-efficient P or C states, thus putting memory offline).
Claim 5. MacNamara teaches the method of claim 1, further comprising:
determining, by the computing system, an updated expected scale of the network device; and further adjusting, by the computing system and based on the updated expected scale, power consumption of the network device (par. 96 - demand scaling block 520 instructs the core frequency to step up or down based on changes to the incoming traffic load, to proactively meet expected changes in demand).
Claim 6. MacNamara teaches the method of claim 5, wherein further adjusting power consumption includes: onlining one or more of the hardware components (par. 107, the policy from the orchestrator may affect logical blocks, such as instructions to activate resources).
Claim 7. MacNamara teaches the method of claim 1, wherein the computing system is included within the network device (Fig. 4, par. 69, In the embodiment depicted, platforms 402A, 402B, and 402C, along with a data center management platform 406 and data analytics engine 404 are interconnected via network 408. In other embodiments, a computer system may include any suitable number of (i.e., one or more) platforms).
Claim 8. MacNamara teaches the method of claim 1, wherein the network device is a router (Fig. 1, fabric 170, and 91), and wherein determining the expected scale of the router includes: determining information about convergence capabilities of the router (par. 19-22 - logic to carry out the demand scaling function includes a comprehensive assessment of the different demands on the network and hardware resources and their capabilities, and provides scaling factors to achieve the best power saving).
Regarding claim 10, MacNamara teaches a system (Figs. 1-4) comprising: a storage device; and processing circuitry having access to the storage device (par. 54 and 69) and configured to perform the method as recited in claim 1; claim 10 is therefore rejected accordingly.
Claims 11-17 repeat the limitations of claims 2-8, respectively, and are therefore rejected accordingly.
Regarding claim 19, MacNamara teaches non-transitory computer media comprising instructions (par. 54) to perform the method as recited in claim 1; claim 19 is therefore rejected accordingly.
Claim 20 repeats the limitations of claim 5 and is therefore rejected accordingly.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over MacNamara in view of Lee, US 20160239074 A1 (submitted in IDS filed 05/07/25).
Regarding claim 9, MacNamara teaches the method of claim 8, wherein the expected scale is determined according to at least one policy to predict the expected scale (par. 96, expected demand scaling) based on at least one of a configuration associated with the router, specifications associated with the router, switching operations, CPU utilization, core utilization, or memory utilization (par. 19-23 and 107 - logic to carry out the demand scaling function includes a comprehensive assessment of the different demands on the network and hardware resources and their capabilities, and provides scaling factors to achieve the best power saving based on a power management policy).
However, MacNamara does not specifically teach applying a machine learning model to predict the expected scaling of resources (controlling policy). In the analogous art of power management, Lee teaches using a machine-learning-based performance and energy module to identify workload behavior and predict optimal power control of the resources (par. 25, 28, and 121) to achieve an optimal power configuration.
It would have been obvious to one having ordinary skill in the art, before the effective filing date, to modify MacNamara to use a machine learning model to predict the expected scale (controlling policy) in order to save energy without adversely affecting or sacrificing performance (Lee, par. 29).
Claim 18 repeats the limitation of claim 9 and therefore is rejected accordingly.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Kumar Addepalli, 20200250510 teaches Systems and Methods for Artificial Intelligence with a Flexible Hardware Processing Framework.
Yamaguchi, 20130028090, teaches routers and network transfer technology for transferring data while saving power and reducing latency.
Minwalla, 20190258756 A1, teaches a management interface to facilitate communication with a router having machine learning algorithms to obtain/create a model for parameter configurations and settings to successfully configure network devices such as routers.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KIM HUYNH whose telephone number is (571)272-4147. The examiner can normally be reached M-Th 5:30am-3:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JAWEED ABBASZADEH can be reached at (571)270-1640. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KIM HUYNH/Primary Patent Examiner, Art Unit 2176