DETAILED ACTION
This Office action is in response to application 18/751,429, filed on June 24, 2024.
Claims 1-20 are presented for examination.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on July 16, 2024 was in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement was considered by the Examiner.
Claim Objections
Claim 19 is objected to because of the following informalities: the Examiner advises that "non-transitory computer-readable storage medium" be changed to "The non-transitory computer-readable storage medium". Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-8 and 10-20 are rejected under 35 U.S.C. 103 as being unpatentable over Grushka et al. (US 2023/0246981) in view of Hermoni et al. (US 10,846,141).
In regard to claim 1, Grushka et al. teach a method, comprising:
performing, by a computer system (the evaluation 120, para. 47, fig. 4), a first diagnostic test on a distributed computing system based on a first restriction level (resource utilization threshold 406 that can be utilized to analyze various resource requirement recommendations, para. 48, fig. 4) indicating resource consumption of a first set of hardware units of the distributed computing system (live deployment environment to accurately evaluate resource recommendations, para. 4, 12), the distributed computing system comprising a plurality of computing devices with processing and memory resources (the processing units of the processing systems are distributed, para. 66);
generating a first log comprising an output of the first diagnostic test at the first restriction level of the distributed computing system (the simulated environment 102 may then generate an evaluation 120 for the resource requirement recommendation, fig. 1D, para. 40);
configuring a first diagnostic tool (predictive model, fig. 1D, 106, para. 4, 31) to emulate the first diagnostic test (model can be trained using historical usage data of computing resources from past software deployments, para. 9); and
applying the first diagnostic tool to obtain an output of the first diagnostic test at a second restriction level of the first set of hardware units (many iterations and situations can be simulated, para. 31), the second restriction level being higher than the first restriction level (the simulated computing environment 102 can subsequently analyze the operation of instances 108 to generate an evaluation defining various metrics of the instances 108 such as resource underutilization and overutilization, para. 32-35; in a test scenario, values can be slightly modified to be plus of their previous simulation, para. 35).
Grushka et al. do not explicitly teach, but Hermoni et al. teach, a first set of parameter values and a second set of parameter values for applying to the first diagnostic tool (values of the one or more parameters of a first entry of the log data may be used to create training data and testing data for the AI system … log data may include simulated log data, col. 4 lines 37-45; parameters such as bandwidth, latency, jitter, etc., as well as processing power, memory, storage, etc., col. 7 lines 20-32; scanning the log data for parameters, col. 4 line 65 through col. 5 line 20).
It would have been obvious to modify the method of Grushka et al. by adding the training and testing data of Hermoni et al. A person of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make the modification because it would aid in collecting parameters from log data for use in creating training and testing data for the AI system (col. 4 lines 37-45).
In regard to claim 2, Grushka et al. teach the method of claim 1, further comprising:
performing a second diagnostic test on the distributed computing system based on the first restriction level indicating resource consumption of a second set of hardware units of the distributed computing system (test deployments for evaluating a resource requirement recommendation can be performed quickly, and thus, many iterations and situations can be simulated, para. 31, fig. 1A);
generating a second log comprising an output of the second diagnostic test at a third restriction level (the simulated environment 102 may then generate an evaluation 120 for the resource requirement recommendation, fig. 1D, para. 40; it is noted that different simulations would generate multiple resource requirement recommendations, which equate to the first and second logs);
configuring a second diagnostic tool (predictive model, fig. 1D, 106, para. 4, 31, it is noted that there can be more than one model within the system) to emulate the second diagnostic test (model can be trained using historical usage data of computing resources from past software deployments, para. 9); and
applying the second diagnostic tool to obtain an output of the second diagnostic test at a fourth restriction level of the second set of hardware units (many iterations and situations can be simulated, para. 31), the fourth restriction level being higher than the third restriction level (the simulated computing environment 102 can subsequently analyze the operation of instances 108 to generate an evaluation defining various metrics of the instances 108 such as resource underutilization and overutilization, para. 32-35; in a test scenario, values can be slightly modified to be plus of their previous simulation, para. 35).
Grushka et al. do not explicitly teach, but Hermoni et al. teach, a second log with a third set of parameter values indicating a first set of parameter values and a second set of parameter values for applying to the first diagnostic tool, configuring a second diagnostic tool with the third set of parameter values, and applying the second diagnostic tool to obtain a fourth set of parameter values (values of the one or more parameters of a first entry of the log data may be used to create training data and testing data for the AI system … log data may include simulated log data, col. 4 lines 37-45; parameters such as bandwidth, latency, jitter, etc., as well as processing power, memory, storage, etc., col. 7 lines 20-32; scanning the log data for parameters, col. 4 line 65 through col. 5 line 20).
Refer to claim 1 for motivational statement.
In regard to claim 3, Grushka et al. teach the method of claim 2, wherein the first set of hardware units includes the processing resources of the distributed computing system, and wherein the second set of hardware units includes the memory resources of the distributed computing system (for instance, the system characteristics 112 can define available computing resources such as computing cores, memory and storage of a cluster, para. 43, fig. 3A).
In regard to claim 4, Grushka et al. do not explicitly teach, but Hermoni et al. teach, the method of claim 1, wherein extracting the first set of parameter values from the first log comprises executing a script that reads the first set of parameter values from the first log (the log data may be automatically scanned and labeled, col. 4 line 65 through col. 5 line 37).
Refer to claim 1 for motivational statement.
In regard to claim 5, Grushka et al. teach machine learning for the predictive model (para. 37); however, Grushka et al. do not explicitly teach, but Hermoni et al. teach, the method of claim 1, wherein the first diagnostic tool is based on a first artificial intelligence (AI) model (artificial-intelligence (AI) system, col. 5 lines 38-57), and wherein the method further comprises: determining whether performing the first diagnostic test generates a sufficient amount of data for training the first AI model; and in response to the first diagnostic test not generating the sufficient amount of data, re-performing the first diagnostic test (a higher level of the deep system module may include processes that simulate a network behavior when there is not enough log data, col. 21 lines 1-15).
Refer to claim 1 for motivational statement.
In regard to claim 6, Grushka et al. teach the method of claim 5, further comprising: training the first model based on the first set of parameter values (the resource requirement recommendation 104 includes a set of predicted resource requirements, fig. 1A, para. 31); and inferring the second set of parameter values by applying the first model at the second restriction level of the first set of hardware units (the modified resource requirement recommendation 128 can subsequently be evaluated within the simulated computing environment, para. 40).
Grushka et al. teach machine learning for the predictive model (para. 37); however, Grushka et al. do not explicitly teach, but Hermoni et al. teach, an AI model (artificial-intelligence (AI) system, col. 5 lines 38-57).
Refer to claim 1 for motivational statement.
In regard to claim 7, Grushka et al. teach the method of claim 1, wherein performing the first diagnostic test based on the first restriction level comprises:
performing a set of computations at a plurality of discrete restriction levels indicating corresponding resource consumptions of the first set of hardware units up to the first restriction level (simulated computing environment according to the resource requirement recommendations, para. 6-7, para. 31); and
incorporating respective outputs of the set of computations at the plurality of discrete restriction levels into the first log (the second resource requirement recommendation 304B is a superset of the first resource requirement recommendation 304A, para. 46, fig. 3).
In regard to claim 8, Grushka et al. teach the method of claim 1, further comprising storing the first log in a persistent database (data can be stored in a data structure in one or more memory components, para. 64, datastores 726 can be configured for algorithms for execution by a recommendation engine, para. 77).
In regard to claim 10, Grushka et al. do not explicitly teach, but Hermoni et al. teach, the method of claim 1, further comprising presenting a visual representation of the second set of parameter values on a user interface (the service operators and/or the communication customers may have an arrangement and/or agreement with an operator of the communication network, such as one or more service level agreements (SLAs), which define various parameters of the services provided by the communication network, col. 6 lines 10-35).
Refer to claim 1 for motivational statement.
In regard to claim 11, Grushka et al. teach a non-transitory computer-readable medium storing instructions to:
perform (the evaluation 120, para. 47, fig. 4) a first diagnostic test on a distributed computing system based on a first restriction level (resource utilization threshold 406 that can be utilized to analyze various resource requirement recommendations, para. 48, fig. 4) indicating resource consumption of a first set of hardware units of the distributed computing system (live deployment environment to accurately evaluate resource recommendations, para. 4, 12), the distributed computing system comprising a plurality of computing devices with processing and memory resources (the processing units of the processing systems are distributed, para. 66);
generate a first set of values indicating an output of the first diagnostic test at the first restriction level of the distributed computing system (the simulated environment 102 may then generate an evaluation 120 for the resource requirement recommendation, fig. 1D, para. 40);
store the first set of values in a first log in association with the first restriction level (data can be stored in a data structure in one or more memory components, para. 64, datastores 726 can be configured for algorithms for execution by a recommendation engine, para. 77);
configure a first diagnostic tool (predictive model, fig. 1D, 106, para. 4, 31) with the first set of values to emulate the first diagnostic test (model can be trained using historical usage data of computing resources from past software deployments, para. 9); and
apply the first diagnostic tool to obtain a second set of values indicating an output of the first diagnostic test at a second restriction level of the first set of hardware units (many iterations and situations can be simulated, para. 31), the second restriction level being higher than the first restriction level (the simulated computing environment 102 can subsequently analyze the operation of instances 108 to generate an evaluation defining various metrics of the instances 108 such as resource underutilization and overutilization, para. 32-35; in a test scenario, values can be slightly modified to be plus of their previous simulation, para. 35).
Grushka et al. do not explicitly teach, but Hermoni et al. teach, a first set of parameter values and a second set of parameter values for applying to the first diagnostic tool (values of the one or more parameters of a first entry of the log data may be used to create training data and testing data for the AI system … log data may include simulated log data, col. 4 lines 37-45; parameters such as bandwidth, latency, jitter, etc., as well as processing power, memory, storage, etc., col. 7 lines 20-32; scanning the log data for parameters, col. 4 line 65 through col. 5 line 20).
Refer to claim 1 for motivational statement.
In regard to claim 12, Grushka et al. teach the non-transitory computer-readable storage medium of claim 11, wherein the instructions are further to:
perform a second diagnostic test on the distributed computing system based on the first restriction level indicating resource consumption of a second set of hardware units of the distributed computing system (test deployments for evaluating a resource requirement recommendation can be performed quickly, and thus, many iterations and situations can be simulated, para. 31, fig. 1A);
generate a second log comprising an output of the second diagnostic test at a third restriction level (the simulated environment 102 may then generate an evaluation 120 for the resource requirement recommendation, fig. 1D, para. 40; it is noted that different simulations would generate multiple resource requirement recommendations, which equate to the first and second logs);
configure a second diagnostic tool (predictive model, fig. 1D, 106, para. 4, 31, it is noted that there can be more than one model within the system) to emulate the second diagnostic test (model can be trained using historical usage data of computing resources from past software deployments, para. 9); and
apply the second diagnostic tool to obtain an output of the second diagnostic test at a fourth restriction level of the second set of hardware units (many iterations and situations can be simulated, para. 31), the fourth restriction level being higher than the third restriction level (the simulated computing environment 102 can subsequently analyze the operation of instances 108 to generate an evaluation defining various metrics of the instances 108 such as resource underutilization and overutilization, para. 32-35; in a test scenario, values can be slightly modified to be plus of their previous simulation, para. 35).
Grushka et al. do not explicitly teach, but Hermoni et al. teach, a second log with a third set of parameter values, configuring a second diagnostic tool with the third set of parameter values, and applying the second diagnostic tool to obtain a fourth set of parameter values (values of the one or more parameters of a first entry of the log data may be used to create training data and testing data for the AI system … log data may include simulated log data, col. 4 lines 37-45; parameters such as bandwidth, latency, jitter, etc., as well as processing power, memory, storage, etc., col. 7 lines 20-32; scanning the log data for parameters, col. 4 line 65 through col. 5 line 20).
Refer to claim 1 for motivational statement.
In regard to claim 13, Grushka et al. teach the non-transitory computer-readable storage medium of claim 12, wherein the first set of hardware units includes the processing resources of the distributed computing system, and wherein the second set of hardware units includes the memory resources of the distributed computing system (for instance, the system characteristics 112 can define available computing resources such as computing cores, memory and storage of a cluster, para. 43, fig. 3A).
In regard to claim 14, Grushka et al. do not explicitly teach, but Hermoni et al. teach, the non-transitory computer-readable storage medium of claim 11, wherein, to extract the first set of parameter values from the first log, the instructions are further to execute a script that reads the first set of parameter values from the first log (the log data may be automatically scanned and labeled, col. 4 line 65 through col. 5 line 37).
Refer to claim 1 for motivational statement.
In regard to claim 15, Grushka et al. teach machine learning for the predictive model (para. 37); however, Grushka et al. do not explicitly teach, but Hermoni et al. teach, the non-transitory computer-readable storage medium of claim 11, wherein the first diagnostic tool is based on a first artificial intelligence (AI) model (artificial-intelligence (AI) system, col. 5 lines 38-57), and wherein the instructions are further to: determine whether performing the first diagnostic test generates a sufficient amount of data for training the first AI model; and in response to the first diagnostic test not generating the sufficient amount of data, re-perform the first diagnostic test (a higher level of the deep system module may include processes that simulate a network behavior when there is not enough log data, col. 21 lines 1-15).
Refer to claim 1 for motivational statement.
In regard to claim 16, Grushka et al. teach the non-transitory computer-readable storage medium of claim 15, wherein the instructions are further to:
train the first model based on the first set of parameter values (the resource requirement recommendation 104 includes a set of predicted resource requirements, fig. 1A, para. 31); and
infer the second set of parameter values by applying the first AI model at the second restriction level of the first set of hardware units (the modified resource requirement recommendation 128 can subsequently be evaluated within the simulated computing environment, para. 40).
Grushka et al. teach machine learning for the predictive model (para. 37); however, Grushka et al. do not explicitly teach, but Hermoni et al. teach, an AI model (artificial-intelligence (AI) system, col. 5 lines 38-57).
Refer to claim 1 for motivational statement.
In regard to claim 17, Grushka et al. teach the non-transitory computer-readable storage medium of claim 11, wherein, to perform the first diagnostic test based on the first restriction level, the instructions are further to:
perform a set of computations at a plurality of discrete restriction levels indicating corresponding resource consumptions of the first set of hardware units up to the first restriction level (simulated computing environment according to the resource requirement recommendations, para. 6-7, para. 31); and
incorporate respective outputs of the set of computations at the plurality of discrete restriction levels into the first log (the second resource requirement recommendation 304B is a superset of the first resource requirement recommendation 304A, para. 46, fig. 3).
In regard to claim 18, Grushka et al. teach the non-transitory computer-readable storage medium of claim 11, wherein the instructions are further to store the first log in a persistent database (data can be stored in a data structure in one or more memory components, para. 64, datastores 726 can be configured for algorithms for execution by a recommendation engine, para. 77).
In regard to claim 19, Grushka et al. do not explicitly teach, but Hermoni et al. teach, the non-transitory computer-readable storage medium of claim 11, wherein the instructions are further to present a visual representation of the second set of parameter values on a user interface (the service operators and/or the communication customers may have an arrangement and/or agreement with an operator of the communication network, such as one or more service level agreements (SLAs), which define various parameters of the services provided by the communication network, col. 6 lines 10-35).
Refer to claim 1 for motivational statement.
In regard to claim 20, Grushka et al. teach a computer system, comprising:
a processing resource (processing units, fig. 6, para. 66);
a non-transitory computer-readable storage medium (system memory, fig. 6, para. 66) storing instructions that when executed by the processing resource cause the computer system to:
perform (the evaluation 120, para. 47, fig. 4) a first diagnostic test on a distributed computing system based on a first restriction level (resource utilization threshold 406 that can be utilized to analyze various resource requirement recommendations, para. 48, fig. 4) indicating resource consumption of a first set of hardware units of the distributed computing system (live deployment environment to accurately evaluate resource recommendations, para. 4, 12), the distributed computing system comprising a plurality of computing devices with processing and memory resources (the processing units of the processing systems are distributed, para. 66);
store, in a first log, an output of the first diagnostic test at the first restriction level of the distributed computing system (the simulated environment 102 may then generate an evaluation 120 for the resource requirement recommendation, fig. 1D, para. 40);
configure a first diagnostic tool (predictive model, fig. 1D, 106, para. 4, 31) with the first set of parameter values to emulate the first diagnostic test (model can be trained using historical usage data of computing resources from past software deployments, para. 9); and
execute the first diagnostic tool at a second restriction level of the first set of hardware units (many iterations and situations can be simulated, para. 31) to obtain a second set of parameter values indicating an output of the first diagnostic test, the second restriction level being higher than the first restriction level (the simulated computing environment 102 can subsequently analyze the operation of instances 108 to generate an evaluation defining various metrics of the instances 108 such as resource underutilization and overutilization, para. 32-35; in a test scenario, values can be slightly modified to be plus of their previous simulation, para. 35).
Grushka et al. do not explicitly teach, but Hermoni et al. teach, the output comprising a first set of parameter values and a second set of parameter values for applying to the first diagnostic tool (values of the one or more parameters of a first entry of the log data may be used to create training data and testing data for the AI system … log data may include simulated log data, col. 4 lines 37-45; parameters such as bandwidth, latency, jitter, etc., as well as processing power, memory, storage, etc., col. 7 lines 20-32; scanning the log data for parameters, col. 4 line 65 through col. 5 line 20).
Refer to claim 1 for motivational statement.
**************
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Grushka et al. (US 2023/0246981) in view of Hermoni et al. (US 10,846,141), and further in view of Hamlin et al. (US 2025/0298457).
In regard to claim 9, Grushka et al. and Hermoni et al. do not explicitly teach the method of claim 1, wherein a first power consumption of the first set of hardware units at the first restriction level is less than a second power consumption of the first set of hardware units at the second restriction level.
Hamlin et al. teach dynamically adjusting a power limit value (para. 206, figs. 12A-12B), where the power limit orchestration service 1010 generates a cumulative power consumption level for most or all nodes and determines an appropriate power limit value according to one or more threshold values, e.g., if the cumulative power consumption level is 35W-40W (set first power limit level, clock speed at 100%), 40W-45W (clock speed at 90%), 40W-45W (clock speed at 80%) (para. 213-216).
It would have been obvious to modify the method of Grushka et al. and Hermoni et al. by adding the dynamic power limit orchestration system of Hamlin et al. A person of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make the modification because it would aid in supporting a heterogeneous computing platform (para. 205).
**************
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892.
Sayyed et al. (US 2025/0315336) self diagnostic operating modes
Li et al. (US 2019/0028840) diagnostic tool, power consumption
Gupta et al. (US 2014/0333287) diagnostic module, power threshold
Gangemi et al. (US 2012/0203536) resource environment tests
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LOAN TRUONG, whose telephone number is 408-918-7552. The examiner can normally be reached 10AM-6PM PST, M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Thomas Ashish, can be reached at 571-272-0631. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Loan L.T. Truong/
Primary Examiner, Art Unit 2114
Loan.truong@uspto.gov