Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Status of Claims
1. Applicant's amendment dated 11/03/2025, filed in response to the Office Action mailed 08/12/2025 that set forth the rejection of claims 1-20, has been entered.
2. Claims 1 and 3-20 have been amended, and claim 2 has been canceled.
3. Claims 1 and 3-20 are pending in the application, of which claims 1 and 19-20 are in independent form, and all pending claims have been fully considered by the examiner.
Response to Amendments
4. (A) Regarding claim objection: The claim objection raised in the previous Office Action has been withdrawn in view of Applicants' amendments.
(B) Regarding 112(b) rejection: The 112(b) rejection raised in the previous Office Action has been withdrawn in view of Applicants' amendments.
(C) Regarding 101 rejections: The 101 rejections raised in the previous Office Action are maintained notwithstanding Applicants' amendments.
(D) Regarding art rejection: Applicants' amendments necessitated the new grounds of rejection presented in the following art rejection. Please see John Regehr (Test-Case Reduction for C Compiler Bugs, 2012 – hereinafter Regehr).
Examiner Notes
5. Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
Information Disclosure Statement
6. The information disclosure statements (IDS) submitted on 10/14/2025 and 11/25/2025 have been received. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.
Claim Rejections - 35 USC § 103
8. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
9. Claim(s) 1, 3-7, 9-13, 16-17 and 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Golovin et al. (WO 2018222204 A1 – IDS filed on 08/23/2024 – hereinafter Golovin) in view of John Regehr (Test-Case Reduction for C Compiler Bugs, 2012 – hereinafter Regehr).
Regarding claim 1,
Golovin discloses
A computing device (computing device – See paragraph [0403]), comprising:
a memory configured to store one or more computer instructions (a memory 114 – See paragraph [0403]); and
one or more processors configured to run the one or more computer instructions stored in the memory (processors – See paragraph [0404]), to cause the computing device to execute operations comprising:
after receiving a parameter optimization request, determining a target function corresponding to the received parameter optimization request (performance can be measured or evaluated as a function of those parameters – See paragraph [0089]; optimizing one or more adjustable parameters (e.g. operating parameters) of a system. In particular, the present disclosure provides a parameter optimization system that can perform one or more black-box optimization techniques to iteratively suggest new sets of parameter values for evaluation…the parameter optimization system can provide an evaluation service that evaluates the suggested parameter values using one or more evaluation devices – See paragraph [0092]);
in a process of performing a parameter test on any candidate parameter of a computing model by the target function to obtain a test result corresponding to the candidate parameter (perform one or more black-box optimization techniques to iteratively suggest new sets of parameter values for evaluation… the parameter optimization system can provide an evaluation service that evaluates the suggested parameter values using one or more evaluation devices – See paragraphs [0091-0092]; functions with known optimal solutions designed to test the ability of black-box optimization routines can be used – See paragraph [0301]), [[calling a test reduction module comprising a plurality of test reduction algorithms to determine
determining a target reduction algorithm matched with the parameter test from the plurality of test reduction algorithms of the test reduction module and calling the target reduction algorithm to determine whether the parameter test meets the reduction condition;]]
in response to the parameter test meeting the reduction condition (reach an acceptable degree of optimization in fewer iterations, thereby reducing the total computation associated with the optimization…reduced computational resource expenditure, when compared with alternative approaches such as Bayesian Optimization – See paragraphs [0090-0093]. The iterations can be stopped when a certain number of sequential iteration-over-iteration improvements are each below the threshold value – See paragraphs [0427-0428]), stopping the parameter test of the candidate parameter (iteratively suggest new sets of parameter values based on the returned results. The iterative suggestion and evaluation process can serve to optimize or otherwise improve the overall performance of the system, as evaluated by an objective function that evaluates one or more metrics…perform early stopping may reduce the expenditure of computational resources that are associated with continuing the performance of on-going variant evaluations which are determined to be unlikely to ultimately yield a final performance evaluation that is in excess of a current-best performance evaluation – See paragraphs [0091-0093 and 0121]); or
[[in response to the parameter test failing to meet the reduction condition, continuing to execute the parameter test of the candidate parameter to obtain the test result of the candidate parameter.]]
Golovin does not disclose
calling a test reduction module comprising a plurality of test reduction algorithms to determine
determining a target reduction algorithm matched with the parameter test from the plurality of test reduction algorithms of the test reduction module and calling the target reduction algorithm to determine whether the parameter test meets the reduction condition;
Regehr discloses
calling a test reduction module (C-reduce invokes – See Listing 2. C-Reduce calls five kinds of transformations. The first includes “peephole optimizations” that operate on a contiguous segment of the tokens within a test case…C-Reduce module – See page 6) comprising a plurality of test reduction algorithms to determine (delta debugging algorithms – See page 3. Some of the delta debugging algorithms that we used to reduce C programs (e.g., Berkeley delta) produce variants that are not even syntactically valid – See page 5, left column. All three of our reducers adopt Berkeley delta’s convention of being parameterized by a test that determines whether a variant is successful or unsuccessful – See page 5, left column – Listing 2. The C-Reduce algorithm):
determining a target reduction algorithm matched with the parameter test from the plurality of test reduction algorithms of the test reduction module (All three of our reducers adopt Berkeley delta’s convention of being parameterized by a test that determines whether a variant is successful or unsuccessful – See page 5) and calling the target reduction algorithm to determine whether the parameter test meets the reduction condition (The inputs to each reducer—Berkeley delta, Seq-Reduce, Fast-Reduce, and C-Reduce—are the test case that is to be reduced and a shell script that determines whether a variant is successful. Berkeley delta additionally requires a “level” parameter that specifies how much syntax-driven flattening of its input to perform – See page 7, left column);
Regehr also discloses
in response to the parameter test meeting the reduction condition (C-reduce algorithm – Listing 2, page 6), stopping the parameter test of the candidate parameter (result == stop, break – Listing 2, page 6); or
in response to the parameter test failing to meet the reduction condition, continuing to execute the parameter test of the candidate parameter to obtain the test result of the candidate parameter (Because delta debugging is an iterative optimization algorithm, once a test-case reduction goes wrong in this fashion – See page 4).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Regehr’s teaching into Golovin’s invention because doing so would enhance Golovin by enabling it to invoke a module to perform reduction operations, as suggested by Regehr (page 1).
Regarding claim 3, the computing device of claim 1,
Golovin discloses
wherein the target reduction algorithm is determined from the plurality of test reduction algorithms (to reduce resource expenditure resulting from function evaluation, while others (for instance those relating to the "Gradientless Descent" optimization algorithm provided by the present disclosure) may serve to reduce computational resource expenditure resulting from execution of the optimization algorithm – See paragraphs [0091-0093]) by:
searching the target reduction algorithm matched with the parameter test from the plurality of test reduction algorithms (Another class of black-box optimization algorithms performs a local search by selecting points that maintain a search pattern – See paragraph [0101-0102]).
Regarding claim 4, the computing device of claim 3,
Golovin discloses
wherein searching the target reduction algorithm matched with the parameter test from the plurality of test reduction algorithms (Another class of black-box optimization algorithms performs a local search by selecting points that maintain a search pattern – See paragraphs [0101-0102]. CreateStudy: Given a Study configuration, this can create an optimization Study and return a globally unique identifier ("guid") which can be used for all future system calls. If a Study with a matching name exists, the guid for that Study is returned – See paragraph [0180]) comprises:
determining first test information according to the parameter test of the candidate parameter (the performance of the model can be measured according to different metrics such as, for example, the accuracy of the model (e.g., on a validation data set or testing data set) – See paragraph [0094]);
generating second test information respectively corresponding to the plurality of test reduction algorithms (provide both improved optimization and reduced computational resource expenditure, when compared with alternative approaches such as Bayesian Optimization – See paragraph [0093]);
searching, from the second test information, target test information matched with the first test information (depending on the problem, a one-to-one nonlinear mapping may be used for some of the parameters, and is typically used on the labels. Data normalization can be handled before trials are presented to the trial suggestion algorithms, and its suggestions can be transparently mapped back to the user- specified scaling – See paragraph [0215]); and
determining the test reduction algorithm corresponding to the target test information as the target reduction algorithm (applications of black-box optimization, information related to the performance of a trial may become available during trial evaluation. For example, this may take the form of intermediate results. If sufficiently poor, these intermediate results can be used to terminate a trial or evaluation early, thereby saving resources – See paragraphs [0215-0217]).
Regarding claim 5, the computing device of claim 4,
Golovin discloses
wherein the first test information comprises: a parameter test type (the parameter optimization system can be employed to optimize the adjustable parameters (e.g., component or ingredient type or amount, production order, production timing) of a physical product or process of producing a physical product such as, for example, an alloy, a metamaterial, a concrete mix, a process for pouring concrete, a drug cocktail, or a process for performing therapeutic treatment – See paragraph [0096]), and wherein searching the target test information matched with the first test information from the second test information respectively corresponding to the plurality of test reduction algorithms (Depending on the problem, a one-to-one nonlinear mapping may be used for some of the parameters, and is typically used on the labels. Data normalization can be handled before trials are presented to the trial suggestion algorithms, and its suggestions can be transparently mapped back to the user- specified scaling – See paragraph [0215]) comprises:
searching the target test information matched with the parameter test type of the first test information from the second test information respectively corresponding to the plurality of test reduction algorithms (The parameter optimization system of the present disclosure can use any number of different types of black-box optimization techniques, including the aforementioned novel optimization technique provided herein which is referred to as "Gradientless Descent." Black-box optimization techniques make minimal assumptions about the problem under consideration – See paragraphs [0099 -0100]).
Regarding claim 6, the computing device of claim 5,
Golovin discloses
wherein the parameter test type comprises: a serial test type (selected at random from a geometric series of radii. An upper limit on the geometric series of radii can be dependent on a diameter of a dataset, a resolution of the dataset and a dimensionality of an objective function – See paragraphs [0051-0052]) and a parallel test type (the parameter optimization system can support parallelization and/or be designed asynchronously – See paragraphs [0114-0116]), and wherein searching the target test information matched with the parameter test type of the first test information from the second test information respectively corresponding to the plurality of test reduction algorithms (the parameter optimization system of the present disclosure can use any number of different types of black-box optimization techniques, including the aforementioned novel optimization technique provided herein which is referred to as "Gradientless Descent." Black-box optimization techniques make minimal assumptions about the problem under consideration – See paragraphs [0099-0100]) comprises:
in response to the parameter test type of the candidate parameter being the serial test type, determining the target test information with the serial test type in the second test information respectively corresponding to the plurality of test reduction algorithms (the benchmarking suite can optimize each function with each algorithm k times (where k is configurable), producing a series of performance-over-time metrics which can then be formatted after execution. The individual runs can be distributed over multiple threads and multiple machines, so it is easy to have thousands or more of benchmark runs being executed in parallel – See paragraph [0203]); or
in response to the parameter test type of the candidate parameter being the parallel test type, determining the target test information with the parallel test type in the second test information respectively corresponding to the plurality of test reduction algorithms (the parameter optimization system can provide the ability to ask for additional suggestions at any time and/or report back results at any time. Thus, in some implementations, the parameter optimization system can support parallelization and/or be designed asynchronously… in some instances it may be desired for the system to suggest multiple trials to run in parallel. The multiple trials should collectively contain a diverse set of parameter values that are believed to provide "good" results. Performing such batch suggestion requires the parameter optimization system to have some additional algorithmic sophistication -- See paragraphs [0114-0116]).
Regarding claim 7, the computing device of claim 4,
Golovin discloses
wherein the first test information comprises: a parameter test stage (the parameter optimization system can continuously or periodically consider which of a plurality of available black-box optimization techniques is best suited for performance of the next round of suggestion, given the current status of the study (e.g., number of trials, number of parameters, shape of data and previous trials, feasible parameter space) – See paragraphs [0107-0108]), and wherein searching the target test information matched with the first test information from the second test information respectively corresponding to the plurality of test reduction algorithms (the average optimality gap of each algorithm relative to Random Search, in problem space dimensions of 4, 8, 16, and 32. The horizontal axis shows the progress of the search in terms of the number of function evaluations – See paragraphs [0301-0304]) comprises:
searching the target test information matched with the parameter test stage of the first test information from the second test information respectively corresponding to the plurality of test reduction algorithms (the parameter optimization system of the present disclosure can be implemented as a managed service that stores the state of each optimization. This approach drastically reduces the effort a new user needs to get up and running; and a managed service with a well-documented and stable RPC API allows the service to be upgraded without user effort – See paragraphs [0157-0159]. The parameter optimization system of the present disclosure can include a web dashboard which can be used for monitoring/or and changing the state of Studies. The dashboard can be fully featured and can implement the full functionality of the parameter optimization system API. The dashboard can also be used for: (1) Tracking the progress of a study. (2) Interactive visualizations. (3) Creating, updating and deleting a study. (4) Requesting new suggestions, early stopping, activating/deactivating a study – See paragraph [0205]).
Regarding claim 9, the computing device of claim 1,
Golovin discloses
wherein the plurality of test reduction algorithms comprise a self-defined reduction algorithm set by a target user (receiving, by the one or more computing devices, a user input that selects the second black-box optimization technique from a plurality of available black-box optimization techniques – See paragraphs [0023-0026]); and
the target reduction algorithm is further determined from the plurality of test reduction algorithms by (reduce the expenditure of resources when performing optimization of the parameters of a system, product, or process – See paragraphs [0091-0092]):
in response to the self-defined reduction algorithm set by the target user existing in the plurality of test reduction algorithms, determining the self-defined reduction algorithm as the target reduction algorithm (The system can interface with a user device to receive results obtained through the evaluation of the suggested parameter values by the user…it is possible to reach an acceptable degree of optimization in fewer iterations, thereby reducing the total computation associated with the optimization – See paragraphs [0092-0094]).
Regarding claim 10, the computing device of claim 9,
Golovin discloses
wherein the operations further comprise: based on the self-defined reduction algorithm set by the target user, storing the self- defined reduction algorithm (the optimization algorithms supported by the parameter optimization system can be computed from or performed relative to the data stored in the system database, and nothing else, where all state is stored in the database. Such a configuration provides a major operational advantage: the state of the database can be changed (e.g., changed arbitrarily) and then processes, algorithms, metrics, or other methods can be performed "from scratch" (e.g., without relying on previous iterations of the processes, algorithms, metrics, or other methods) – See paragraphs [0105-0108]).
Regarding claim 11, the computing device of claim 9,
Golovin discloses
wherein in response to the self-defined reduction algorithm set by the target user existing in the plurality of test reduction algorithms, determining the self-defined reduction algorithm as the target reduction algorithm (the parameter optimization system can automatically switch between two or more different black box optimization techniques based on one or more factors, including, for example: a total number of trials associated with the study; a total number of adjustable parameters associated with the study; and a user-defined setting indicative of a desired processing time – See paragraphs [0105-0108]) comprises:
in response to the self-defined reduction algorithm set by the target user existing in the plurality of test reduction algorithms, generating prompt information showing existence of the self-defined reduction algorithm (the parameter optimization system can automatically switch between two or more different black box optimization techniques based on one or more factors, including, for example: a total number of trials associated with the study; a total number of adjustable parameters associated with the study; and a user-defined setting indicative of a desired processing time – See paragraphs [0105-0108]);
showing the prompt information to the target user for the target user to confirm whether the self-defined reduction algorithm is applicable to the parameter test (The operations include performing one or more black-box optimization techniques to generate a suggested trial based at least in part on the one or more results and the one or more sets of values respectively associated with the one or more results. The suggested trial includes a suggested set of values for the one or more adjustable parameters. The operations include accepting an adjustment to the suggested trial from a user – See paragraph [0010]); and
in response to the target user executing a confirming operation for the self-defined reduction algorithm applicable to the parameter test, determining the self-defined reduction algorithm as the target reduction algorithm (The operations include accepting an adjustment to the suggested trial from a user. The adjustment includes at least one change to the suggested set of values to form an adjusted set of values. The operations include receiving a new result obtained through evaluation of the adjusted set of values. The operations include associating the new result and the adjusted set of values with the study in the database – See paragraphs [0010, 0031 and 0110]).
Regarding claim 12, the computing device of claim 1,
Golovin discloses
wherein in the process of performing the parameter test on any candidate parameter of the computing model by the target function to obtain the test result corresponding to the candidate parameter, [[calling the test reduction module comprising the plurality of test reduction algorithms to determine whether the parameter test meets the reduction condition]] (it is possible to reach an acceptable degree of optimization in fewer iterations, thereby reducing the total computation associated with the optimization – See paragraph [0093]. A first black-box optimization technique may be superior when the number of previous trials to consider is low, but may become undesirably computationally expensive when the number of trials reaches a certain number; while a second black-box optimization technique may be superior (e.g., because it is less computationally expensive) when the number of previous trials to consider is very high. Thus, in one example, when the total number of trials associated with the study reaches a threshold amount, the parameter optimization system can automatically switch from use of the first technique to use of the second technique – See paragraphs [0108-0110]) comprises:
in the process of performing the parameter test on any candidate parameter by the target function, determining whether an intermediate test result of the parameter test meets the reduction condition (support use of one or more automated stopping algorithms that evaluate the intermediate statistics (e.g., initial results) of a pending trial to determine whether to perform early stopping of the trial, thereby saving resources that would otherwise be consumed by completing a trial that is not likely to provide a positive result – See paragraphs [0119-0120]. Black-box optimization, information related to the performance of a trial may become available during trial evaluation. For example, this may take the form of intermediate results. If sufficiently poor, these intermediate results can be used to terminate a trial or evaluation early, thereby saving resources – See paragraphs [0217-0219]).
Golovin does not disclose
calling the test reduction module comprising the plurality of test reduction algorithms to determine whether the parameter test meets the reduction condition.
Regehr discloses
calling the test reduction module (C-reduce invokes – See Listing 2. C-Reduce calls five kinds of transformations. The first includes “peephole optimizations” that operate on a contiguous segment of the tokens within a test case…C-Reduce module – See page 6) comprising the plurality of test reduction algorithms to determine whether the parameter test meets the reduction condition (delta debugging algorithms – See page 3. Some of the delta debugging algorithms that we used to reduce C programs (e.g., Berkeley delta) produce variants that are not even syntactically valid – See page 5, left column. All three of our reducers adopt Berkeley delta’s convention of being parameterized by a test that determines whether a variant is successful or unsuccessful – See page 5, left column – Listing 2. The C-Reduce algorithm).
Regehr also discloses
in the process of performing the parameter test on any candidate parameter by the target function, determining whether an intermediate test result of the parameter test meets the reduction condition (Seq-Reduce, Fast-Reduce, and C-Reduce—are the test case that is to be reduced and a shell script that determines whether a variant is successful. Berkeley delta additionally requires a “level” parameter that specifies how much syntax-driven flattening of its input to perform – See page 7).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Regehr’s teaching into Golovin’s invention because doing so would enhance Golovin by enabling a “level” parameter that specifies how much processing of the input to perform, as suggested by Regehr (page 7).
Regarding claim 13, the computing device of claim 12,
Golovin discloses
wherein determining whether the intermediate test result of the parameter test meets the reduction condition (create an optimization Study and return a globally unique identifier ("guid") which can be used for all future system calls. If a Study with a matching name exists, the guid for that Study is returned. This can allow parallel workers to call this method and all register with the same Study – See paragraphs [0178-0183]. Intermediate results can be used to terminate a trial or evaluation early, thereby saving resources – See paragraphs [0217-0218]) comprises:
in the process of performing the parameter test on any candidate parameter of the computing model by the target function, calling the target reduction algorithm to determine whether the intermediate test result of the parameter test meets the reduction condition (this design can ensure that all system calls are made with low latency, while allowing for the fact that the generation of Trials can take longer…AddMeasurementToTrial: This method can allow clients to provide intermediate metrics during the evaluation of a Trial. These metrics can then be used by the Automated Stopping rules to determine which Trials should be stopped early – See paragraphs [0178-0184]).
Regarding claim 16, the computing device of claim 1,
Golovin discloses
wherein the operations further comprise:
determining at least one candidate parameter failing to meet the reduction condition in a plurality of candidate parameters (fails to satisfy the first design criteria; For k studies with n trials each it would require /3(/c.sup.3n.sup.3) time. Such an approach also requires one to specify or learn kernel functions that bridge between the prior(s) and current Study, which may result in poorly chosen inductive biases and reducing its effectiveness – See paragraph [0247]), and obtaining the test result corresponding to the at least one candidate parameter (this normalized value over all benchmarks is taken resulting in the relative optimality gap of the algorithm applied to the benchmark – See paragraphs [0301-0302]); and
according to the test result corresponding to the at least one candidate parameter, selecting a target parameter meeting an optimal parameter condition from the at least one candidate parameter (an optimization algorithm to generate a suggested variant based at least in part on the one or more prior evaluations of performance and the associated set of values. The suggested variant is defined by a suggested set of values for the one or more adjustable parameters – See paragraphs [0006-0007]. The predefined probability can change (e.g., adaptively change) over a number of iterations of the method 1300. For example, the predefined probability can increasingly lead to selection of the ball sampling technique at 1304 as the number of iterations increases – See paragraphs [0415-0418]).
Regarding claim 17, the computing device of claim 16,
Golovin discloses
wherein the operations further comprise:
receiving the parameter optimization request initiated for a to-be-processed parameter of a target resource (Black box optimization can be used to find the best operating parameters for any system, product, or process whose performance can be measured or evaluated as a function of those parameters – See paragraphs [0089-0092]);
wherein determining the target function corresponding to the parameter optimization request in response to the parameter optimization request (Black box optimization can be used to find the best operating parameters for any system, product, or process whose performance can be measured or evaluated as a function of those parameters – See paragraphs [0089-0091]) comprises:
determining the target function corresponding to a processing target of the target resource in response to the parameter optimization request (the parameter optimization system can request, via an internal abstract policy, generation of the suggested trial by the external custom policy provided by the user – See paragraph [0125]); and
performing sampling processing on the to-be-processed parameter for a plurality of times to obtain a plurality of candidate parameters (maintaining a production system is that bugs are inevitably introduced as code matures. There are times when a new algorithmic change, however well tested, can lead to instances of the Suggestion Service failing for particular Studies. If a Study is picked up by the DanglingWorkFinder too many times, it can detect this, temporarily halt the Study, and alert an operator to the crashes. This can help prevent subtle bugs that only affect a few Studies from causing crash loops that can affect the overall stability of the system… optimize each function with each algorithm k times (where k is configurable), producing a series of performance-over-time metrics which can then be formatted after execution. The individual runs can be distributed over multiple threads and multiple machines, so it is easy to have thousands or more of benchmark runs being executed in parallel – See paragraphs [0190 and 0203]); and
after the operation of according to the test result corresponding to the at least one candidate parameter, selecting the target parameter meeting the optimal parameter condition from the at least one candidate parameter (Completed, and a partial performance curve (i.e., a set of measurements taken during Trial evaluation). Given this prediction, in some implementations, if the probability of exceeding the optimal value found – See paragraph [0221]), the data processing method further comprising:
according to a value of the to-be-processed parameter at the target parameter, generating processing information of the target resource to process the target resource according to the processing information (the suggested trial can include performing, by the one or more computing devices, a first black-box optimization technique to generate the suggested trial based at least in part on the one or more results and the one or more sets of values – See paragraphs [0020-0022]. Again, the parameter optimization system can enable and leverage a partnership between a human user and the parameter optimization system to improve computational resource expenditure, time or other attributes of the suggestion/evaluation process – See paragraph [0111]).
Regarding claim 19.
Golovin discloses
A computing device (a computing device – See paragraph [0403]), comprising:
a memory configured to store one or more computer instructions (a memory – See paragraphs [0403-0404]); and
one or more processors configured to run the one or more computer instructions stored in the memory, to cause the computing device (one or more processors – See paragraphs [0403-0404]), to execute operations comprising:
determining a processing resource corresponding to a parameter processing interface in response to a request of calling the parameter processing interface (the system can interface with a user device to receive results obtained through the evaluation of the suggested parameter values by the user – See paragraph [0092]. The web-based dashboard can be used for monitoring and/or changing the state of studies. The dashboard can be fully featured and implement the full functionality of a system API. The dashboard can be used for tracking the progress of the study; interactive visualizations; creating, update, and/or deleting a study; requesting new suggestions, early stopping, activating/deactivating a study; or other actions or interactions – See paragraph [0126]. A System API that takes service requests; (2) a Custom Policy that implements the Abstract Policy and generates suggested Trials; (3) a Playground Binary that drives the Custom Policy based on demand reported by the System API; and (4) the Evaluation Workers that behave as normal, such as, requesting and evaluating Trials – See paragraph [0196]);
executing using the processing resource corresponding to the parameter processing interface (users can configure a set of benchmark runs by providing a set of algorithm configurations and a set of objective functions. The benchmarking suite can optimize each function with each algorithm k times (where k is configurable), producing a series of performance-over-time metrics which can then be formatted after execution. The individual runs can be distributed over multiple threads and multiple machines, so it is easy to have thousands or more of benchmark runs being executed in parallel – See paragraph [0203]):
after receiving a parameter optimization request, determining a target function corresponding to the received parameter optimization request (optimizing one or more adjustable parameters (e.g. operating parameters) of a system. In particular, the present disclosure provides a parameter optimization system that can perform one or more black-box optimization techniques to iteratively suggest new sets of parameter values for evaluation…the parameter optimization system can provide an evaluation service that evaluates the suggested parameter values using one or more evaluation devices – See paragraph [0092]);
in a process of performing a parameter test on any candidate parameter of a computing model by the target function to obtain a test result corresponding to the candidate parameter (perform one or more black-box optimization techniques to iteratively suggest new sets of parameter values for evaluation… the parameter optimization system can provide an evaluation service that evaluates the suggested parameter values using one or more evaluation devices – See paragraphs [0091-0092]; functions with known optimal solutions designed to test the ability of black-box optimization routines can be used – See paragraph [0301]), [[calling a test reduction module comprising a plurality of test reduction algorithms to determine
determining a target reduction algorithm matched with the parameter test from the plurality of test reduction algorithms of the test reduction module; and
calling the target reduction algorithm to determine whether the parameter test meets the reduction condition]] (functions with known optimal solutions designed to test the ability of black-box optimization routines can be used – See paragraph [0301]), (reduce resource expenditure resulting from function evaluation, while others (for instance those relating to the "Gradientless Descent" optimization algorithm provided by the present disclosure) may serve to reduce computational resource expenditure resulting from execution of the optimization algorithm – See paragraphs [0091-0092]);
in response to the parameter test meeting the reduction condition (reach an acceptable degree of optimization in fewer iterations, thereby reducing the total computation associated with the optimization…reduced computational resource expenditure, when compared with alternative approaches such as Bayesian Optimization – See paragraphs [0090-0093]), stopping the parameter test of the candidate parameter (iteratively suggest new sets of parameter values based on the returned results. The iterative suggestion and evaluation process can serve to optimize or otherwise improve the overall performance of the system, as evaluated by an objective function that evaluates one or more metrics…perform early stopping may reduce the expenditure of computational resources that are associated with continuing the performance of on-going variant evaluations which are determined to be unlikely to ultimately yield a final performance evaluation that is in excess of a current-best performance evaluation – See paragraphs [0091-0093 and 0121]); or
in response to the parameter test failing to meet the reduction condition, continuing to execute the parameter test of the candidate parameter to obtain the test result of the candidate parameter.
Golovin does not disclose
calling a test reduction module comprising a plurality of test reduction algorithms to determine
determining a target reduction algorithm matched with the parameter test from the plurality of test reduction algorithms of the test reduction module; and
calling the target reduction algorithm to determine whether the parameter test meets the reduction condition;
Regehr discloses
calling a test reduction module (C-reduce invokes – See Listing 2. C-Reduce calls five kinds of transformations. The first includes “peephole optimizations” that operate on a contiguous segment of the tokens within a test case…C-Reduce module – See page 6) comprising a plurality of test reduction algorithms to determine (delta debugging algorithms – See page 3. Some of the delta debugging algorithms that we used to reduce C programs (e.g., Berkeley delta) produce variants that are not even syntactically valid – See page 5, left column. All three of our reducers adopt Berkeley delta’s convention of being parameterized by a test that determines whether a variant is successful or unsuccessful – See page 5, left column – Listing 2. The C-Reduce algorithm):
determining a target reduction algorithm matched with the parameter test from the plurality of test reduction algorithms of the test reduction module (All three of our reducers adopt Berkeley delta’s convention of being parameterized by a test that determines whether a variant is successful or unsuccessful – See page 5) and calling the target reduction algorithm to determine whether the parameter test meets the reduction condition (The inputs to each reducer—Berkeley delta, Seq-Reduce, Fast-Reduce, and C-Reduce—are the test case that is to be reduced and a shell script that determines whether a variant is successful. Berkeley delta additionally requires a “level” parameter that specifies how much syntax-driven flattening of its input to perform – See page 7, left column);
Regehr also discloses
in response to the parameter test meeting the reduction condition (C-reduce algorithm – Listing 2, page 6), stopping the parameter test of the candidate parameter (result == stop, break – Listing 2, page 6); or
in response to the parameter test failing to meet the reduction condition, continuing to execute the parameter test of the candidate parameter to obtain the test result of the candidate parameter (Because delta debugging is an iterative optimization algorithm, once a test-case reduction goes wrong in this fashion – See page 4).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Regehr’s teaching into Golovin’s invention because doing so would enhance Golovin by enabling the invocation of a module to perform reduction operations, as suggested by Regehr (page 1).
Regarding claim 20.
Golovin discloses
A computing device (a computing device – See paragraph [0401]), comprising:
a memory configured to store one or more computer instructions (a memory – See paragraphs [0403-0404]); and
one or more processors configured to run the one or more computer instructions stored in the memory, to cause the computing device (one or more processors – see paragraphs [0403-0405]), to execute operations comprising:
receiving a determination request for determining whether a parameter test on a candidate parameter by a target function meets a reduction condition initiated by a computing device (receiving, by the one or more computing devices, one or more intermediate evaluations of performance of the suggested variant. The intermediate evaluations have been obtained from on an ongoing evaluation of the suggested variant – See paragraph [0006]. An optimization algorithm to generate a suggested variant of the machine-learning model based at least in part on the one or more prior evaluations of performance and the associated set of adjustable parameter values. The suggested variant of the machine-learning model is defined by a suggested set of adjustable parameter values – See paragraphs [0024-0028]), wherein the target function is determined by the computing device after receiving a parameter optimization request (automatically selecting, by the one or more computing devices, the second black-box optimization technique from the plurality of available black-box optimization techniques – See paragraphs [0025-0027]); and
determining whether the parameter test meets the reduction condition in response to the determination request by:(the suggested set of parameter values based at least in part on the one or more results and the one or more sets of parameter values can include requesting – See paragraphs [0045-0049]),
[[determining a target reduction algorithm matched with the parameter test from a plurality of test reduction algorithms of a test reduction module; and
calling the target reduction algorithm to determine whether the parameter test meets the reduction condition,]]
wherein the parameter test of the candidate parameter is stopped when the reduction condition is met (determine whether to perform early-stopping of the ongoing evaluation of the suggested variant. The method includes, in response to determining that early-stopping is to be performed, causing, by the one or more computing devices, early-stopping to be performed in respect of the ongoing evaluation or providing an indication that early-stopping should be performed – See paragraphs [0006-0008]), and
[[the parameter test of the candidate parameter continues to be executed when the reduction condition fails to be met so as to obtain a test result of the candidate parameter.]]
Golovin does not disclose
determining a target reduction algorithm matched with the parameter test from a plurality of test reduction algorithms of a test reduction module; and
calling the target reduction algorithm to determine whether the parameter test meets the reduction condition;
the parameter test of the candidate parameter continues to be executed when the reduction condition fails to be met so as to obtain a test result of the candidate parameter.
Regehr discloses
determining a target reduction algorithm matched with the parameter test from a plurality of test reduction algorithms of a test reduction module (All three of our reducers adopt Berkeley delta’s convention of being parameterized by a test that determines whether a variant is successful or unsuccessful – See page 5); and
calling the target reduction algorithm to determine whether the parameter test meets the reduction condition (The inputs to each reducer—Berkeley delta, Seq-Reduce, Fast-Reduce, and C-Reduce—are the test case that is to be reduced and a shell script that determines whether a variant is successful. Berkeley delta additionally requires a “level” parameter that specifies how much syntax-driven flattening of its input to perform – See page 7, left column);
the parameter test of the candidate parameter continues to be executed when the reduction condition fails to be met so as to obtain a test result of the candidate parameter (Because delta debugging is an iterative optimization algorithm, once a test-case reduction goes wrong in this fashion – See page 4. Previous program reducers based on delta debugging failed to produce sufficiently small test cases. Moreover, they frequently produced invalid test cases that rely on undefined or unspecified behavior – See page 11).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Regehr’s teaching into Golovin’s invention because doing so would enhance Golovin by enabling the invocation of a module to perform reduction operations, as suggested by Regehr (page 1).
10. Claims 8, 15 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Golovin and Regehr as applied to claim 13 above, and further in view of Velipasaoglu et al. (US Pub. No. 2020/0379892 A1 – herein after Veli).
Regarding claim 8, the computing device of claim 4,
Golovin discloses
wherein the first test information comprises: a parameter test stage and a parameter test type (the parameter optimization system can continuously or periodically consider which of a plurality of available black-box optimization techniques is best suited for performance of the next round of suggestion, given the current status of the study (e.g., number of trials, number of parameters, shape of data and previous trials, feasible parameter space) – See paragraphs [0107-0108]), and wherein searching the target test information matched with the first test information from the second test information respectively corresponding to the plurality of test reduction algorithms (the average optimality gap of each algorithm relative to Random Search, in problem space dimensions of 4, 8, 16, and 32. The horizontal axis shows the progress of the search in terms of the number of function evaluations – See paragraphs [0301-0304]) comprises:
Golovin does not disclose
determining the target test information matched with both the parameter test type and the parameter test stage from the second test information respectively corresponding to the plurality of test reduction algorithms.
Veli discloses
determining the target test information matched with both the parameter test type and the parameter test stage from the second test information respectively corresponding to the plurality of test reduction algorithms (Continuing with the description of FIG. 2, test planning, configuration and execution engine 152 utilizes automation tools 254 that collect the performance metrics and configuration parameter states from monitoring system 105. Test planning, configuration and execution engine 152 performs analysis to decide the next set of configuration parameter values to use in tests to determine a performance difference. The next set of configuration parameters is then sent in config map 255 to application 226 by an actuator, to change the state of the test instance of the application – See paragraphs [0036-0037]. For a minimization problem, to find a large enough objective value that can serve as a proxy for cases in which the application becomes unresponsive, test planning, configuration and execution engine 152 can use a simple ascent method such as a coordinate ascent that successively minimizes along coordinate directions to find the minimum of the function, combined with a line search method, by taking small steps to the left and right of the current point—using the step size provided by the operator, in some cases – See paragraph [0055]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Veli’s teaching into Golovin’s and Regehr’s inventions because doing so would enhance Golovin and Regehr by enabling application of the performance evaluation criteria to determine a performance difference and evaluation of the stabilization of the performance difference as the cycle progresses, as suggested by Veli (Abstract).
Regarding claim 15, the computing device of claim 13,
Veli discloses
wherein the target reduction algorithm comprises:
a computational comparison algorithm, and the computational comparison algorithm determines whether the intermediate test result meets the reduction condition (test planning, configuration and execution engine 152 utilizes automation tools 254 that collect the performance metrics and configuration parameter states from monitoring system 105. Test planning, configuration and execution engine 152 performs analysis to decide the next set of configuration parameter values to use in tests to determine a performance difference. The next set of configuration parameters is then sent in config map 255 to application 226 by an actuator, to change the state of the test instance of the application – See paragraphs [0034-0036]) by:
determining an intermediate reference value corresponding to a monitoring node for obtaining the intermediate test result in the parameter test process (visualizing and alerting on performance metrics data 102, which includes results of automatic testing in which a test stimulus is applied for an application, and results are stored for both reference instances and test instances. User computing device 176 accepts operator inputs, which include starting values for configuration parameter components, and displays reporting results of the automatic testing, including configuration settings from one of the configuration points – See paragraph [0029]);
determining whether the intermediate reference value meets a preset reference threshold (the drone tracker configuration parameters reach the test completion criteria in which the overall latency is stabilized, and the latency as a function of time for the test stimulus is lower than the latency for the reference instance through the pipeline. The test completion criteria for a descent based method are typically represented as a threshold on relative improvement – See paragraphs [0065-0066]);
in response to the intermediate reference value failing to meet the preset reference threshold, determining that the intermediate test result meets the reduction condition; or
in response to the intermediate reference value meeting the preset reference threshold, determining that the intermediate test result fails to meet the reduction condition (for the iterative descent methods that reduce the step size gradually, it is important to choose the initial step size correctly. If the initial step size is too small, the noise in the performance metric can mimic local minima, causing the search to terminate prematurely – See paragraphs [0045]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Veli’s teaching into Golovin’s and Regehr’s inventions because doing so would enhance Golovin and Regehr by enabling successive minimization along coordinate directions to find the minimum of the function, as suggested by Veli (paragraph [0055]).
Regarding claim 18, the computing device of claim 16,
Veli discloses
wherein the operations further comprise:
detecting a browsing operation initiated by a target user, and generating the parameter optimization request for a browsing parameter of the target user (an iteration involves setting the configuration parameters to the new desired values, restarting the application, and measuring the target performance metrics, thus obtaining the value of the objective function to be optimized at the current configuration settings – See paragraphs [0041-0042]); and
wherein determining the target function corresponding to the parameter optimization request in response to the parameter optimization request (determining configuration parameters that meet test criteria need to deal with the noise in the objective function beyond the initial step – See paragraphs [0043-0049]) comprises:
determining the target function corresponding to a visit target of the target user in response to the parameter optimization request (each test cycle is a descent from the initial test configuration applied to the test instance of the application at the configuration points to a minimum of the performance metric – See paragraph [0052]);
the data processing method further comprising: performing sampling processing on the browsing parameter for a plurality of times to obtain a plurality of candidate parameters (the test cycle would run several times, and the best result would be selected. Analysis of the result of a most recent control step is usable for determining the next configuration set – See paragraphs [0051-0053]); and
after the operation of according to the test result respectively corresponding to the at least one candidate parameter, selecting the target parameter meeting the optimal parameter condition from the at least one candidate parameter (to speed up the Bayesian fitting process, test planning, configuration and execution engine 152 can modify the acquisition method, receiving multiple candidate parameter sets from the acquisition function along with their acquisition value—that is, the criterion used by the acquisition function to select the next query point. Test planning, configuration and execution engine 152 calculates a modified acquisition value by dividing the original acquisition value with a penalty that is proportional to the reciprocal of the distance of the query point to the closest known infeasible point, thereby reducing the acquisition value of the query candidates that are close to known infeasible points – See paragraphs [0057-0060]);
according to a value of the browsing parameter at the target parameter, generating visit recommendation information of the target user (configuration change in a live system is to treat the performance metric under study as an output of a system that experiences a step function change in its input – See paragraphs [0057-0060]); and
searching a target product matched with the visit recommendation information from a product database so as to output the target product to the target user (configuration parameter sets 172 can store information from one or more tenants and one or more applications into tables of a common database image – See paragraph [0031]. Black box testing is usable to check that the output of an application is as expected, given specific configuration parameter inputs – See paragraphs [0025-0027]. The config map is updated via the Kubernetes API. Kubernetes is a container orchestration system for automating deployment, scaling and management. Test planning, configuration and execution engine 152 leverages Kubernetes' ConfigMap functionality to represent a set of application control parameters as metadata and to the application as a test config 255 – See paragraphs [0034-0037]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Veli’s teaching into Golovin’s and Regehr’s inventions because doing so would enhance Golovin and Regehr by enabling updating of configuration parameters and evaluation of the stabilization of the performance difference as a particular test cycle progresses, as suggested by Veli (paragraph [0060]).
11. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Golovin and Regehr as applied to claim 13 above, and further in view of Nia et al. (US Pub. No. 2022/0129791 A1 – art of record – herein after Nia).
Regarding claim 14, the computing device of claim 13,
Nia discloses
wherein the target reduction algorithm comprises a historical estimation algorithm, and the historical estimation algorithm determines whether the intermediate test result meets the reduction condition (the training set for workload/OS models 320 is labeled, by an administrator, with the workload types and/or operating systems running on the server device at the time the historical utilization data was gathered – See paragraphs [0154-0155]) by:
according to historical intermediate results and historical test results respectively corresponding to a plurality of historical parameters (A best trained Random Forest ML model is selected, from a set of models resulting from the training phase, to be the basis for instances of a trained ML model. In some embodiments, training data is pre-processed prior to labeling the training data that will be used to train the Random Forest ML model. The preprocessing may include cleaning the readings for null values, normalizing the data, downsampling the features, etc. – See paragraphs [0164-0165]), estimating an estimated test result corresponding to the intermediate test result (the evaluation of the black-box ML model is limited to data over which the black-box ML model was trained. This reduces the information that can be extracted about the black-box ML model's local behavior because the data that has previously been seen by the black-box ML model may not cover the data space thoroughly, which prevents the surrogate ML model trained on the previously-seen subset of data from accurately representing the behavior of the black-box ML model in the full data sample feature space – See paragraph [0018]);
determining whether the estimated test result is matched with a result threshold (a radius of a hypersphere, in a data sample feature space, is determined, where the hypersphere encompasses a plurality of known data samples, and where a first prediction, by the trained black-box ML model, for a first data sample of the plurality of known data samples differs, by at least a threshold amount, from a second prediction, by the trained black-box ML model, for a second data sample of the plurality of known data samples – See paragraph [0064-0066]);
in response to a determination that the estimated test result is not matched with the result threshold, determining that the intermediate test result meets the reduction condition (a first prediction, by the trained black-box ML model, for a first data sample of the plurality of known data samples differs, by at least a threshold amount, from a second prediction, by the trained black-box ML model, for a second data sample of the plurality of known data samples – see paragraph [0064]); or
in response to a determination that the estimated test result is matched with the result threshold, determining that the intermediate test result fails to meet the reduction condition (Training may cease when the error stabilizes (i.e., ceases to reduce) or vanishes beneath a threshold (i.e., approaches zero) – See paragraphs [0152-0124]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Nia’s teaching into Golovin’s and Regehr’s inventions because doing so would enhance Golovin and Regehr by enabling a reduction in the amount of computation needed to apply or train a neural network, since fewer nodes means fewer activation values and/or fewer derivative values need be computed during training, as suggested by Nia (paragraphs [0148-0150]).
Conclusion
12. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Eberlein et al. (US Pub. No. 2020/0183812 A1) discloses test optimization based on actual use of configuration parameters. Actions include receiving a parameter set from a monitoring system, the parameter set including multiple configuration parameters corresponding to development artifacts detected by the monitoring system, retrieving statistical data from a central data analysis infrastructure, the statistical data being retrieved from application systems executing software created out of the development artifacts – See Abstract and specification for more details.
Hicks et al. (US Pub. No. 2021/0286713) discloses executing, by the testing system, the minimal set of tests on the SUT for analyzing a soft failure of the SUT in the active environment. The soft failure occurs in the active environment during execution of the SUT based at least in part on a performance parameter of the active environment – See Abstract and specification for more details.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MONGBAO NGUYEN whose telephone number is (571)270-7180. The examiner can normally be reached Monday-Friday 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hyung S. Sough can be reached at 571-272-6799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MONGBAO NGUYEN/ Examiner, Art Unit 2192