Prosecution Insights
Last updated: April 19, 2026
Application No. 18/665,446

BENCHMARK PROGRAM OPTIMIZATION

Final Rejection (§103)
Filed: May 15, 2024
Examiner: MYERS, PAUL R
Art Unit: 2176
Tech Center: 2100 — Computer Architecture & Software
Assignee: Hewlett Packard Enterprise Development LP
OA Round: 2 (Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 6m
Grant Probability With Interview: 92%

Examiner Intelligence

Career Allow Rate: 79%, above average (606 granted / 768 resolved; +23.9% vs TC avg)
Interview Lift: +13.6% (moderate), resolved cases with interview vs. without
Typical Timeline: 2y 6m avg prosecution; 19 currently pending
Career History: 787 total applications across all art units

Statute-Specific Performance

§101: 1.4% (-38.6% vs TC avg)
§103: 64.8% (+24.8% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 768 resolved cases.
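As a consistency check, the per-statute deltas above can be folded back into the implied Tech Center averages. The additive relationship (rate minus delta equals TC average) is an assumption about how the dashboard computes its comparison; the rates and deltas themselves are copied from the table.

```python
# Implied Tech Center averages, assuming the "vs TC avg" figure is a
# simple additive delta: TC average = statute rate - delta.
# All numbers are copied from the table above.
stats = {
    "101": (1.4, -38.6),
    "103": (64.8, +24.8),
    "102": (12.9, -27.1),
    "112": (8.0, -32.0),
}
implied = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

Every statute implies the same 40.0% baseline, which is at least consistent with the deltas being measured against a single shared Tech Center estimate.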

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Hereinafter, "it would have been obvious" should be read as "it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention".

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Regarding applicant's arguments concerning the newly added claim limitation of "wherein a number of runs for each parameter set is adaptively determined based on a stability heuristic comprising calculating a statistical measure of variability across results from the multiple runs and determining that stability is achieved in response to the statistical measure falling below a predetermined threshold": Schran et al. teaches ([0022] "Any appropriate performance metrics 130 may be used, including download throughput speed (measured in bytes received per second), upload throughput speed (measured in bytes transmitted per second), latency (measured in milliseconds of ping time), and stability (measured in the percentage of network data packets lost and/or retransmitted)"). Schran et al. also teaches the performance metric in relation to a threshold ([0041] "network configuration settings that have achieved a particular threshold weighted percentage score by the Scoring Formula, such as 90%"). However, while Schran et al. implies details of the stability in relation to a threshold determining the number of iterations, it does not expressly give those details.
Therefore, multiple references are cited that teach stability in relation to a threshold determining the number of iterations, such as de Alfaro et al. PN 8,781,990, which teaches (Column 4, line 44 et seq.: "As the iterative process proceeds until the determined scores individually stabilize or converge. The process may perform a fixed number of iterations, or it may perform as many iterations as needed, until the difference between the quantities computed in two successive iteration steps is below a pre-defined threshold").

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 5, 7-8, 12, 14-15, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Schran et al. PN 2002/0138443 in view of Paterson PN 2002/0193979, Richman et al. PN 5,655,148, and de Alfaro et al. PN 8,781,990.
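The disputed limitation, read with de Alfaro's convergence test, amounts to re-running a measurement until variability across the collected runs drops below a threshold. A minimal sketch, assuming a coefficient-of-variation measure and illustrative names (`run_until_stable`, `benchmark`, the 2% threshold), none of which appear in the application or the cited art:

```python
import statistics

def run_until_stable(benchmark, params, threshold=0.02, min_runs=3, max_runs=30):
    """Sketch of the claimed stability heuristic: re-run a parameter set
    until a statistical measure of variability across the results falls
    below a predetermined threshold (here, coefficient of variation)."""
    results = []
    for _ in range(max_runs):
        results.append(benchmark(params))
        if len(results) >= min_runs:
            mean = statistics.fmean(results)
            # One possible "statistical measure of variability across
            # results from the multiple runs": stdev normalized by mean.
            cv = statistics.stdev(results) / mean if mean else float("inf")
            if cv < threshold:
                break  # stability achieved: measure fell below threshold
    return results
```

With a deterministic benchmark the loop stops at `min_runs`; a noisy benchmark keeps accumulating runs until the spread tightens or `max_runs` is reached, which is the adaptive run count the claim recites.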
In regard to claims 1, 7, 14: Schran et al. teaches a computing device (115) to determine an optimal configuration (Title: "Configuration Settings That Provide Optimal Network Performance") of a benchmark (the network), the computing device comprising: a non-transitory computer-readable storage media configured to store programming instructions (claim 29: "a computer-readable medium holding computer-executable instructions for performing a method comprising: (a) establishing a network connection between the client machine and a remote server; (b) selecting a plurality of network configuration settings for the client machine; (c) automatically conducting one or more performance tests using the selected network configuration settings;"); and a processor coupled to the non-transitory computer-readable storage media ([0020] "An application program (client software) running on a processor within the client machine performs a set of tests to determine optimal network configuration settings"), wherein the programming instructions, when executed by the processor, cause the processor to: iteratively ([0029] "The monitoring/optimizing process may cease after a fixed number of iterations, or after maximum optimization benefits according to weighted score have been reached. In step 315, a multitude of tests are performed to determine the optimal network configuration settings for the client machine") select a unique configuration ([0042] "A large enough set must be supplied to give statistical significance to the resulting effect on the performance metrics by a particular network configuration setting variable. As in any statistical analysis, the more variables, i.e., network configuration settings, are present, the more test data, i.e., performance metrics based on sets of network configuration settings, must be supplied to determine the effect of any one variable on the test results") of the benchmark, run the benchmark as configured for each configuration at least once, determine an evaluation score (score) for each parameter set from one or more runs using an evaluation heuristic, record an optimal configuration with the best evaluation score (Abstract: "An algorithm is used to determine the best network configuration settings and achieve the desired network performance characteristics for the computer based on preferences specified by the user." [0029] "The monitoring/optimizing process may cease after a fixed number of iterations, or after maximum optimization benefits according to weighted score have been reached"), and configure the benchmark based on the optimal configuration ([0039] "The remote server stores the percentage score results for various network configuration settings for one or more machines and may use the stored results to determine the set of network configuration settings to provide to client machine(s) in the future").

Schran et al. teaches multiple configurations but does not state that a configuration is a set of parameters. Paterson teaches that a set of parameters defines a configuration ([0042] "These alternative parameter values can be grouped into different sets of parameter values that can be used to define different configurations of the computer model 302"). It would have been obvious to use a set of parameters to define a configuration because this is what a configuration is.

Schran et al. teaches the benchmark being a network and configuring a network but does not expressly state the network is a "program". Richman et al. teaches configuring a network program.
("In response to loading a particular type of device driver, specifically one of the network drivers 402, the network software configuration process is initiated by the configuration manager 158 issuing an enumerate command to the enumerator for the interface 412, the network driver interface 403. Accordingly, the installation of a network adapter card as one of the devices 20 within the computer 8 serves as the catalyst for configuring the network software routines at boot time."). It would have been obvious to have the configured network be a program because this is what is routinely configured to "configure a network".

Schran et al. teaches ([0022] "Any appropriate performance metrics 130 may be used, including download throughput speed (measured in bytes received per second), upload throughput speed (measured in bytes transmitted per second), latency (measured in milliseconds of ping time), and stability (measured in the percentage of network data packets lost and/or retransmitted)"). Schran et al. also teaches the performance metric in relation to a threshold ([0041] "network configuration settings that have achieved a particular threshold weighted percentage score by the Scoring Formula, such as 90%"). Schran et al. does not expressly give details of the stability in relation to a threshold determining the number of iterations.
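The claim-1 loop mapped above (iteratively select a unique parameter set, run the benchmark at least once, score the run with an evaluation heuristic, record the best configuration) can be sketched as follows. The grid-style enumeration and every name here are illustrative assumptions, not taken from Schran or the application:

```python
import itertools

def find_optimal_configuration(param_space, run_benchmark, score):
    """Sketch of the claimed loop: enumerate unique configurations
    (parameter sets), run the benchmark per configuration, score each
    result, and record the configuration with the best score."""
    best_config, best_score = None, float("-inf")
    keys = sorted(param_space)
    for values in itertools.product(*(param_space[k] for k in keys)):
        config = dict(zip(keys, values))  # one unique parameter set
        result = run_benchmark(config)    # at least one run per config
        s = score(result)                 # evaluation heuristic
        if s > best_score:                # record the best so far
            best_config, best_score = config, s
    return best_config, best_score
```

Exhaustive enumeration is only one way to "iteratively select a unique configuration"; the search-strategy dependent claims (Bayesian, genetic, active learning) would replace the `itertools.product` line with a smarter selector.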
De Alfaro et al. teaches wherein a number of runs ("as many iterations as needed") for each parameter set ("value unreliability 210, user accuracy 215, value probability 220, and consensus value 225 are determined regularly") is adaptively determined based on a stability heuristic ("determined scores individually stabilize or converge") comprising calculating a statistical measure (score) of variability across results ("between the quantities computed in two successive iteration steps") from the multiple runs ("successive iteration steps") and determining that stability is achieved in response to the statistical measure falling below a predetermined threshold ("difference between the quantities computed in two successive iteration steps is below a pre-defined threshold"). It would have been obvious to compare stability to a threshold across successive runs because this would have assured the network has at least a desired stability.

In regard to claims 2, 8, 15: Schran et al. teaches a predetermined number of runs ("a fixed number of iterations"). De Alfaro et al. teaches the number of runs is determined by the stability heuristic, a predetermined number of runs, and a duration ("In yet another alternative, the value probabilities 220 for values proposed by the user are weighted based on the elapsed time since the user proposed that value with value probabilities for more recent proposed values weighted more heavily").

In regard to claims 5, 12, 19: Schran et al. teaches the score is based on desirability (Abstract: "An algorithm is used to determine the best network configuration settings and achieve the desired network performance characteristics for the computer based on preferences specified by the user").

Claims 3, 9, 16 are rejected under 35 U.S.C.
103 as being unpatentable over Schran et al. PN 2002/0138443 in view of Paterson PN 2002/0193979, Richman et al. PN 5,655,148, and de Alfaro et al. PN 8,781,990 as applied to claim 2 above, and further in view of Kim PN 2013/0166486.

In regard to claims 3, 9, 16: Schran et al. teaches a weighted score compared to a limit but does not teach a "normalized standard error". Kim teaches predicting data trends using a normalized standard error ([0055] "The second constraint or condition that can be considered is the stability of the data trend lines H.sub.2 to H.sub. (shown in FIG. 3). In other words, a data trend line can be considered to be stable if its stability is greater than a threshold. Data trend lines that are not stable are eliminated pursuant to the second constraint. The stability can, for example, be a normalized standard error (stderr) based on the average of y"). It would have been obvious to use a normalized standard error because this would have allowed for predicting the trends of the configuration data so that a cutoff on searching the optimal configuration could be reached.

Claims 4, 10, 17 are rejected under 35 U.S.C. 103 as being unpatentable over Schran et al. PN 2002/0138443 in view of Paterson PN 2002/0193979, Richman et al. PN 5,655,148, and de Alfaro et al. PN 8,781,990 as applied to claim 2 above, and further in view of Nagatani et al. PN 2024/0165634.

In regard to claims 4, 10, 17: Schran et al. teaches an active learning search algorithm ("The Intelligent version of the Active-Learning Support Algorithm searches the database of past results for that connection type to provide a representative range of network configuration settings that have achieved a particular threshold weighted percentage score by the Scoring Formula, such as 90%"). Schran et al. does not teach any of the listed other search methods.
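Kim's "normalized standard error based on the average of y" can be approximated as the standard error of the mean divided by the mean. Treating "stable" as that normalized error falling below a threshold is an assumed (inverse but equivalent) framing of Kim's "stability greater than a threshold", and the 5% cutoff is illustrative:

```python
import statistics

def is_stable(ys, threshold=0.05):
    """Sketch of a normalized-standard-error stability test: a series is
    treated as stable when stderr / |mean(y)| stays below a threshold."""
    if len(ys) < 2:
        return False  # not enough data to estimate variability
    mean = statistics.fmean(ys)
    if mean == 0:
        return False  # normalization undefined at zero mean
    stderr = statistics.stdev(ys) / len(ys) ** 0.5  # standard error of the mean
    return stderr / abs(mean) < threshold           # normalized by average of y
```

A tightly clustered series such as `[10.0, 10.1, 9.9]` passes, while a widely spread one such as `[1.0, 5.0, 9.0]` does not, which is the kind of cutoff the examiner reads onto terminating the configuration search.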
Nagatani et al. teaches methods of optimizing including multiple choices, such as ([0095] "The optimization model 410 can determine optimized crusher settings using, for example, Bayesian algorithms, genetic algorithms, active learning, Q learning, or any combination of these"). It would have been obvious to use any of the claimed search methods because they are similar methods of searching.

Claims 6, 13, 20 are rejected under 35 U.S.C. 103 as being unpatentable over Schran et al. PN 2002/0138443 in view of Paterson PN 2002/0193979, Richman et al. PN 5,655,148, and de Alfaro et al. PN 8,781,990 as applied to claim 2 above, and further in view of Sedeghi Esfahani et al. PN 2024/0248947.

In regard to claims 6, 13, 20: Paterson does teach multiple types of parameters but does not teach limiting the types of parameters, and does not list the types of parameters as including continuous, discrete, or categorical. Sedeghi Esfahani et al. teaches ([0101] "At 310, the processor selects a sampling distribution based on the variable type" "where the variable type is discrete, the sampling distribution selected may be the SoftMax distribution" "One-hot constraints refers to the transformation of categorical variables into binary variables by assigning a 0/1 or True/False dummy binary variable to each category" "where the variable type is continuous, sampling on the conditional probability may be done by slice sampling" [0290] "The updated value may also be selected based on the current value of each constraint and the type of each constraint as above"). It would have been obvious to select values for the variables/parameters based upon the type of variable because this would have prevented crossing variable types.

Claims 11, 18 are rejected under 35 U.S.C.
103 as being unpatentable over Schran et al. PN 2002/0138443 in view of Paterson PN 2002/0193979, Richman et al. PN 5,655,148, and de Alfaro et al. PN 8,781,990 as applied to claim 2 above, and further in view of Koch et al. PN 2018/0240041.

In regard to claims 11, 18: Schran et al. teaches predefined groups of configuration settings as opposed to "regularization" or "gradient descent". Koch et al. teaches optimizing a configuration using machine learning including regularization ([0003] "For example, a depth of a decision tree model type, a number of trees in a forest model type, a number of hidden layers and neurons in each layer in a neural network model type, and a degree of regularization to prevent overfitting are a few examples of quantities that are provided as inputs to train a predictive model."). It would have been obvious to use regularization because this would prevent overshooting.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Multiple references are cited that teach stability in relation to a threshold determining the number of iterations in a program. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL R MYERS, whose telephone number is (571) 272-3639. The examiner can normally be reached via telework, M-F, starting 7-8 and leaving 4-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jaweed Abbaszadeh, can be reached at 571-270-1640. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Paul R. MYERS/
Primary Examiner, Art Unit 2176

Prosecution Timeline

May 15, 2024: Application Filed
Sep 22, 2025: Non-Final Rejection — §103
Dec 10, 2025: Response Filed
Jan 06, 2026: Final Rejection — §103
Apr 07, 2026: Request for Continued Examination
Apr 11, 2026: Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591288: CONTROL METHOD FOR DETECTING SYSTEM, DETECTING SYSTEM AND VEHICLE
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12585477: AUTOMATED GENERATION AND EXECUTION OF APPLICATION PROGRAMMING INTERFACE CALLS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12572487: I/O UNIT, MASTER UNIT, AND COMMUNICATIONS SYSTEM
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12561263: I/O UNIT
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12554307: PRESENCE DETECTION POWER EFFICIENCY IMPROVEMENTS
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 79%
With Interview: 92% (+13.6%)
Median Time to Grant: 2y 6m
PTA Risk: Moderate
Based on 768 resolved cases by this examiner. Grant probability derived from career allow rate.
