Prosecution Insights
Last updated: April 19, 2026
Application No. 17/834,873

AUTOMATED DISCOVERY AND DESIGN PROCESS BASED ON BLACK-BOX OPTIMIZATION WITH MIXED INPUTS

Status: Non-Final OA (§103)
Filed: Jun 07, 2022
Examiner: CHOI, YUK TING
Art Unit: 2164
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 3 (Non-Final)

Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% — above average (466 granted / 652 resolved; +16.5% vs TC avg)
Interview Lift: +37.4% — grant rate of resolved cases with interview vs. without
Typical Timeline: 3y 3m average prosecution; 29 currently pending
Career History: 681 total applications across all art units

Statute-Specific Performance

§101: 16.8% (-23.2% vs TC avg)
§103: 55.0% (+15.0% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 6.8% (-33.2% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 652 resolved cases

Office Action

§103
DETAILED ACTION

Continued Examination Under 37 CFR 1.114

1. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/28/2025 has been entered.

2. This Office action is in response to applicant's communication filed on 11/28/2025 in response to the PTO Office action mailed on 08/28/2025. The applicant's remarks and amendments to the claims and/or the specification were considered, with the results as follows.

3. In response to the last Office action, claims 1, 2, 6, 8-10, 13 and 15-18 are amended. No claims are added or canceled. As a result, claims 1-20 are pending in this Office action.

Response to Arguments

4. Applicant's arguments with respect to claims 1-20 have been considered but are moot in view of the new ground(s) of rejection.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 4, 8, 10, 11, 15, 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Sohn et al. (US 2020/0080744 A1), hereinafter Sohn, in view of Salhov et al. (US 2022/299952 A1), hereinafter Salhov.

Referring to claims 1 and 15, Sohn discloses a computer-implemented method (See para. [0004] and para. [0007], training an artificial neural network based on data built up through machine learning and deriving a second artificial neural network based on the training data for optimal solutions), the computer-implemented method comprising: receiving historical data including a first set of input values and a first set of output values (See para. [0053]-para. [0059] and Figures 1 and 2, receiving data from a building energy management system concerning the HVAC system operation and thermal conditions, e.g., weather and zone temperatures of a target multi-zone building, the received data including a set of conditions and values, e.g., the input is the input power of the HVAC system and the thermal state of the building, and the output is the output temperatures for the building); incorporating the received historical data into a sampling design to form an initial dataset (See para. [0054], para. [0058] and para. [0059], receiving the data from the BEMS and generating a plurality of first training datasets); generating a surrogate model of the machine learning model (See para. [0054], para. [0058] and para. [0059], generating a first artificial neural model) by fitting the initial dataset using a rectified linear activation function (ReLU) deep neural network (See para. [0070] and para. [0076], the first artificial neural network includes a plurality of hidden layers which may use a sigmoid function or rectified linear unit function as an activation function); applying one or more mixed-integer linear programming techniques to the surrogate model (See para. [0050] and para. [0054], applying mixed-integer linear programming (MILP) to the artificial neural model using a feedback loop) […] and a second set of predicted input values […based on the applying of the one or more mixed-integer linear programming techniques to the surrogate model] (See para. [0050] and para. [0054], determining a profile of a variety of electricity prices and the thermal state of the building [e.g., a second set of predicted input values] based on applying the MILP); testing the machine learning model, from which the surrogate model is generated, using the second set of predicted input values […] (See para. [0050], para. [0074], para. [0102] and para. [0103], testing the second artificial neural network, which is obtained from the first artificial neural network model, using various profiles of electricity prices and environment variables and the corresponding optimal schedule for the indoor temperature and the input power); generating a second set of output values (See para. [0050] and para. [0103]-para. [0105], generating a corresponding indoor temperature output through a zone temperature determination model which trains the second artificial neural network model); and testing a sample that incorporates the second set of predicted input values (See para. [0104], para. [0105] and Figure 12, evaluating the second neural network by using the inputs including power, indoor temperature, operating cost and a weighted sum of penalty terms on all daily profiles in the input data, and obtaining an optimal demand response schedule for the input power and indoor temperature based on the predicted values of the electricity price and environment variable for the next scheduling period; for example, when a price profile 1210 and an environment variable 1220 are inputted, the optimal power supply and the optimal temperature for each zone can be derived based on the second artificial neural network and the first artificial neural networks).

Sohn does not explicitly disclose applying mixed-integer linear programming techniques including domain knowledge-based constraints to a model such that a second set of predicted input values is determined. Salhov discloses applying mixed-integer linear programming techniques (See para. [0069], applying linear programming techniques) including domain knowledge-based constraints (See para. [0155], the controller neural network is configured to satisfy one or more constraints or objective functions) to a model such that a second set of predicted input values is determined; testing the machine learning model using the second set of predicted input values as input to the machine learning model such that, in response, the machine learning model generates a second set of output values (See para. [0273] and para. [0295], the predictive optimization process may include optimizer 1918 providing predictor neural network 1916 with a proposed set of values for the decision variables [e.g., the MV moves or values of the MV's over the time horizon] and using predictor neural network 1916 to predict the values of the CV's that will result from the proposed set of values for the decision variables.
Optimizer 1918 may use objective function 1928 to determine the value of the control objective as a function of the proposed MV's, predicted CV's, and DV's over the duration of the optimization period. Optimizer 1918 may iteratively adjust the proposed MV moves using an optimization algorithm (e.g., zero-order algorithms, first-order algorithms, second-order algorithms, etc.) with the goal of optimizing (e.g., minimizing or maximizing) the value of objective function 1928. The result of the predictive optimization process is a set of optimal MV moves or values of the MV's [i.e., values of the decision variables in the predictive optimization]. Control signal generator 1920 may receive the optimal MV moves or optimal values of the MV's from optimizer 1918 and use the optimal MV moves or values of the MV's to generate control signals for controllable equipment 1926. These and other features of predictive controller 1914 are described in greater detail below. where DV.sub.1 is a vector containing a value for the first DV at each time step t=1 h, DV.sub.2 is a vector containing a value for the second DV at each time step t=1 h, and DV.sub.m is a vector containing a value for the m'th DV at each time step t=1 h, where m is the total number of DV's forecasted. Optimizer 1918 can be configured to execute the predictive optimization process using any of a variety of optimization techniques. Examples of optimization techniques that can be used by optimizer 1918 include zero-order, first-order, or second-order optimization algorithms. In some embodiments, optimizer 1918 performs the optimization process iteratively to converge on the optimal set of values for the MV's or MV moves as the iterations progress) and testing a sample that incorporates the second set of output values (See para. [0247] any of the values of CV's, MV's and DV's can be predicted, calculated, inferred, estimated, or interpolated at any point in real-time and/or via querying historical time data. 
For example, a predictive model [e.g., neural network, etc.] is configured to receive multiple data points [e.g., data samples, etc.] of an MV at a rate that is lower than preferred. As such, the predictive model makes an inference as to the value of the MV based on the inferential functionality performed by the predictive model. In some embodiments, the inferential functionality is performed by using linear regression, nonlinear regression, weighted interpolation, extrapolation, neural networks, or any combination thereof). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention was made to modify the mixed-integer linear programming techniques to determine a second set of predicted input values, as taught by Salhov. A skilled artisan would have been motivated to perform the predictive optimization process, which includes setting the values of the one or more manipulated variables to adjusted values, using the neural network model to generate predicted values of the one or more controlled variables predicted to result from operating the controllable equipment in accordance with the adjusted values of the one or more manipulated variables until the control objective has converged upon an optimal value (See Salhov, para. [0006]). In addition, both references (Salhov and Sohn) are directed to analogous art and the same field of endeavor, such as outputting an optimal cost structure. This close relation between the references highly suggests an expectation of success.

As to claim 3, Sohn discloses wherein the first set of input values includes user-defined constraints as side constraints (See para. [0085]-para. [0090], some of the user-defined constraints can be set).
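The loop recited in claims 1 and 15 (fit a ReLU surrogate to historical data, then search it under domain-knowledge constraints for the next set of predicted inputs) can be sketched as follows. Everything here is an illustrative assumption: the hand-set weights stand in for a fitted network, the constraint is hypothetical, and a production system would encode the ReLU units as big-M mixed-integer constraints for a MILP solver rather than enumerating a grid.

```python
from itertools import product

# Illustrative one-hidden-layer ReLU surrogate with hand-set weights
# (a stand-in for a network fitted to the historical dataset).
W1 = [[1.0, -1.0], [0.5, 1.0]]   # hidden-layer weights
b1 = [-1.0, 0.5]                 # hidden-layer biases
W2 = [1.0, -2.0]                 # output-layer weights
b2 = 0.0

def surrogate(x):
    """Forward pass: ReLU hidden layer, then a linear output unit."""
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return sum(w * hi for w, hi in zip(W2, h)) + b2

def domain_ok(x):
    """Hypothetical domain-knowledge constraint: x1 + x2 <= 6."""
    return x[0] + x[1] <= 6

# Mixed integer inputs: enumerate a small grid in place of a MILP solve.
candidates = [x for x in product(range(5), range(5)) if domain_ok(x)]
best = min(candidates, key=surrogate)   # predicted next inputs to test
print(best, surrogate(best))            # → (2, 4) -11.0
```

The minimizer `best` plays the role of the "second set of predicted input values"; the real machine learning model would then be evaluated at `best` to produce the second set of output values.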
As to claim 4, Sohn discloses wherein the input values are selected from the group consisting of continuous values, integer values and categorical values (See para. [0072], the thermal state of the building including one of an atmospheric temperature, daylight hours, a wind force, a humidity, a thermal load on the building and/or building usage schedules; these factors have different value ranges).

Referring to claim 8, Sohn discloses a computer program product (See para. [0004] and para. [0007], training an artificial neural network based on data built up through machine learning and deriving a second artificial neural network based on the training data for optimal solutions), comprising: one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media (See para. [0112], the computer readable recording medium comprises computer-readable instructions), the program instructions comprising: receiving historical data including a first set of input values and a first set of output values (See para. [0053]-para. [0059] and Figures 1 and 2, receiving data from a building energy management system concerning the HVAC system operation and thermal conditions, e.g., weather and zone temperatures of a target multi-zone building, the received data including a set of conditions and values, e.g., the input is the input power of the HVAC system and the thermal state of the building, and the output is the output temperatures for the building); incorporating the received historical data into a sampling design to form an initial dataset (See para. [0054], para. [0058] and para. [0059], receiving the data from the BEMS and generating a plurality of first training datasets); generating a surrogate model of a machine learning model (See para. [0054], para. [0058] and para. [0059], generating a first artificial neural model) by fitting the initial dataset using a rectified linear activation function (ReLU) deep neural network (See para. [0070] and para. [0076], the first artificial neural network includes a plurality of hidden layers which may use a sigmoid function or rectified linear unit function as an activation function); applying one or more mixed-integer linear programming techniques to the surrogate model (See para. [0050] and para. [0054], applying mixed-integer linear programming (MILP) to the artificial neural model using a feedback loop) […] second set of predicted input values […based on the applying of the one or more mixed-integer linear programming techniques to the surrogate model] (See para. [0050] and para. [0054], determining a profile of a variety of electricity prices and the thermal state of the building [e.g., a second set of predicted input values] based on applying the MILP); testing the machine learning model, from which the surrogate model is generated, using the determined second set of predicted input values […] (See para. [0074], para. [0102] and para. [0103], testing the deep neural network, which is obtained from the first artificial neural network model, using various profiles of electricity prices and environment variables and the corresponding optimal schedule for the indoor temperature and the input power); […] testing a sample that incorporates the second set of predicted input values (See para. [0104], para. [0105] and Figure 12, evaluating the deep neural network by using the inputs including power, indoor temperature, operating cost and a weighted sum of penalty terms on all daily profiles in the input data, and obtaining an optimal demand response schedule for the input power and indoor temperature based on the predicted values of the electricity price and environment variable for the next scheduling period; for example, when a price profile 1210 and an environment variable 1220 are inputted, the optimal power supply and the optimal temperature for each zone can be derived based on a deep neural network and artificial neural networks).

Sohn does not explicitly disclose applying mixed-integer linear programming techniques including domain knowledge-based constraints to a model such that a second set of predicted input values is determined. Salhov discloses applying mixed-integer linear programming techniques (See para. [0069], applying linear programming techniques) including domain knowledge-based constraints (See para. [0155], the controller neural network is configured to satisfy one or more constraints or objective functions) to a model such that a second set of predicted input values is determined; testing the machine learning model using the second set of predicted input values as input to the machine learning model such that, in response, the machine learning model generates a second set of output values (See para. [0273] and para. [0295], the predictive optimization process may include optimizer 1918 providing predictor neural network 1916 with a proposed set of values for the decision variables [e.g., the MV moves or values of the MV's over the time horizon] and using predictor neural network 1916 to predict the values of the CV's that will result from the proposed set of values for the decision variables.
Optimizer 1918 may use objective function 1928 to determine the value of the control objective as a function of the proposed MV's, predicted CV's, and DV's over the duration of the optimization period. Optimizer 1918 may iteratively adjust the proposed MV moves using an optimization algorithm (e.g., zero-order algorithms, first-order algorithms, second-order algorithms, etc.) with the goal of optimizing (e.g., minimizing or maximizing) the value of objective function 1928. The result of the predictive optimization process is a set of optimal MV moves or values of the MV's [i.e., values of the decision variables in the predictive optimization]. Control signal generator 1920 may receive the optimal MV moves or optimal values of the MV's from optimizer 1918 and use the optimal MV moves or values of the MV's to generate control signals for controllable equipment 1926. These and other features of predictive controller 1914 are described in greater detail below. where DV.sub.1 is a vector containing a value for the first DV at each time step t=1 h, DV.sub.2 is a vector containing a value for the second DV at each time step t=1 h, and DV.sub.m is a vector containing a value for the m'th DV at each time step t=1 h, where m is the total number of DV's forecasted. Optimizer 1918 can be configured to execute the predictive optimization process using any of a variety of optimization techniques. Examples of optimization techniques that can be used by optimizer 1918 include zero-order, first-order, or second-order optimization algorithms. In some embodiments, optimizer 1918 performs the optimization process iteratively to converge on the optimal set of values for the MV's or MV moves as the iterations progress) and testing a sample that incorporates the second set of output values (See para. [0247] any of the values of CV's, MV's and DV's can be predicted, calculated, inferred, estimated, or interpolated at any point in real-time and/or via querying historical time data. 
For example, a predictive model [e.g., neural network, etc.] is configured to receive multiple data points [e.g., data samples, etc.] of an MV at a rate that is lower than preferred. As such, the predictive model makes an inference as to the value of the MV based on the inferential functionality performed by the predictive model. In some embodiments, the inferential functionality is performed by using linear regression, nonlinear regression, weighted interpolation, extrapolation, neural networks, or any combination thereof). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention was made to modify the mixed-integer linear programming techniques to determine a second set of predicted input values, as taught by Salhov. A skilled artisan would have been motivated to perform the predictive optimization process, which includes setting the values of the one or more manipulated variables to adjusted values, using the neural network model to generate predicted values of the one or more controlled variables predicted to result from operating the controllable equipment in accordance with the adjusted values of the one or more manipulated variables until the control objective has converged upon an optimal value (See Salhov, para. [0006]). In addition, both references (Salhov and Sohn) are directed to analogous art and the same field of endeavor, such as outputting an optimal cost structure. This close relation between the references highly suggests an expectation of success.

Claims 6 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Sohn (US 2020/0080744 A1) in view of Salhov et al. (US 2022/299952 A1) and further in view of Moradian (US 2023/0185268 A1).

As to claims 6 and 13, Sohn discloses the specific output is a discovery selected from a hyper-parameter tuning for a neural network (See para. [0103] and para. [0104], training and testing on network parameters of a deep neural network). Sohn does not explicitly disclose wherein the specific output is a discovery selected from the group consisting of a new chemical compound, a materials design, a fabrication design, and a process design for a semiconductor device. Moradian discloses wherein the specific output is a discovery selected from the group consisting of a new chemical compound, a materials design, a fabrication design, and a process design for a semiconductor device (See para. [0020], para. [0024], para. [0110] and para. [0121]). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention was made to modify the output to be selected from other industrial fields, as taught by Moradian. A skilled artisan would have been motivated to apply machine learning discoveries to other industrial fields to assist engineers in developing processes that meet both material engineering and eco-efficiency specifications by leveraging sensor data, physics, models and algorithms (See Moradian, para. [0020]). In addition, all references (Moradian, Salhov and Sohn) are directed to analogous art and the same field of endeavor, such as outputting an optimal cost structure. This close relation between the references highly suggests an expectation of success.

Claims 2, 9 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Sohn (US 2020/0080744 A1) in view of Salhov et al. (US 2022/299952 A1) and further in view of Chong (US 2023/0015423 A1), hereinafter Chong.

As to claims 2, 9 and 16, Sohn in view of Salhov discloses that testing of the machine learning model comprises determining whether an output value, among the second set of output values, has been generated (See Salhov, para. [0247], any of the values of CV's, MV's and DV's can be predicted, calculated, inferred, estimated, or interpolated at any point in real-time and/or via querying historical time data. For example, a predictive model [e.g., neural network, etc.] is configured to receive multiple data points [e.g., data samples, etc.] of an MV at a rate that is lower than preferred. As such, the predictive model makes an inference as to the value of the MV based on the inferential functionality performed by the predictive model. In some embodiments, the inferential functionality is performed by using linear regression, nonlinear regression, weighted interpolation, extrapolation, neural networks, or any combination thereof). Sohn does not explicitly disclose wherein the optimal output is based on an undefined black-box function of the input values. Chong discloses the optimal output is based on an undefined black-box function of the input values (See para. [0062], the optimal input-output data is based on a black box that allows the evaluation of the objective and constraints for a particular input). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention was made to modify the optimal output of Sohn to be based on an undefined black-box function, as taught by Chong. A skilled artisan would have been motivated to utilize simulation optimization algorithm approaches to search for optimal input settings as opposed to mathematical programming, since simulation optimization does not assume that an algebraic description of the function is available (See Chong, para. [0062]). In addition, all references (Salhov, Chong and Sohn) are directed to analogous art and the same field of endeavor, such as outputting an optimal cost structure. This close relation between the references highly suggests an expectation of success.
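The limitation at issue in claims 2, 9 and 16 (determining whether a given output value has been generated among the second set of output values, with the model treated as an undefined black-box function evaluated only point-wise) reduces to evaluation plus a membership check. A minimal sketch, in which the black-box function, the predicted inputs, and the tolerance are all hypothetical stand-ins:

```python
def black_box(x):
    """Stand-in for the machine learning model under test; its algebraic
    form is assumed unknown to the optimizer (evaluated point-wise only)."""
    return (x - 3) ** 2

predicted_inputs = [1, 2, 3, 4]                      # second set of predicted inputs
outputs = [black_box(x) for x in predicted_inputs]   # second set of output values

def output_generated(target, outputs, tol=1e-9):
    """Determine whether `target` has been generated among the outputs."""
    return any(abs(y - target) <= tol for y in outputs)

print(output_generated(0, outputs))   # → True  (black_box(3) == 0)
print(output_generated(5, outputs))   # → False
```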
Claims 5, 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Sohn (US 2020/0080744 A1) in view of Salhov (US 2022/299952 A1) and further in view of Miller et al. (US 2013/003861 A1), hereinafter Miller.

As to claims 5, 12 and 19, Sohn does not explicitly disclose converting the input values to the integer values and setting the categorical values to integer levels. Miller discloses converting the input values to the integer values and setting the categorical values to integer levels (See para. [0038], converting an input to a number and/or converting one range of values to a second range of values using different categories, e.g., linear, bi-linear, cubic, etc.). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention was made to modify the input values of Sohn to convert the input values to integers and set the categorical values to integer levels, as taught by Miller. A skilled artisan would have been motivated to accommodate input data arriving in different formats to obtain meaningful data analysis results (See Miller, para. [0023]). In addition, all references (Salhov, Miller and Sohn) are directed to analogous art and the same field of endeavor, such as outputting an optimal data result. This close relation between the references highly suggests an expectation of success.

Claims 7, 14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sohn (US 2020/0080744 A1) in view of Salhov (US 2022/299952 A1) and further in view of Shrikumar et al. (US 2017/0249547 A1).

As to claims 7, 14 and 20, Sohn does not explicitly disclose selecting a feedforward deep neural network with a softplus activation function. Shrikumar discloses selecting a feedforward deep neural network with a softplus activation function; determining a solution point from the feedforward deep neural network; and using the determined solution point as an initial point for the ReLU deep neural network (See para. [0045], para. [0051] and para. [0153], selecting a feedforward neural network using a softplus function and determining an optimal alignment using ReLU). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention was made to modify the deep neural network of Sohn to a feedforward deep neural network with a softplus activation function, as taught by Shrikumar. A skilled artisan would have been motivated to modify the parameters or weights of an activation function to produce a desired set of outputs for a given set of inputs (See para. [0036]). In addition, all references (Salhov, Shrikumar and Sohn) are directed to analogous art and the same field of endeavor, such as outputting an optimal data result. This close relation between the references highly suggests an expectation of success.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YUK TING CHOI, whose telephone number is (571) 270-1637. The examiner can normally be reached Monday-Friday, 9am-6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, AMY NG, can be reached at 571-270-1698. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /YUK TING CHOI/Primary Examiner, Art Unit 2164
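The mixed-input handling addressed by the rejection of claims 5, 12 and 19 (converting input values to integers and setting categorical values to integer levels) amounts to a small encoder. A minimal sketch; the feature names, the level list, and the nearest-integer rounding rule are illustrative assumptions, not taken from the application or the cited references:

```python
def encode(value, levels=None):
    """Map one raw input to an integer: a categorical value becomes the
    index of its level; a numeric value is rounded to the nearest integer."""
    if levels is not None:
        return levels.index(value)   # categorical -> integer level
    return round(value)              # continuous  -> integer value

MATERIALS = ["steel", "copper", "polymer"]   # hypothetical categorical levels

sample = {"temperature": 21.7, "material": "copper", "batch": 3}
encoded = [encode(sample["temperature"]),
           encode(sample["material"], MATERIALS),
           encode(sample["batch"])]
print(encoded)   # → [22, 1, 3]
```

Encoding every feature to an integer this way lets continuous, integer and categorical inputs share one search space, which is what makes the surrogate amenable to mixed-integer programming in the first place.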

Prosecution Timeline

Jun 07, 2022 — Application Filed
May 05, 2025 — Non-Final Rejection (§103)
Jul 29, 2025 — Interview Requested
Aug 05, 2025 — Response Filed
Aug 26, 2025 — Final Rejection (§103)
Oct 16, 2025 — Interview Requested
Oct 28, 2025 — Response after Non-Final Action
Nov 28, 2025 — Request for Continued Examination
Dec 06, 2025 — Response after Non-Final Action
Feb 09, 2026 — Non-Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591610 — SYSTEMS AND METHODS FOR REMOVING NON-CONFORMING WEB TEXT
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12579156 — SYSTEMS AND METHODS FOR VISUALIZING ONE OR MORE DATASETS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12562753 — SYSTEM AND METHOD FOR MULTI-TYPE DATA COMPRESSION OR DECOMPRESSION WITH A VIRTUAL MANAGEMENT LAYER
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12536282 — METHODS AND APPARATUS FOR MACHINE LEARNING BASED MALWARE DETECTION AND VISUALIZATION WITH RAW BYTES
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12511258 — DYNAMIC STORAGE OF SEQUENCING DATA FILES
Granted Dec 30, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 99% (+37.4%)
Median Time to Grant: 3y 3m
PTA Risk: High

Based on 652 resolved cases by this examiner. Grant probability derived from career allow rate.
