Prosecution Insights
Last updated: April 19, 2026
Application No. 18/460,636

METHOD OF INTEGRALLY OPTIMIZING PARAMETERS

Non-Final OA §103
Filed: Sep 04, 2023
Examiner: FEREJA, SAMUEL D
Art Unit: 2487
Tech Center: 2400 — Computer Networks
Assignee: Industry Academy Cooperation Foundation Of Sejong University
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 75% (above average; 458 granted / 614 resolved; +16.6% vs TC avg)
Interview Lift: +11.8% (moderate, roughly +12%, among resolved cases with interview)
Avg Prosecution: 2y 8m (typical timeline); 66 applications currently pending
Career History: 680 total applications across all art units
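The headline figures above are internally consistent and can be reproduced from the raw counts shown (458 granted of 614 resolved, 66 pending of 680 total, +11.8 point interview lift). A quick sketch, assuming the displayed percentages are simple rounded ratios:

```python
granted, resolved = 458, 614           # career counts shown above
pending, total = 66, 680
interview_lift = 11.8                  # percentage-point lift with interview

allow_rate = 100 * granted / resolved  # career allow rate
print(round(allow_rate))               # -> 75, the displayed allow rate

# Resolved plus currently pending should account for all applications.
print(resolved + pending == total)     # -> True

# Grant probability with an interview, as shown in the projections.
print(round(allow_rate + interview_lift))  # -> 86
```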

Statute-Specific Performance

§101: 3.6% (-36.4% vs TC avg)
§103: 64.1% (+24.1% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 7.9% (-32.1% vs TC avg)
Deltas shown relative to a Tech Center average estimate • Based on career data from 614 resolved cases
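The per-statute deltas imply a single Tech Center baseline: subtracting each delta from the examiner's rate gives 40.0% in every row. A small sketch, assuming the deltas are simple percentage-point differences:

```python
# (examiner rate %, delta vs Tech Center average %) per statute, from the table above
stats = {
    "101": (3.6, -36.4),
    "103": (64.1, +24.1),
    "102": (13.8, -26.2),
    "112": (7.9, -32.1),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # implied Tech Center average estimate
    print(f"S{statute}: TC avg ~ {tc_avg:.1f}%")  # 40.0% for each statute
```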

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) was submitted on 09/04/2023. The submission is in compliance with the provisions of 37 CFR § 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 6-10 are rejected under 35 U.S.C. 103 as being unpatentable over Zappella et al. (US 20250013899, hereinafter Zappella) in view of Lambert et al. (US 20240037367, hereinafter Lambert).

Regarding Claim 1, Zappella in view of Lambert discloses a method of integrally optimizing parameters ([0042], FIG. 2, Bayesian hyperparameter optimization), the method comprising: performing training on a machine learning model by selecting sensor parameters and machine learning model hyperparameters until a predetermined termination condition is satisfied ([0042], FIG. 1, FIG. 2, Step 200, determining an objective function to optimize, which is the validation error of a model 113 of a model training system 130 undergoing training using a Bayesian optimizer to optimize hyperparameters); and determining, among the selected sensor parameters and machine learning model hyperparameters, an optimized sensor parameter and optimized machine learning model hyperparameter that minimize a loss value for the machine learning model ([0042], FIG. 1, FIG. 2, the model training system 130 undergoing training using the Bayesian optimizer to optimize hyperparameters 140 of the model training system 130 in order to minimize validation error), wherein the performing of the training on the machine learning model includes: selecting the sensor parameters and machine learning model hyperparameters that satisfy a predetermined optimization range ([0044], FIG. 2, Step 210, initialize probabilistic models of the objective function and constraint functions, including evaluation of the objective function at one or more points; one or more points may be selected through a random search, and a set of operational metrics is determined and provided to the Bayesian optimizer); and performing training on the machine learning model based on sensor data provided from a sensor by the selected sensor parameters and the machine learning model hyperparameters ([0041], FIG. 1, the Bayesian optimizer 150 employs probabilistic models 155, in combination with an analysis of Constraints 112 and Metrics 145 at Constraint evaluator 152, to determine hyperparameters 140 and direct the model training system 130 to use the determined set of hyperparameters to perform a training operation using the training data 111 to generate additional metrics 145).
Zappella does not explicitly disclose integrally optimizing sensor parameters. Lambert teaches optimizing sensor parameters ([0150], controller(s) 936 provide signals for controlling one or more components and/or systems of vehicle 900 in response to sensor data received from one or more sensors (e.g., sensor inputs)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of integrally optimizing sensor parameters as taught by Lambert ([0150]) into the machine learning system of Zappella in order to provide systems for improving control of systems with high accuracy, efficiency and safety (Lambert, [0003]).

Regarding Claim 2, Zappella in view of Lambert discloses the method of claim 1. Lambert discloses wherein the sensor parameter includes at least one of a sampling frequency, a measurement range, and sensor sensitivity ([0148], a brake sensor system 946 may be used to operate vehicle brakes in response to receiving signals from brake actuator(s) 948 and/or brake sensors; [0149], one or more onboard (e.g., integrated) computing devices that process sensor signals to enable autonomous driving and/or to assist a human driver in driving vehicle 900; [0155], cameras with an RCCC, an RCCB, and/or an RBGC color filter array to increase light sensitivity). The same obviousness rationale and motivation apply as used above in claim 1.

Regarding Claim 3, Zappella in view of Lambert discloses the method of claim 1. Zappella discloses wherein the machine learning model hyperparameter includes at least one of an epoch, a batch size, and a learning rate ([0040], FIG. 1, optimization of hyperparameters for training of a machine learning system with constraints; various types of requests may be submitted to the machine learning system 110 to train and/or execute machine learning models with constraints).
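The cited Zappella passages describe wrapping model training in an optimization loop over a mixed search space of sensor parameters (claim 2) and model hyperparameters (claim 3), keeping whichever configuration minimizes the loss. A purely illustrative sketch: the search space, the toy objective, and the random-search strategy (a stand-in for Zappella's Gaussian-process Bayesian optimizer) are all hypothetical, not from the record:

```python
import random

# Mixed search space: sensor parameters alongside model hyperparameters.
SEARCH_SPACE = {
    "sampling_frequency_hz": [50, 100, 200],   # sensor parameter
    "sensor_sensitivity":    [0.5, 1.0, 2.0],  # sensor parameter
    "learning_rate":         [1e-3, 1e-2],     # model hyperparameter
    "batch_size":            [16, 32, 64],     # model hyperparameter
    "epochs":                [5, 10],          # model hyperparameter
}

def validation_loss(cfg):
    """Hypothetical black-box objective standing in for actual model training."""
    return (abs(cfg["sampling_frequency_hz"] - 100) / 100
            + abs(cfg["sensor_sensitivity"] - 1.0)
            + cfg["learning_rate"] * 10
            + 32 / cfg["batch_size"] / cfg["epochs"])

def optimize(budget=50, seed=0):
    rng = random.Random(seed)
    best_cfg, best_loss = None, float("inf")
    for _ in range(budget):  # termination condition: fixed evaluation budget
        cfg = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        loss = validation_loss(cfg)
        if loss < best_loss:  # keep the configuration that minimizes the loss
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss

best_cfg, best_loss = optimize()
print(best_cfg, best_loss)
```

A Bayesian optimizer would replace the `rng.choice` sampling with an acquisition function over a probabilistic surrogate of `validation_loss`, but the select-train-evaluate-keep-best loop is the same shape.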
Regarding Claim 6, an analogous rejection to that of Claim 1 applies. Lambert further discloses a preprocessing filter ([0018], a Gaussian process surrogate model may be created for y(x) and iteratively updated by evaluating the black-box function at new points. Points are selected by optimizing an acquisition function which trades off exploration and exploitation. For example, for a black-box function of the validation error of a deep neural network (DNN) as a function of hyperparameters x, the DNN may be trained using newly selected points to determine various operational metrics).

Regarding Claim 7, Zappella in view of Lambert discloses the method of claim 6. Lambert discloses wherein the preprocessing filter includes at least one of an interval average filter, a Gaussian filter, a maximum value filter, and a minimum value filter ([0018], as cited for claim 6).

Regarding Claim 8, an analogous rejection to that of Claim 1 applies.
Zappella in view of Lambert further discloses indicating, for example, how many training iterations have been run, the current status of determined metrics, such as quality of the objective function and operational constraint values, and/or the current sampling weights assigned to the different training examples (Zappella: [0065]), and that inference and/or training logic 615 may be used in a system for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein (Lambert: [0153], FIG. 9A).

Regarding Claim 9, Zappella in view of Lambert discloses the method of claim 8, and discloses wherein the performing of the training on the machine learning model includes generating training data by applying the weight to the sensor data, and performing training on the machine learning model using the training data (Zappella: [0065]; Lambert: [0153], FIG. 9A, as cited for claim 8).

Regarding Claim 10, Zappella in view of Lambert discloses the method of claim 8, and discloses wherein the performing of the training on the machine learning model includes performing training on the machine learning model by selecting a sensor providing sensor data used for training from a sensor candidate group, and the optimized weight is an optimized weight for the sensor data provided by the selected sensor (Zappella: [0065]; Lambert: [0153], FIG. 9A, as cited for claim 8).

Claims 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Zappella et al. (US 20250013899, hereinafter Zappella) in view of Lambert et al. (US 20240037367, hereinafter Lambert) and further in view of Gullikson et al.
(US 20230110056, hereinafter Gullikson).

Regarding Claim 4, Zappella in view of Lambert discloses the method of claim 1, but does not explicitly disclose wherein the performing of the training on the machine learning model includes performing training on the machine learning model by selecting preprocessing filters used for preprocessing the sensor data from a preprocessing filter candidate group until the termination condition is satisfied, and the determining of the optimized sensor parameter and optimized machine learning model hyperparameter includes determining an optimized preprocessing filter that minimizes the loss value among the selected preprocessing filters. Gullikson teaches these limitations ([0050], FIG. 1, preprocessor 104 modifies and supplements the sensor data to generate preprocessed data for an anomaly detection model 106, such as filtering operations to remove outlying data samples, to reduce or limit bias (e.g., due to sensor drift or predictable variations), to remove sets of samples associated with particular events (such as data samples during a start-up period or during a known failure event), denoising, etc.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of selecting preprocessing filters as taught by Gullikson ([0050]) into the machine learning system of Zappella and Lambert in order to provide systems for automatically generating and training the trained behavior model based on historic data, reducing the time and expense involved in processes with multiple normal operational states and the downstream effects of errors introduced by imputation of residual values, and allowing the risk score calculator to calculate risk scores based on residual data corresponding to imputed values (Gullikson, [0004]).

Regarding Claim 5, Zappella in view of Lambert discloses the method of claim 1, but does not explicitly disclose wherein the performing of the training on the machine learning model includes performing training on the machine learning model by selecting a sensor providing sensor data used for training from a sensor candidate group until the termination condition is satisfied, and the optimized sensor parameter is an optimized sensor parameter for the selected sensor. Gullikson teaches these limitations ([0050], FIG. 1, as cited for claim 4). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Gullikson ([0050]) into the machine learning system of Zappella and Lambert for the same reasons given for claim 4 (Gullikson, [0004]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Samuel D Fereja, whose telephone number is (469) 295-9243. The examiner can normally be reached 8AM-5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, DAVID CZEKAJ, can be reached at (571) 272-7327.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SAMUEL D FEREJA/
Primary Examiner, Art Unit 2487
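Claim 7's preprocessing filter candidate group names four concrete filter types: interval average, Gaussian, maximum value, and minimum value. Minimal illustrative implementations; the window sizes and Gaussian kernel are hypothetical choices, not taken from the application or the cited art:

```python
def interval_average_filter(xs, n=3):
    # Average over consecutive, non-overlapping intervals of n samples.
    return [sum(xs[i:i + n]) / len(xs[i:i + n]) for i in range(0, len(xs), n)]

def gaussian_filter(xs, kernel=(0.25, 0.5, 0.25)):
    # Smooth with a small normalized Gaussian kernel; edges are padded
    # by repeating the boundary samples.
    pad = [xs[0]] + list(xs) + [xs[-1]]
    return [sum(w * pad[i + j] for j, w in enumerate(kernel)) for i in range(len(xs))]

def maximum_value_filter(xs, w=1):
    # Each output is the maximum over a window of +/- w neighbors.
    return [max(xs[max(0, i - w):i + w + 1]) for i in range(len(xs))]

def minimum_value_filter(xs, w=1):
    # Each output is the minimum over a window of +/- w neighbors.
    return [min(xs[max(0, i - w):i + w + 1]) for i in range(len(xs))]

signal = [1.0, 4.0, 2.0, 8.0, 3.0, 5.0]
print(interval_average_filter(signal))  # -> [2.3333333333333335, 5.333333333333333]
print(maximum_value_filter(signal))     # -> [4.0, 4.0, 8.0, 8.0, 8.0, 5.0]
```

Under claims 4 and 6, a candidate group would simply be a list of such callables from which the optimization loop selects.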

Prosecution Timeline

Sep 04, 2023
Application Filed
Mar 11, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597264
Method for Calibrating an Assistance System of a Civil Motor Vehicle
2y 5m to grant Granted Apr 07, 2026
Patent 12598318
METHOD AND SYSTEM-ON-CHIP FOR PERFORMING MEMORY ACCESS CONTROL WITH LIMITED SEARCH RANGE SIZE DURING VIDEO ENCODING
2y 5m to grant Granted Apr 07, 2026
Patent 12593018
SYSTEM AND METHOD FOR CONTROLLING PERCEPTUAL THREE-DIMENSIONAL ELEMENTS FOR DISPLAY
2y 5m to grant Granted Mar 31, 2026
Patent 12593036
METHOD AND APPARATUS FOR PROCESSING VIDEO SIGNAL
2y 5m to grant Granted Mar 31, 2026
Patent 12591123
METHOD FOR DETERMINING SLOPE OF SLIDE IN SLIDE SCANNING DEVICE, METHOD FOR CONTROLLING SLIDE SCANNING DEVICE AND SLIDE SCANNING DEVICE USING THE SAME
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview: 86% (+11.8%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 614 resolved cases by this examiner. Grant probability derived from career allow rate.
