DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) was submitted on 09/04/2023. The submission is in compliance with the provisions of 37 CFR § 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 6-10 are rejected under 35 U.S.C. 103 as being unpatentable over Zappella et al. (US 20250013899, hereinafter Zappella) in view of Lambert et al. (US 20240037367, hereinafter Lambert).

Regarding Claim 1, Zappella in view of Lambert discloses a method of integrally optimizing parameters ([0042], FIG. 2, Bayesian hyperparameter optimization), the method comprising: performing training on a machine learning model by selecting sensor parameters and machine learning model hyperparameters until a predetermined termination condition is satisfied ([0042], FIG. 1, FIG. 2, Step 200, determining an objective function to optimize, which is the validation error of a model 113 of a model training system 130 undergoing training using a Bayesian optimizer to optimize hyperparameters); and determining, among the selected sensor parameters and machine learning model hyperparameters, an optimized sensor parameter and optimized machine learning model hyperparameter that minimize a loss value for the machine learning model ([0042], FIG. 1, FIG. 2, the model training system 130 undergoing training using the Bayesian optimizer to optimize hyperparameters 140 of the model training system 130 in order to minimize validation error), wherein the performing of the training on the machine learning model includes: selecting the sensor parameters and machine learning model hyperparameters that satisfy a predetermined optimization range ([0044], FIG. 2, Step 210, initialize probabilistic models of the objective function and constraint functions, including evaluation of the objective function at one or more points; one or more points may be selected through a random search to determine a set of operational metrics that are provided to the Bayesian optimizer); and performing training on the machine learning model based on sensor data provided from a sensor by the selected sensor parameters and the machine learning model hyperparameters ([0041], FIG. 1, the Bayesian optimizer 150 employs probabilistic models 155, in combination with an analysis of Constraints 112 and Metrics 145 at Constraint evaluator 152, to determine hyperparameters 140 and direct the model training system 130 to use the determined set of hyperparameters to perform a training operation using the training data 111 to generate additional metrics 145).
Zappella does not explicitly disclose integrally optimizing sensor parameters. Lambert teaches optimizing sensor parameters ([0150], controller(s) 936 provide signals for controlling one or more components and/or systems of vehicle 900 in response to sensor data received from one or more sensors (e.g., sensor inputs)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the integral optimization of sensor parameters as taught by Lambert ([0150]) into the machine learning system of Zappella in order to provide systems for improving control of systems with high accuracy, efficiency, and safety (Lambert, [0003]).

Regarding Claim 2, Zappella in view of Lambert discloses the method of claim 1. Lambert discloses wherein the sensor parameter includes at least one of a sampling frequency, a measurement range, and sensor sensitivity ([0148], a brake sensor system 946 may be used to operate vehicle brakes in response to receiving signals from brake actuator(s) 948 and/or brake sensors; [0149], one or more onboard (e.g., integrated) computing devices that process sensor signals to enable autonomous driving and/or to assist a human driver in driving vehicle 900; [0155], cameras with an RCCC, an RCCB, and/or an RBGC color filter array to increase light sensitivity). The same rationale and motivation for obviousness as applied above in claim 1 apply here.

Regarding Claim 3, Zappella in view of Lambert discloses the method of claim 1. Zappella discloses wherein the machine learning model hyperparameter includes at least one of an epoch, a batch size, and a learning rate ([0040], FIG. 1, optimization of hyperparameters for training of a machine learning system with constraints; various types of requests may be submitted to the machine learning system 110 to train and/or execute machine learning models with constraints).
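The joint optimization flow described in the passages cited above (selecting sensor parameters and model hyperparameters within predetermined ranges, training, and keeping the setting that minimizes validation loss) can be sketched as follows. This is an illustrative sketch only: all parameter names, ranges, and the loss function are hypothetical and do not come from either reference, and a simple random search stands in for Zappella's Bayesian optimizer.

```python
import random

# Hypothetical joint search space over sensor parameters and model
# hyperparameters; values are illustrative, not taken from the references.
SEARCH_SPACE = {
    "sampling_hz":   [50, 100, 200],      # sensor parameter
    "sensitivity":   [0.5, 1.0, 2.0],     # sensor parameter
    "learning_rate": [1e-3, 1e-2, 1e-1],  # model hyperparameter
    "batch_size":    [16, 32, 64],        # model hyperparameter
}

def validation_loss(params):
    # Stand-in for training the model on sensor data collected with
    # `params` and measuring validation error (the objective function).
    return (params["sampling_hz"] - 100) ** 2 / 1e4 + params["learning_rate"]

def optimize(n_trials=20, seed=0):
    rng = random.Random(seed)
    best_params, best_loss = None, float("inf")
    # Loop until the termination condition is satisfied (here: a trial budget).
    for _ in range(n_trials):
        trial = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        loss = validation_loss(trial)
        if loss < best_loss:  # keep the setting that minimizes the loss value
            best_params, best_loss = trial, loss
    return best_params, best_loss

params, loss = optimize()
```

A Bayesian optimizer, as in Zappella, would replace the random sampling with a probabilistic surrogate model and an acquisition function, but the select-train-evaluate-keep-best loop is the same.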
Regarding Claim 6, an analogous rejection to that of Claim 1 applies. Lambert further discloses a preprocessing filter ([0018], a Gaussian process surrogate model may be created for y(x) and iteratively updated by evaluating the black-box function at new points. Points are selected by optimizing an acquisition function which trades off exploration and exploitation. For example, for a black-box function of the validation error of a deep neural network (DNN) as a function of hyperparameters x, the DNN may be trained using newly selected points to determine various operational metrics).

Regarding Claim 7, Zappella in view of Lambert discloses the method of claim 6. Lambert discloses wherein the preprocessing filter includes at least one of an interval average filter, a Gaussian filter, a maximum value filter, and a minimum value filter ([0018], a Gaussian process surrogate model may be created for y(x) and iteratively updated by evaluating the black-box function at new points. Points are selected by optimizing an acquisition function which trades off exploration and exploitation. For example, for a black-box function of the validation error of a deep neural network (DNN) as a function of hyperparameters x, the DNN may be trained using newly selected points to determine various operational metrics).

Regarding Claim 8, an analogous rejection to that of Claim 1 applies.
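The four preprocessing filter types recited in claim 7 above (interval average, Gaussian, maximum value, and minimum value filters) can be sketched as sliding-window operations over a 1-D sensor signal. This is a minimal illustrative sketch; the window sizes and Gaussian sigma are assumed values, not parameters taken from any of the cited references.

```python
import math

def _windows(signal, size):
    # Yield a centered window around each sample, clipped at the edges.
    half = size // 2
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        yield signal[lo:hi]

def interval_average_filter(signal, size=3):
    # Mean of each window (a moving/interval average).
    return [sum(w) / len(w) for w in _windows(signal, size)]

def max_filter(signal, size=3):
    # Maximum value within each window.
    return [max(w) for w in _windows(signal, size)]

def min_filter(signal, size=3):
    # Minimum value within each window.
    return [min(w) for w in _windows(signal, size)]

def gaussian_filter(signal, size=5, sigma=1.0):
    # Weighted average with Gaussian weights, renormalized at the edges.
    half = size // 2
    kernel = [math.exp(-(k * k) / (2 * sigma * sigma))
              for k in range(-half, half + 1)]
    out = []
    for i in range(len(signal)):
        num = den = 0.0
        for k, w in zip(range(-half, half + 1), kernel):
            j = i + k
            if 0 <= j < len(signal):
                num += w * signal[j]
                den += w
        out.append(num / den)
    return out

noisy = [0.0, 1.0, 0.0, 5.0, 0.0, 1.0, 0.0]
smoothed = interval_average_filter(noisy)  # attenuates the spike at index 3
```

In a candidate-group scheme such as that claimed, each of these filters would be tried as a preprocessing step and the one minimizing the model's loss retained.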
Zappella in view of Lambert further discloses indicating, for example, how many training iterations have been run, the current status of determined metrics, such as the quality of the objective function and operational constraint values, and/or the current sampling weights assigned to the different training examples (Zappella: [0065]), and that inference and/or training logic 615 may be used in a system for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein (Lambert: [0153], FIG. 9A).

Regarding Claim 9, Zappella in view of Lambert discloses the method of claim 8, and further discloses wherein the performing of the training on the machine learning model includes generating training data by applying the weight to the sensor data, and performing training on the machine learning model using the training data (Zappella: [0065], indicating for example how many training iterations have been run, current status of determined metrics, such as quality of the objective function and operational constraint values, and/or the current sampling weights assigned to the different training examples; Lambert: [0153], FIG. 9A, inference and/or training logic 615 may be used in a system for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein).

Regarding Claim 10, Zappella in view of Lambert discloses the method of claim 8, and further discloses wherein the performing of the training on the machine learning model includes performing training on the machine learning model by selecting a sensor providing sensor data used for training from a sensor candidate group, and the optimized weight is an optimized weight for the sensor data provided by the selected sensor (Zappella: [0065], indicating for example how many training iterations have been run, current status of determined metrics, such as quality of the objective function and operational constraint values, and/or the current sampling weights assigned to the different training examples; Lambert: [0153], FIG. 9A, inference and/or training logic 615 may be used in a system for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein).

Claims 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Zappella et al. (US 20250013899, hereinafter Zappella) in view of Lambert et al. (US 20240037367, hereinafter Lambert) and further in view of Gullikson et al.
(US 20230110056, hereinafter Gullikson).

Regarding Claim 4, Zappella in view of Lambert discloses the method of claim 1, but does not explicitly disclose wherein the performing of the training on the machine learning model includes performing training on the machine learning model by selecting preprocessing filters used for preprocessing the sensor data from a preprocessing filter candidate group until the termination condition is satisfied, and the determining of the optimized sensor parameter and optimized machine learning model hyperparameter includes determining an optimized preprocessing filter that minimizes the loss value among the selected preprocessing filters. Gullikson teaches wherein the performing of the training on the machine learning model includes performing training on the machine learning model by selecting preprocessing filters used for preprocessing the sensor data from a preprocessing filter candidate group until the termination condition is satisfied, and the determining of the optimized sensor parameter and optimized machine learning model hyperparameter includes determining an optimized preprocessing filter that minimizes the loss value among the selected preprocessing filters ([0050], FIG. 1, preprocessor 104 modifies and supplements the sensor data to generate preprocessed data for an anomaly detection model 106, such as filtering operations to remove outlying data samples, to reduce or limit bias (e.g., due to sensor drift or predictable variations), to remove sets of samples associated with particular events (such as data samples during a start-up period or during a known failure event), denoising, etc.
). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the selection of preprocessing filters as taught by Gullikson ([0050]) into the machine learning system of Zappella and Lambert in order to provide systems for automatically generating and training the trained behavior model based on historic data, reducing the time and expense involved in processes with multiple normal operational states and the downstream effects of errors introduced by imputation of residual values, and allowing the risk score calculator to calculate risk scores based on residual data corresponding to imputed values (Gullikson, [0004]).

Regarding Claim 5, Zappella in view of Lambert discloses the method of claim 1, but does not explicitly disclose wherein the performing of the training on the machine learning model includes performing training on the machine learning model by selecting a sensor providing sensor data used for training from a sensor candidate group until the termination condition is satisfied, and the optimized sensor parameter is an optimized sensor parameter for the selected sensor. Gullikson teaches wherein the performing of the training on the machine learning model includes performing training on the machine learning model by selecting a sensor providing sensor data used for training from a sensor candidate group until the termination condition is satisfied, and the optimized sensor parameter is an optimized sensor parameter for the selected sensor ([0050], FIG. 1, preprocessor 104 modifies and supplements the sensor data to generate preprocessed data for an anomaly detection model 106, such as filtering operations to remove outlying data samples, to reduce or limit bias (e.g., due to sensor drift or predictable variations), to remove sets of samples associated with particular events (such as data samples during a start-up period or during a known failure event), denoising, etc.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the selection of preprocessing filters as taught by Gullikson ([0050]) into the machine learning system of Zappella and Lambert in order to provide systems for automatically generating and training the trained behavior model based on historic data, reducing the time and expense involved in processes with multiple normal operational states and the downstream effects of errors introduced by imputation of residual values, and allowing the risk score calculator to calculate risk scores based on residual data corresponding to imputed values (Gullikson, [0004]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Samuel D Fereja, whose telephone number is (469) 295-9243. The examiner can normally be reached 8AM-5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, DAVID CZEKAJ, can be reached at (571) 272-7327.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SAMUEL D FEREJA/
Primary Examiner, Art Unit 2487