DETAILED ACTION
Status of Claims
Claims 1, 9, 16 are amended.
Claims 1-2, 4-5, 7-10, 12-17, 19-20 are pending.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/10/2025 has been entered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4-5, 7-10, 12-17, 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Grau (US 20210018590) in view of Englard (US 20190178988).
Regarding Claims 1, 9, and 16, Grau teaches the following limitations:
A computing system of a vehicle, comprising: a processor; and memory that stores instructions that, when executed by the processor, cause the processor to perform acts comprising: (Grau - [0084], [0115])
A computer-readable storage medium comprising instructions that, when executed by the processor, cause the processor to perform acts, the acts comprising: (Grau - [0084], [0115])
A method, comprising: (Grau - [0086])
receiving first sensor data generated by a radar system of the vehicle when using an initial radar parameter setting; (Grau – [0034] An input set may include sensor data, such as image data, radar data, [0065] the sensor system may detect information (e.g. such as around a vicinity of a vehicle) and output sensor data representing the detected information. The perception module 504 may receive the outputted data from the sensor sub-module 502 and perform one or more perception operations. These perception operations may include, but are not limited to, detection of an object within the sensor data and/or detection of one or more object attributes (e.g. velocity, acceleration, distance, criticality, etc.). [0086] vehicle 100 may include… one or more radar sensors 110,)
identifying a state associated with the first sensor data; (Grau – [0034], [0065] sensor data evaluation system… monitoring system 506 may be configured to detect a sensor deficiency)
wherein the state comprises a classification of the ambient environment in which the radar system is operating, and (Grau – [0034] Various aspects described herein may utilize one or more classification models. In a classification model, the outputs may be restricted to a limited set of values (e.g., one or more classes). The classification model may output a class for an input set of one or more input values. An input set may include sensor data, such as image data, radar data, LIDAR data and the like. A classification model as described herein may, for example, classify certain driving conditions and/or environmental conditions, such as weather conditions, road conditions, and the like.)
wherein the state is identified based on what the radar system is able to measure in the ambient environment in which the vehicle is operating when using the initial radar parameter setting; (Grau – [0034], [0056] utilize synthetic data generation with an updated sensor model to overcome and compensate for differences between the original sensor data used for training and validation and the current sensor, which may exhibit altered characteristics. [0067] Upon detecting a sensor deficiency of sufficient magnitude, the system may declare a failure 508. A state is being considered as a system failure or not a system failure.)
modifying the radar system to operate using a radar parameter setting outputted by a computer-implemented model, (Grau – [0047] one or more processors 102 may process sensory information (such as images, radar signals, [0069] If a sensor deficiency is detected, one or more processors may initiate a sensor reconfiguration 510 [0070] Following reconfiguration 510, the system may perform revalidation 514. The revalidation 514 may include, generating from a first testing data output a second testing data output,)
the radar parameter defining operational characteristics for the radar system for a subsequent measurement by the radar system, (Grau – [0047], [0069])
the radar parameter setting being outputted by the computer-implemented model responsive to the state being inputted to the computer-implemented model, (Grau – [0084] The one or more processors may be configured to provide the scenarios to the sensors by means of rendering, such as using computer graphics or other sensor simulation techniques, e.g. for radar, audio and other.)
wherein the computer-implemented model is trained based on simulation sensor data that satisfies a goal function generated from virtual driving simulation in a virtual environment by: (Grau – [0029] A trained machine learning model may be used during an inference phase to make predictions or decisions based on input data. [0079] the update sensor model can be used in data synthesis to simulate the estimated alterations. This may allow the system to re-validate itself)
performing a simulated driving maneuver based on an output of a perception system of the vehicle that processes the simulation sensor data; and (Grau – [0065], [0103] In Example 4, the sensor data evaluation device of any one of examples 1 to 3 is disclosed, wherein the one or more processors are further configured to send a signal representing an instruction to a vehicle to stop operation if the determined difference is outside of the one or more predetermined ranges or does not satisfy the one or more predetermined criteria. [0114] In Example 15, the sensor data evaluation device of any one of examples 1 to 14 is disclosed, wherein the first testing data and/or the second testing data are simulator data.)
scoring an outcome of the simulated driving maneuver against the goal function to generate a quantitative performance score, (Grau – [0114], [0029] Various aspects of the disclosure herein may utilize one or more machine learning models to perform or control functions of the vehicle (or other functions described herein). The term “model” may, for example, used herein may be understood as any kind of algorithm, which provides output data from input data (e.g., any kind of algorithm generating or calculating output data from input data). A machine learning model may be executed by a computing system to progressively improve performance of a specific task. In some aspects, parameters of a machine learning model may be adjusted during a training phase based on training data. A trained machine learning model may be used during an inference phase to make predictions or decisions based on input data. In some aspects, the trained machine learning model may be used to generate additional training data. An additional machine learning model may be adjusted during a second training phase based on the generated additional training data. A trained machine learning model may be used during an inference phase to make predictions or decisions based on input data. [0048] Furthermore, the safety system 200 may include a driving model, [0051] The safety system 200 may in general generate data to control or assist to control the ECU and/or other components of the vehicle 100 to directly or indirectly control the driving of the vehicle 100. [0076-0078] The system may include a world simulation/rendering engine 608, which may be configured to render sensor data output corresponding to output from a first sensor (e.g. a new sensor, a sensor-deficiency-free sensor, or a working sensor) receiving the sensor input from one or more scenarios of the scenario database 606. 
[0114] In Example 15, the sensor data evaluation device of any one of examples 1 to 14 is disclosed, wherein the first testing data and/or the second testing data are simulator data.)
wherein the quantitative performance score provides a feedback signal for training the computer-implemented model; (Grau – [0029], [0114], [0001] Various aspects of this disclosure generally relate to sensor error detection, sensor degradation detection, perception system error detection, perception system retraining and/or perception system re-validation [0089] if the magnitude of difference between the first testing data and the second testing data is outside of a predetermined threshold, one or more processors may be configured to send a signal representing an instruction to a vehicle to stop operation.)
receiving second sensor data generated by the radar system of the vehicle when the radar system is operating using the radar parameter setting outputted by the computer-implemented model; and (Grau – [0077] receiving the sensor input from one or more scenarios of the scenario database 606. Second sensor data is implied.)
controlling the vehicle to perform a driving maneuver based on the second sensor data generated by the radar system when the radar system is operating using the radar parameter setting. (Grau – [Fig. 2], [0051])
Grau does not explicitly teach the following limitations, however Englard, in the same field of endeavor, teaches:
second sensor data (Englard – [0006] a second set of training data that includes (i) second sensor data indicative of real or simulated vehicle environments, the second sensor data corresponding to a second setting of the one or more sensor parameters, [0035] The sensors may be any type or types of sensors capable of sensing an environment through which the vehicle is moving, such as lidar, radar,)
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the perception system of Grau with the second sensor data of Englard in order to output signals indicative of the current state of the vehicle's environment (Englard – [0036]).
the operational characteristics comprising at least one of an amplitude setting, a beamsteering setting, a signal waveform setting, or a phase offset setting, (Englard – [0074] In operation, the light source 210 emits an output beam of light 225 which may be continuous-wave, pulsed, or modulated in any suitable manner for a given application. [0077] More particularly, the controller 250 may analyze the time of flight or phase modulation for the beam of light 225 transmitted by the light source 210. [0114] The perception signals 506 and (in some embodiments) prediction signals 522 are input to a sensor control component 530, which processes the signals 506, 522 to generate sensor control signals 532 that control one or more parameters of one or more of the sensors 502. In particular, the sensor control component 530 controls scan line distributions (and possibly also other parameters) of one or more sensor devices that operate by probing the environment with multiple scan lines. For example, the sensor control component 530 may control scan line distributions of one or more lidar devices (e.g., the lidar system 200 of FIGS. 2 and 3) and/or radar devices. [0124] for a lidar or radar device, the sensor control component 630 may adjust the spatial distribution of scan lines produced by the device (with a higher density of scan lines at a desired area of focus), the center of the field of regard of the device, and/or horizontal and/or vertical widths of the field of regard.)
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the perception system of Grau with the scan line control parameters of Englard in order to adjust the spatial distribution of scan lines (Englard – [0124]).
Regarding Claims 2, 10, and 17, Grau further teaches:
wherein the state is identified via a convolutional neural network, wherein the state is outputted by the convolutional neural network responsive to the first sensor data being inputted to the convolutional neural network. (Grau – [0034], [0036] such as a convolutional neural network)
Regarding Claims 4, 12, and 19, Grau further teaches:
wherein the acts further comprise: modifying respective radar parameter settings for each antenna of a multi-antenna radar system. (Grau – [0024])
Regarding Claims 5, 13, and 20, Grau further teaches:
wherein the radar parameter setting is determined based on iteratively simulating a set of radar parameter settings and selecting the radar parameter setting that results in an optimal score associated with the goal function. (Grau – [0029], [0031] Training may include iterating through training instances and using an objective function to teach the model to predict the output for new inputs, [0084])
Regarding Claims 7 and 15, Grau further teaches:
wherein the computer-implemented model is trained using reinforcement learning in simulation. (Grau – [0030] reinforcement learning techniques.)
Regarding Claims 8 and 14, Grau further teaches:
wherein the acts further comprise: performing a perception subtask based on the second sensor data utilizing the perception system, wherein the perception subtask comprises at least one of an object detection task, a freespace estimation task, a lane detection task, and a SLAM task, and (Grau – [0061] One or more components of the sensor subsystem 404 may send the sensor data to the perception unit 410. The perception unit 410 may include a sensor processing unit 411 and a sensor processing unit 412. The sensor processing units 411 and 412 may perform one or more perception operations on the sensor data. Such perception operations may include, but are not limited to, recognizing an object within the sensor data. Second sensor data is implied.)
wherein the vehicle is controlled to perform the driving maneuver based on output of the perception subtask. (Grau – [0027], [0052] autonomous vehicle operations may rely on one or more sensors to output sensor data for use in one or more perception operations.)
based on the simulation sensor data generated using the set of radar parameter settings. (Grau – [0027], [0052], [0076-0078])
Grau does not explicitly teach the following limitations, however Englard, in the same field of endeavor, teaches:
second sensor data (Englard – [0006])
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the perception system of Grau with the second sensor data of Englard in order to output signals indicative of the current state of the vehicle's environment (Englard – [0036]).
Response to Arguments
Applicant's arguments, see Pages 7-8, filed 12/10/2025, with respect to the rejection under 35 U.S.C. § 103 have been fully considered but are not persuasive. Applicant argues that Grau does not teach "a classification of the ambient environment." The examiner disagrees. Grau [0034] is cited in the Office action as teaching this limitation explicitly: the classification model may classify "environmental conditions, such as weather conditions, road conditions, and the like," and the classification model takes as an input set "sensor data, such as image data, radar data, LIDAR data and the like." These many inputs into the sensor data allow Grau to "utilize synthetic data generation with an updated sensor model to overcome and compensate for differences between the original sensor data used for training and validation and the current sensor, which may exhibit altered characteristics." No teaching of Grau limits Grau's "state" to the hardware. Instead, Grau teaches many models used to detect and overcome "sensor deficiencies," including outputs from a "classification model" used as an input in "radar data," which is therefore considered a "radar parameter."
Applicant argues, see Pages 8-9, that "the 'stop operation' in Grau (Paragraphs [0089] and [0104]) is not a 'simulated maneuver' whose outcome is 'scored' to provide a feedback signal for training the computer-implemented model." The examiner disagrees. Grau [0114] is cited in the Office action to teach that the maneuver described in Grau [0103] is simulated: the simulator data is the first and second testing data, and that data would include ALL of examples 1 through 59. Further citations are presented to map more explicitly the steps undertaken by Grau to arrive at a simulated maneuver. The "scoring" and "goal function" are interpreted as a broad analysis of data that maps to a threshold used to determine whether a stop operation should take place (the score), and this step serves to simulate the results (the goal), when considered in conjunction with Grau [0114].
Applicant argues, see Page 9, that "Grau does not disclose training a model to output radar parameter settings (e.g., waveform, phase offset) to actively optimize the sensor's hardware operation." This is not a claim limitation; and even if this limitation were presented in the claims, it would be taught by Grau [0029] in conjunction with any of the defined models and the perception system retraining performed in order to carry out the sensor reconfiguration taught by Grau [0069]. The amendments do not further limit the scoring system except to require "a feedback signal for training the computer-implemented model," which is taught by Grau [0089] when considering that the stop operation is simulated (Grau [0103], [0114]). The citations above have been restructured and include further citations only to assist in clarification, with the one explicit example of a simulated vehicle maneuver. Grau [0048] and [0051] provide a more general application, which would render further "simulated driving maneuvers" an obvious application.
Applicant argues, see Pages 9-10, that "there is no motivation to combine these references." The examiner disagrees. Englard provides an obvious combination teaching well-known multi-sensor radar characteristics being utilized for training data. Grau teaches generic sensor data (Grau [0034]), and Englard provides the specific sensor data that remedies the deficiencies of Grau when further determining a vehicle's environment and the sensor direction (scan lines).
Applicant's arguments, see Page 10, filed 12/10/2025, with respect to the rejection under 35 U.S.C. § 103 have been fully considered but are not persuasive. Applicant argues that the dependent claims are allowable due to their dependency on the independent claims. The examiner disagrees for the reasons set forth in the above rejections.
Applicant's remaining arguments amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.
Conclusion
The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure or directed to the state of the art, is listed on the enclosed PTO-892.
The following is a brief description for relevant prior art that was cited but not applied:
Kristensen (US 20210286923) describes a method of generative learning that utilizes real-world sensor data and virtual sensor data.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRANDON JAMES HENSON whose telephone number is (703)756-1841. The examiner can normally be reached Monday-Friday 9:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Resha H. Desai can be reached at (571) 270-7792. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRANDON JAMES HENSON/Examiner, Art Unit 3648
/RESHA DESAI/Supervisory Patent Examiner, Art Unit 3648