DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
The pending application 18/064,428, filed on December 12, 2022, claims priority from foreign application DE 10 2021 214 760.7, filed on December 21, 2021, in the Federal Republic of Germany.
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 15 JAN 2026 has been entered.
Response to Amendment
Applicant's amendment filed on 15 JAN 2026 has been entered. Claims 1, 10, 12, 14 and 15 have been amended. Claims 4 and 5 have been cancelled. Claims 1-3 and 6-15 are still pending in this application, with claims 1, 10, 14 and 15 being independent.
Response to Arguments
Applicant’s arguments filed 15 JAN 2026 have been fully considered.
Applicant’s arguments with respect to claim(s) 1, 10, 14 and 15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Upon further consideration of the claims, the prior art, and applicant’s arguments, a new ground of rejection is applied to the claims based on a new interpretation of Chen et al. (US 2022/0196798 A1, previously relied upon by the examiner).
Applicant argues that the “Patent Office does not explain why [0222] and [0227] of Chen discloses sensor calibrations at all, much less the claimed ‘sensor calibration of the radar sensor or the plurality of radar sensors in the form of correlations…’” (applicant’s remarks, p. 8).
The instant office action relies upon paragraphs [0112], [0122], and [0128], where Chen et al. describes the use of a feedback controller to adjust reconfigurable radio parameters.
“According to an aspect of the disclosure, the system 301 may include a feedback controller 316, which may be configured to determine a plurality of reconfigurable radio parameters 317, for example, based on output 318 of the radar processor 309. The reconfigurable radio parameters 317 may include a waveform, a modulation, a center frequency, a bandwidth, a polarization, a beamforming directivity, phase and/or amplitude values, e.g., control signals to the radar frontend, for example a radiofrequency lens, antennas, transmitters and receivers, and/or any other additional or alternative parameters.” (Chen et al. ¶ [0112])
The reconfigurable radio parameters described above are considered to be calibration settings of the radar device.
“According to some aspects of the disclosure, the feedback controller 316 may be configured to determine the plurality of reconfigurable radio parameters 317, for example, based on a reliability indicator from radar processor 309.” (Chen et al. ¶ [0122])
The purpose of sensor calibration is to improve the accuracy and reliability of the sensor measurements. Therefore, the process of adjusting the reconfigurable radio parameters based on a reliability indicator is considered to be a calibration process.
“The feedback controller 316 may be configured to adaptively determine the plurality of reconfigurable radio parameters in real time based on previous radar perception data corresponding to previously processed digital radar samples.” (Chen et al. ¶ [0128])
Additionally, adaptively determining the plurality of reconfigurable radio parameters is considered to be a process of calibrating the radar device in the form of correlations between the previous radar perception data and the previously processed digital radar samples.
For these reasons, Chen et al. is considered to anticipate the argued features, and applicant’s arguments are considered moot.
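As a purely illustrative aside (not part of the record), the examiner's interpretation of the feedback process of Chen et al. ¶¶ [0112] and [0122] — treating the reconfigurable radio parameters as calibration settings that are refined when a reliability indicator is low — can be sketched as follows. All names, values, and thresholds below are hypothetical and do not appear in the reference:

```python
# Hypothetical sketch of a feedback controller that adjusts reconfigurable
# radio parameters based on a reliability indicator, in the manner the
# examiner reads Chen et al. ¶¶ [0112], [0122]. Names/values are illustrative.

def adjust_parameters(params: dict, reliability: float) -> dict:
    """Treat the reconfigurable radio parameters as calibration settings
    and refine them whenever the reliability indicator falls below a bound."""
    updated = dict(params)
    if reliability < 0.8:
        # Illustrative adjustments: narrow the bandwidth and raise the
        # transmit gain in an attempt to improve measurement reliability.
        updated["bandwidth_hz"] *= 0.9
        updated["tx_gain_db"] += 1.0
    return updated

params = {"bandwidth_hz": 1.0e9, "tx_gain_db": 10.0}
params = adjust_parameters(params, reliability=0.6)
print(params)
```

Under this reading, each pass of the loop plays the role of a calibration step: the parameters converge toward settings that yield more reliable radar output.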
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
the term “system” in claim 14: no corresponding structure found in the specification.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 14 remains rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 14, claim limitation “system” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The disclosure is devoid of any structure that performs the function in the claim. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
For the purpose of prosecution, claim limitation “a system” has been interpreted as “a processor.”
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-3, 6-9, 14 and 15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim(s) does/do not fall within at least one of the four categories of patent eligible subject matter according to the subject matter eligibility flowchart analysis described below:
[media_image1.png, greyscale, 930 × 645: subject matter eligibility flowchart]
Regarding Step 1:
The instant application includes the following independent claims, directed to the categories of patent eligible subject matter articulated in parentheses:
(claim 1)… a method (i.e. process)
(claim 14)… a system (i.e. product)
(claim 15)… a non-transitory computer readable medium (i.e. product)
Regarding Step 2A, prong 1:
Claims 1, 14, and 15 recite the following elements which, under a broadest reasonable interpretation of the claimed invention, constitute either mathematical calculations or mental processes for the reasons articulated in parentheses:
(claim 1, lines 3-5 and 14-16)… creating a training data set that includes radar… the radar data representing a map of surroundings of the radar sensor or of the plurality of radar sensors… wherein the radar data include data based on measurements of the radar sensor or of the plurality of radar sensors or on simulations of radar measurements (the BRI of simulations of radar measurements is reasonably considered mathematical calculations that model the behavior of the radar)
Claim 14 (lines 3-5 and 14-16) and claim 15 (lines 4-6 and 15-17) similarly recite the above limitations of claim 1.
Regarding Step 2A, prong 2:
Claims 1, 14, and 15 do not integrate the claimed invention into a practical application. Claims 1, 14, and 15 recite the following elements beyond the judicial exception, but fail to impose a meaningful limit on the judicial exception for the reasons articulated in parentheses:
(claim 1, lines 3-5)… creating a training data set that includes radar data of a radar sensor or of a plurality of radar sensors, the radar data representing a map of surroundings of the radar sensor or of the plurality of radar sensors (applying to the technological environment of radars and machine learning, thus failing to impose a meaningful limit on the judicial exception)
Claims 14 and 15 fail to impose a meaningful limit on the judicial exception for the same reasons as claim 1 above.
(claim 15, lines 1-3)… a non-transitory computer-readable medium on which is stored a computer program including commands for training a radar-based object detection, the commands representing execution of the following steps… (amounts to merely using a computer / generic computer components as a tool to perform an abstract idea, thus failing to impose a meaningful limit on the judicial exception)
Regarding Step 2B:
Claims 1, 14, and 15 do not recite additional elements that, taken individually and in combination, result in the claims as a whole amounting to significantly more than the judicial exception, for the reasons given in parentheses:
(claim 1, lines 3-5)… creating a training data set that includes radar data of a radar sensor or of a plurality of radar sensors, the radar data representing a map of surroundings of the radar sensor or of the plurality of radar sensors (applying to the technological environment of radars and machine learning, thus failing to amount to significantly more than the judicial exception)
Claims 14 and 15 fail to amount to significantly more than the judicial exception for the same reasons as claim 1.
(claim 15, lines 1-3)… a non-transitory computer-readable medium on which is stored a computer program including commands for training a radar-based object detection, the commands representing execution of the following steps… (amounts to merely using a computer / generic computer components as a tool to perform an abstract idea, thus failing to amount to significantly more than the judicial exception)
Dependent claims 2-3 and 6-7 are also rejected under 35 U.S.C. 101 as merely further limiting the abstract idea by defining inputs of the simulation.
Dependent claims 8-9 are also rejected under 35 U.S.C. 101 as applying the abstract idea in the technical field of machine learning.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-3, 6-8, and 10-15 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Chen et al. (US 2022/0196798 A1, previously relied upon by the examiner).
Regarding claim 1 (Currently Amended), Chen et al. discloses:
A method for training a radar-based object detection (Chen et al. “The radar device may determine locations of objects (e.g., perform object detection) within an environment based on the received wireless signals.” - ¶ [0181]), comprising the following steps:
creating a training data set (Chen et al. data set generation 1302, Fig. 13; “The method 1300, at block 1302, may include generating a dataset. The radar processor 1104 may generate the dataset based on a scene, pre-defined parameters, channel modelling, ground truth parameters (e.g., ground truth target/object parameters), radar pipeline output, field test data, or some combination thereof.” - ¶ [0221]) that includes radar data (Chen et al. field test data 1310 or radar pipeline simulation 1312, Fig. 13) of a radar sensor or of a plurality of radar sensors (Chen et al. one or more transmit antennas 406, one or more receive antennas, Fig. 4), the radar data representing a map of surroundings (Chen et al. “The radar pipeline 1117 may generate a scene representative of the environment 1100" - ¶ [0194]) of the radar sensor or of the plurality of radar sensors; and
training the radar-based object detection based on the created training data set (Chen et al. “The training of the neural network (e.g., adapting the layers of the neural network) may use or may be based on any kind of training principle, such as backpropagation (e.g., using the backpropagation algorithm).” - ¶ [0078]; “The method 1300, at block 1318, may include performing loss backpropagation. In some aspects, the radar detector 1110 may backpropagate (e.g., use) the error value 1120 to adjust one or more of the weighted values of the machine learning algorithm. In some aspects, the radar detector 1110 may adjust the one or more weighted values based on the error value 1120 to reduce the error value 1120. In addition, the error value 1120 may be fed back to the radar detector 1110 to improve the object detection of the radar detector 1110. The method 1300 may be performed iteratively to continue to reduce the error value 1120.” - ¶ [0229]) for generating an output representation of the surroundings of the radar sensor or of the plurality of radar sensors, the output representation being configured as a point cloud of reflectance points of radar signals (Chen et al. “Range and Doppler processing creates a range-doppler map, and the AoA estimation creates a azimuth-elevation map for each range-doppler bin, thus resulting in a 4D voxel. 
A detector may then create a point cloud, which can then be an input for a perception pipeline.” - ¶ [0370]) or as a point cluster or as a plurality of point clusters of a radar road signature map display or as a reflectance grid, the reflectance grid is a grid representation of the surroundings of the radar sensor or of the plurality of radar sensors, and each grid cell of the reflectance grid being provided with a reflectance value of the radar signals, using which a backscatter characteristic of radar signals of a respective spatial area of the surroundings is described (Examiner notes that claim 1 has been interpreted such that the following limitations are considered alternatives: a point cloud, a point cluster, a plurality of point clusters, or a reflectance grid.), wherein the radar data include data based on measurements (Chen et al. field test data 1310, Fig. 13) of the radar sensor or of the plurality of radar sensors or on simulations of radar measurements (Chen et al. radar pipeline simulation 1312, Fig. 13), and wherein sensor calibrations (Chen et al. “According to an aspect of the disclosure, the system 301 may include a feedback controller 316, which may be configured to determine a plurality of reconfigurable radio parameters 317, for example, based on output 318 of the radar processor 309.
The reconfigurable radio parameters 317 may include a waveform, a modulation, a center frequency, a bandwidth, a polarization, a beamforming directivity, phase and/or amplitude values, e.g., control signals to the radar frontend, for example a radiofrequency lens, antennas, transmitters and receivers, and/or any other additional or alternative parameters.” - ¶ [0112]; “According to some aspects of the disclosure, the feedback controller 316 may be configured to determine the plurality of reconfigurable radio parameters 317, for example, based on a reliability indicator from radar processor 309.” - ¶ [0122]; “Further, the receiver may adjust the transmit waveform based on the error value to further reduce the determined error value.” - ¶ [0189]; where the reconfigurable radio parameters are considered to be calibration settings and the process of adjusting the reconfigurable radio parameters based on a reliability indicator and the error value of the machine learning algorithm is considered to be a calibration process) of the radar sensor or the plurality of radar sensors in the form of correlations between radar signals reflected at point targets situated in the surroundings and corresponding demodulated time signals (Chen et al. “The feedback controller 316 may be configured to adaptively determine the plurality of reconfigurable radio parameters in real time based on previous radar perception data corresponding to previously processed digital radar samples.” - ¶ [0128]; “The RF components 1208 (e.g., receive antennas and receive front end components) may receive the receive wireless signals 1218. The radar processor 1204 may generate a first dataset 1206 representative of the environment 1200 (e.g., a scene representative of the environment 1200). The scene may include an object representative of the object 1212 in the environment 1200. 
In some aspects, the first dataset 1206 may include IQ samples from the RF components 1208.” - ¶ [0206]; Examiner notes that “demodulated time signal” has been interpreted as “demodulated signals in the time domain.” The received signals are demodulated into IQ samples comprising time-series data for the in-phase and quadrature components, and the IQ samples are considered to be demodulated signals in the time domain.) of the radar sensor or the plurality of radar sensors are taken into account in the simulations (Chen et al. “In these and other aspects, the radar processor 1104 may determine the pre-defined parameters to include settings of the RF components 1108.” - ¶ [0222]; where the settings of the RF components (the reconfigurable radio parameters) are considered to be calibration settings of the radar device that are taken into account in the radar pipeline simulation as input to the scene generation 1306, which is input to the channel model and target scattering 1308 which is input into the radar pipeline simulation 1312, Fig. 13).
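For illustration only, the examiner's interpretation of a “demodulated time signal” as a demodulated signal in the time domain can be sketched as follows: a single point target at a hypothetical beat frequency produces complex IQ samples indexed by fast time, whose in-phase and quadrature components are each time series. All parameters below are illustrative and not drawn from the reference:

```python
import numpy as np

# Hypothetical sketch of "demodulated time signals": an FMCW receive signal
# mixed with the transmit chirp and low-pass filtered yields complex IQ
# samples indexed by time. All parameters are illustrative.
fs = 1.0e6                                  # sample rate (Hz), hypothetical
t = np.arange(256) / fs                     # fast-time axis (seconds)
beat_freq = 50e3                            # beat frequency of a point target
iq = np.exp(2j * np.pi * beat_freq * t)     # demodulated time-domain IQ signal

# The in-phase and quadrature components are each time series.
i_component, q_component = iq.real, iq.imag
print(i_component.shape, q_component.shape)
```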
Regarding claim 2 (Previously Presented), Chen et al. discloses:
The method as recited in claim 1, wherein the radar data are raw data (Chen et al. “The digital (radar) reception data (including the digital (radar) reception data values) as output by the ADC 308 (or more than one ADC, i.e. in case of IQ output) is also referred to as radar measurement data or raw (radar) reception data (including raw (radar) reception data values also referred to as radar reception samples).” - ¶ [0106]) of FMCW (Frequency Modulated Continuous Wave) radar sensors (Chen et al. radar frontend 401 of FMCW radar device 400, Fig. 4; ¶ [0129]) and are demodulated time signals (Chen et al. “In some aspects, the first dataset 1206 may include IQ samples from the RF components 1208.” - ¶ [0206]; where the received signals are demodulated into IQ samples comprising time-series data for the in-phase and quadrature components and are therefore considered demodulated time signals; “In these and other aspects, the receiver 1222 may convert the transmit waveform into a vector of samples in either a frequency domain or a time domain.” - ¶ [0213]; Examiner notes that “demodulated time signal” has been interpreted as “demodulated signals in the time domain.”).
Regarding claim 3 (Original), Chen et al. discloses:
The method as recited in claim 2, wherein the radar data are based on an execution of a two-dimensional fast Fourier transform (Chen et al. “For this, the radar processor 402 performs two FFTs (Fast Fourier Transforms) to extract range information (by a first FFT, also denoted as range FFT) as well as radial velocity information (by a second FFT, also denoted as Doppler FFT) from the digital reception data values.” - ¶ [0136]; The fast Fourier transform of Chen et al. is considered to be two-dimensional because it is performed twice.) on the raw data and are frequency signals (Chen et al. “In these and other aspects, the receiver 1222 may convert the transmit waveform into a vector of samples in either a frequency domain or a time domain.” - ¶ [0213]; Examiner notes that “frequency signals” has been interpreted as “signals in the frequency domain.” Fourier transforms are known to convert signals from the time-domain to the frequency-domain.).
Regarding claim 6 (Previously Presented), Chen et al. discloses:
The method as recited in claim 1, wherein interference disruptions of various radar signals are taken into account in the simulations (Chen et al. "In other aspects, the radar processor 1104 may determine the pre-defined parameters to include at least one of noise source parameters, outlier parameters, interference parameters, multipath parameters, and object distribution parameters." - ¶ [0222]; where the pre-defined parameters are taken into account in the radar pipeline simulation as input to the scene generation 1306, which is input to the channel model and target scattering 1308 which is input into the radar pipeline simulation 1312, Fig. 13).
Regarding claim 7 (Previously Presented), Chen et al. discloses:
The method as recited in claim 1, wherein the training data set further includes pieces of calibration information relating to a sensor calibration of the radar sensor or the plurality of radar sensors (Chen et al. “In these and other aspects, the radar processor 1104 may determine the pre-defined parameters to include settings of the RF components 1108.” - ¶ [0222]; The settings of the RF components include the reconfigurable radio parameters which are considered to be the calibration settings.), and the pieces of calibration information are utilized as input data of the radar-based object detection (Chen et al. where the settings of the RF components are taken into account in the radar pipeline simulation as input to the scene generation 1306, which is input to the channel model and target scattering 1308 which is input into the radar pipeline simulation 1312, Fig. 13).
Regarding claim 8 (Original), Chen et al. discloses:
The method as recited in claim 1, wherein the radar-based object detection is a neural network (Chen et al. “The method 1300, at block 1314, may include performing object detection using a neural network (e.g., NN) …” - ¶ [0227]).
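The loss-backpropagation training described in ¶¶ [0078] and [0229] — backpropagating an error value to adjust weighted values so the error is iteratively reduced — can be illustrated, purely schematically, with a minimal gradient-descent loop. The “network” here is a single linear layer and all data are synthetic; this is not the reference's code:

```python
import numpy as np

# Schematic illustration of adjusting weighted values from an error value so
# that the error is iteratively reduced (cf. Chen et al. ¶ [0229]). One linear
# layer stands in for the neural network; all data are synthetic.
rng = np.random.default_rng(1)
x = rng.standard_normal((32, 4))            # hypothetical radar features
true_w = np.array([1.0, -2.0, 0.5, 3.0])    # hypothetical ground truth weights
y = x @ true_w                              # ground truth targets
w = np.zeros(4)                             # weighted values to be adjusted

for _ in range(500):
    error = x @ w - y                       # error value of the predictions
    grad = x.T @ error / len(y)             # backpropagated gradient (MSE loss)
    w -= 0.1 * grad                         # adjust weights to reduce the error

mse = float(np.mean((x @ w - y) ** 2))      # error shrinks over the iterations
print(mse)
```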
Regarding claim 10 (Currently Amended), Chen et al. discloses:
A method for radar-based surroundings detection, comprising the following steps:
receiving first radar data (Chen et al. field test data 1310, Fig. 13) of a radar sensor or of a plurality of radar sensors (Chen et al. one or more transmit antennas 406, one or more receive antennas, Fig. 4), the first radar data mapping surroundings of the radar sensor or of the plurality of radar sensors (Chen et al. “The radar pipeline 1117 may generate a scene representative of the environment 1100" - ¶ [0194]);
carrying out an object detection (Chen et al. “The radar device may determine locations of objects (e.g., perform object detection) within an environment based on the received wireless signals.” - ¶ [0181]) on the received first radar data, the object detection being trained by:
creating a training data set (Chen et al. data set generation 1302, Fig. 13; “The method 1300, at block 1302, may include generating a dataset. The radar processor 1104 may generate the dataset based on a scene, pre-defined parameters, channel modelling, ground truth parameters (e.g., ground truth target/object parameters), radar pipeline output, field test data, or some combination thereof.” - ¶ [0221]) that includes the first radar data of the radar sensor or of the plurality of radar sensors, the first radar data representing a map of surroundings (Chen et al. “The radar pipeline 1117 may generate a scene representative of the environment 1100" - ¶ [0194]) of the radar sensor or of the plurality of radar sensors, and
training the object detection based on the created training data set (Chen et al. “The training of the neural network (e.g., adapting the layers of the neural network) may use or may be based on any kind of training principle, such as backpropagation (e.g., using the backpropagation algorithm).” - ¶ [0078]; “The method 1300, at block 1318, may include performing loss backpropagation. In some aspects, the radar detector 1110 may backpropagate (e.g., use) the error value 1120 to adjust one or more of the weighted values of the machine learning algorithm. In some aspects, the radar detector 1110 may adjust the one or more weighted values based on the error value 1120 to reduce the error value 1120. In addition, the error value 1120 may be fed back to the radar detector 1110 to improve the object detection of the radar detector 1110. The method 1300 may be performed iteratively to continue to reduce the error value 1120.” - ¶ [0229]) for generating an output representation of the surroundings of the radar sensor or of the plurality of radar sensors, the output representation being configured as a point cloud of reflectance points of radar signals (Chen et al. “Range and Doppler processing creates a range-doppler map, and the AoA estimation creates a azimuth-elevation map for each range-doppler bin, thus resulting in a 4D voxel. 
A detector may then create a point cloud, which can then be an input for a perception pipeline.” - ¶ [0370]) or as a point cluster or as a plurality of point clusters of a radar road signature map display or as a reflectance grid, the reflectance grid is a grid representation of the surroundings of the radar sensor or of the plurality of radar sensors, and each grid cell of the reflectance grid being provided with a reflectance value of the radar signals, using which a backscatter characteristic of radar signals of a respective spatial area of the surroundings is described (Examiner notes that claim 10 has been interpreted such that the following limitations are considered alternatives: a point cloud, a point cluster, a plurality of point clusters, or a reflectance grid.); and
outputting the output representation of the surroundings of the radar sensors by the object detection, the output representation being configured as the point cloud of reflectance points of radar signals or as the point cluster or as the plurality of point clusters of the radar road signature map display or as the reflectance grid, wherein the first radar data include data based on measurements (Chen et al. field test data 1310, Fig. 13) of the radar sensor or of the plurality of radar sensors or on simulations of radar measurements (Chen et al. radar pipeline simulation 1312, Fig. 13), and wherein sensor calibrations (Chen et al. “According to an aspect of the disclosure, the system 301 may include a feedback controller 316, which may be configured to determine a plurality of reconfigurable radio parameters 317, for example, based on output 318 of the radar processor 309. The reconfigurable radio parameters 317 may include a waveform, a modulation, a center frequency, a bandwidth, a polarization, a beamforming directivity, phase and/or amplitude values, e.g., control signals to the radar frontend, for example a radiofrequency lens, antennas, transmitters and receivers, and/or any other additional or alternative parameters.” - ¶ [0112]; “According to some aspects of the disclosure, the feedback controller 316 may be configured to determine the plurality of reconfigurable radio parameters 317, for example, based on a reliability indicator from radar processor 309.” - ¶ [0122]; “Further, the receiver may adjust the transmit waveform based on the error value to further reduce the determined error value.” - ¶ [0189]; where the reconfigurable radio parameters are considered to be calibration settings and the process of adjusting the reconfigurable radio parameters based on a reliability indicator and the error value of the machine learning algorithm is considered to be a calibration process) of the radar sensor or the plurality of radar sensors in the form of 
correlations between radar signals reflected at point targets situated in the surroundings and corresponding demodulated time signals (Chen et al. “The feedback controller 316 may be configured to adaptively determine the plurality of reconfigurable radio parameters in real time based on previous radar perception data corresponding to previously processed digital radar samples.” - ¶ [0128]; “The RF components 1208 (e.g., receive antennas and receive front end components) may receive the receive wireless signals 1218. The radar processor 1204 may generate a first dataset 1206 representative of the environment 1200 (e.g., a scene representative of the environment 1200). The scene may include an object representative of the object 1212 in the environment 1200. In some aspects, the first dataset 1206 may include IQ samples from the RF components 1208.” - ¶ [0206]; Examiner notes that “demodulated time signal” has been interpreted as “demodulated signals in the time domain.” The received signals are demodulated into IQ samples comprising time-series data for the in-phase and quadrature components, and the IQ samples are considered to be demodulated signals in the time domain.) of the radar sensor or the plurality of radar sensors are taken into account in the simulations (Chen et al. “In these and other aspects, the radar processor 1104 may determine the pre-defined parameters to include settings of the RF components 1108.” - ¶ [0222]; where the settings of the RF components (the reconfigurable radio parameters) are taken into account in the radar pipeline simulation as input to the scene generation 1306, which is input to the channel model and target scattering 1308 which is input into the radar pipeline simulation 1312, Fig. 13).
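As a purely illustrative sketch of the technical relationship underlying this mapping — the correspondence between a point target in the surroundings and a demodulated time signal — the following uses hypothetical FMCW parameters that are not drawn from Chen et al. or the claims:

```python
import numpy as np

# Hypothetical FMCW parameters (illustrative only; not from Chen et al. or the claims).
c = 3e8            # speed of light, m/s
f0 = 77e9          # chirp start frequency, Hz
B = 300e6          # chirp bandwidth, Hz
T = 40e-6          # chirp duration, s
fs = 10e6          # ADC sample rate, Hz
slope = B / T      # chirp slope, Hz/s

def point_target_beat_signal(target_range_m: float) -> np.ndarray:
    """Complex (IQ) demodulated time signal for one ideal point target.

    After mixing the received chirp with the transmitted chirp (dechirping),
    a point target at range R produces a beat tone at f_b = slope * 2R / c,
    so the demodulated time-domain IQ samples encode the target's range.
    """
    n = int(T * fs)
    t = np.arange(n) / fs
    tau = 2.0 * target_range_m / c               # round-trip delay
    f_beat = slope * tau                         # beat frequency
    phase = 2 * np.pi * (f0 * tau + f_beat * t)  # constant + linear phase
    return np.exp(1j * phase)                    # IQ samples in the time domain

# Recovering the beat frequency from the IQ time signal maps back to range.
sig = point_target_beat_signal(30.0)
spectrum = np.abs(np.fft.fft(sig))
f_hat = np.argmax(spectrum[: len(sig) // 2]) * fs / len(sig)
range_hat = f_hat * c / (2 * slope)   # recovers approximately 30 m
```

The sketch omits effects a full radar pipeline simulation would model (noise, antenna patterns, RF front-end settings), which is where the calibration-dependent behavior discussed above would enter.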
Regarding claim 11 (Original), Chen et al. discloses:
The method as recited in claim 10, wherein the radar data are radar data of radar sensors of a vehicle (Chen et al. “vehicle 100 includes a radar device (or radar system) 101” - ¶ [0082], Fig. 1), and surroundings of the vehicle being mapped by the radar data.
Regarding claim 12 (Currently Amended), Chen et al. discloses:
The method as recited in claim 10, wherein the radar data are raw data (Chen et al. “The digital (radar) reception data (including the digital (radar) reception data values) as output by the ADC 308 (or more than one ADC, i.e. in case of IQ output) is also referred to as radar measurement data or raw (radar) reception data (including raw (radar) reception data values also referred to as radar reception samples).” - ¶ [0106]) of an FMCW (Frequency Modulated Continuous Wave) radar sensor (Chen et al. radar frontend 401 of FMCW radar device 400, Fig. 4; ¶ [0129]) and are demodulated time signals (Chen et al. “In some aspects, the first dataset 1206 may include IQ samples from the RF components 1208.” - ¶ [0206]; where the received signals are demodulated into IQ samples comprising time-series data for the in-phase and quadrature components and are therefore considered demodulated time signals; “In these and other aspects, the receiver 1222 may convert the transmit waveform into a vector of samples in either a frequency domain or a time domain.” - ¶ [0213]; Examiner notes that “demodulated time signal” has been interpreted as “demodulated signals in the time domain.”).
Regarding claim 13 (Original), Chen et al. discloses:
The method as recited in claim 12, wherein the radar data are based on an execution of a two-dimensional fast Fourier transform (Chen et al. “For this, the radar processor 402 performs two FFTs (Fast Fourier Transforms) to extract range information (by a first FFT, also denoted as range FFT) as well as radial velocity information (by a second FFT, also denoted as Doppler FFT) from the digital reception data values.” - ¶ [0136]; The Fourier transform processing of Chen et al. is considered to be two-dimensional because an FFT is performed across each of two dimensions of the reception data: a first FFT across fast time to extract range and a second FFT across slow time to extract radial velocity.) on the raw data and are frequency signals (Chen et al. “In these and other aspects, the receiver 1222 may convert the transmit waveform into a vector of samples in either a frequency domain or a time domain.” - ¶ [0213]; Examiner notes that “frequency signals” has been interpreted as “signals in the frequency domain.” Fourier transforms are known to convert signals from the time-domain to the frequency-domain.).
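As context for the two-FFT processing quoted from Chen's ¶ [0136], the following is a minimal sketch with hypothetical frame dimensions (not taken from Chen et al.) showing the range FFT followed by the Doppler FFT:

```python
import numpy as np

# Hypothetical frame dimensions (illustrative only; not taken from Chen et al.).
n_chirps, n_samples = 64, 256   # slow time (chirps) x fast time (samples per chirp)

# Synthetic raw IQ reception data containing one point target:
# a beat tone in fast time (range bin 40) with a Doppler phase
# progression across chirps (Doppler bin 10).
n = np.arange(n_samples)
m = np.arange(n_chirps)[:, None]
raw = np.exp(2j * np.pi * 40 * n / n_samples) * np.exp(2j * np.pi * 10 * m / n_chirps)

# First FFT ("range FFT") across fast time extracts range information.
range_fft = np.fft.fft(raw, axis=1)
# Second FFT ("Doppler FFT") across slow time extracts radial velocity,
# yielding the range-Doppler map -- one FFT per dimension of the data.
range_doppler = np.fft.fft(range_fft, axis=0)

peak = np.unravel_index(np.abs(range_doppler).argmax(), range_doppler.shape)
# peak == (10, 40): Doppler bin 10, range bin 40
```

The resulting range-Doppler map is the frequency-domain representation referenced in the claim 13 mapping above.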
Regarding claim 14 (Currently Amended), Chen et al. discloses:
A system (Chen et al. radar processor 1104, Fig. 11) configured to train a radar-based object detection (Chen et al. “The radar device may determine locations of objects (e.g., perform object detection) within an environment based on the received wireless signals.” - ¶ [0181]), the system configured to:
create a training data set (Chen et al. data set generation 1302, Fig. 13; “The method 1300, at block 1302, may include generating a dataset. The radar processor 1104 may generate the dataset based on a scene, pre-defined parameters, channel modelling, ground truth parameters (e.g., ground truth target/object parameters), radar pipeline output, field test data, or some combination thereof.” - ¶ [0221]) that includes radar data (Chen et al. field test data 1310 or radar pipeline simulation 1312, Fig. 13) of a radar sensor or of a plurality of radar sensors (Chen et al. one or more transmit antennas 406, one or more receive antennas, Fig. 4), the radar data representing a map of surroundings (Chen et al. “The radar pipeline 1117 may generate a scene representative of the environment 1100" - ¶ [0194]) of the radar sensor or of the plurality of radar sensors;
train the radar-based object detection based on the created training data set (Chen et al. “The training of the neural network (e.g., adapting the layers of the neural network) may use or may be based on any kind of training principle, such as backpropagation (e.g., using the backpropagation algorithm).” - ¶ [0078]; “The method 1300, at block 1318, may include performing loss backpropagation. In some aspects, the radar detector 1110 may backpropagate (e.g., use) the error value 1120 to adjust one or more of the weighted values of the machine learning algorithm. In some aspects, the radar detector 1110 may adjust the one or more weighted values based on the error value 1120 to reduce the error value 1120. In addition, the error value 1120 may be fed back to the radar detector 1110 to improve the object detection of the radar detector 1110. The method 1300 may be performed iteratively to continue to reduce the error value 1120.” - ¶ [0229]; where loss backpropagation is considered to be a training method) for generating an output representation of the surroundings of the radar sensor or of the plurality of radar sensors, the output representation being configured as a point cloud of reflectance points of radar signals (Chen et al. “Range and Doppler processing creates a range-doppler map, and the AoA estimation creates a azimuth-elevation map for each range-doppler bin, thus resulting in a 4D voxel. 
A detector may then create a point cloud, which can then be an input for a perception pipeline.” - ¶ [0370]) or as a point cluster or as a plurality of point clusters of a radar road signature map display or as a reflectance grid, the reflectance grid is a grid representation of the surroundings of the radar sensor or of the plurality of radar sensors, and each grid cell of the reflectance grid being provided with a reflectance value of the radar signals, using which a backscatter characteristic of radar signals of a respective spatial area of the surroundings is described (Examiner notes that claim 14 has been interpreted such that the following limitations are considered alternatives: a point cloud, a point cluster, a plurality of point clusters, or a reflectance grid.), wherein the radar data include data based on measurements (Chen et al. field test data 1310, Fig. 13) of the radar sensor or of the plurality of radar sensors or on simulations of radar measurements (Chen et al. radar pipeline simulation 1312, Fig. 13), and wherein sensor calibrations (Chen et al. “According to an aspect of the disclosure, the system 301 may include a feedback controller 316, which may be configured to determine a plurality of reconfigurable radio parameters 317, for example, based on output 318 of the radar processor 309. 
The reconfigurable radio parameters 317 may include a waveform, a modulation, a center frequency, a bandwidth, a polarization, a beamforming directivity, phase and/or amplitude values, e.g., control signals to the radar frontend, for example a radiofrequency lens, antennas, transmitters and receivers, and/or any other additional or alternative parameters.” - ¶ [0112]; “According to some aspects of the disclosure, the feedback controller 316 may be configured to determine the plurality of reconfigurable radio parameters 317, for example, based on a reliability indicator from radar processor 309.” - ¶ [0122]; “Further, the receiver may adjust the transmit waveform based on the error value to further reduce the determined error value.” - ¶ [0189]; where the reconfigurable radio parameters are considered to be calibration settings and the process of adjusting the reconfigurable radio parameters based on a reliability indicator and the error value of the machine learning algorithm is considered to be a calibration process) of the radar sensor or the plurality of radar sensors in the form of correlations between radar signals reflected at point targets situated in the surroundings and corresponding demodulated time signals (Chen et al. “The feedback controller 316 may be configured to adaptively determine the plurality of reconfigurable radio parameters in real time based on previous radar perception data corresponding to previously processed digital radar samples.” - ¶ [0128]; “The RF components 1208 (e.g., receive antennas and receive front end components) may receive the receive wireless signals 1218. The radar processor 1204 may generate a first dataset 1206 representative of the environment 1200 (e.g., a scene representative of the environment 1200). The scene may include an object representative of the object 1212 in the environment 1200. 
In some aspects, the first dataset 1206 may include IQ samples from the RF components 1208.” - ¶ [0206]; Examiner notes that “demodulated time signal” has been interpreted as “demodulated signals in the time domain.” The received signals are demodulated into IQ samples comprising time-series data for the in-phase and quadrature components, and the IQ samples are considered to be demodulated signals in the time domain.) of the radar sensor or the plurality of radar sensors are taken into account in the simulations (Chen et al. “In these and other aspects, the radar processor 1104 may determine the pre-defined parameters to include settings of the RF components 1108.” - ¶ [0222]; where the settings of the RF components (the reconfigurable radio parameters) are taken into account in the radar pipeline simulation as input to the scene generation 1306, which is input to the channel model and target scattering 1308 which is input into the radar pipeline simulation 1312, Fig. 13).
Regarding claim 15 (Currently Amended), Chen et al. discloses:
A non-transitory computer readable medium (Chen et al. ““memory” is understood as a computer-readable medium (e.g., a non-transitory computer-readable medium)” - ¶ [0065]) on which is stored a computer program (Chen et al. “The controller may be configured, for example, by program code (e.g., software) to control the operation of a system” - ¶ [0095]) including commands for training a radar-based object detection (Chen et al. “The radar device may determine locations of objects (e.g., perform object detection) within an environment based on the received wireless signals.” - ¶ [0181]), the commands representing execution of the following steps:
creating a training data set (Chen et al. data set generation 1302, Fig. 13; “The method 1300, at block 1302, may include generating a dataset. The radar processor 1104 may generate the dataset based on a scene, pre-defined parameters, channel modelling, ground truth parameters (e.g., ground truth target/object parameters), radar pipeline output, field test data, or some combination thereof.” - ¶ [0221]) that includes radar data (Chen et al. field test data 1310 or radar pipeline simulation 1312, Fig. 13) of a radar sensor or of a plurality of radar sensors (Chen et al. one or more transmit antennas 406, one or more receive antennas, Fig. 4), the radar data representing a map of surroundings (Chen et al. “The radar pipeline 1117 may generate a scene representative of the environment 1100" - ¶ [0194]) of the radar sensor or of the plurality of radar sensors; and
training a radar-based object detection based on the created training data set (Chen et al. “The training of the neural network (e.g., adapting the layers of the neural network) may use or may be based on any kind of training principle, such as backpropagation (e.g., using the backpropagation algorithm).” - ¶ [0078]; “The method 1300, at block 1318, may include performing loss backpropagation. In some aspects, the radar detector 1110 may backpropagate (e.g., use) the error value 1120 to adjust one or more of the weighted values of the machine learning algorithm. In some aspects, the radar detector 1110 may adjust the one or more weighted values based on the error value 1120 to reduce the error value 1120. In addition, the error value 1120 may be fed back to the radar detector 1110 to improve the object detection of the radar detector 1110. The method 1300 may be performed iteratively to continue to reduce the error value 1120.” - ¶ [0229]) for generating an output representation of the surroundings of the radar sensor or of the plurality of radar sensors, the output representation being configured as a point cloud of reflectance points of radar signals (Chen et al. “Range and Doppler processing creates a range-doppler map, and the AoA estimation creates a azimuth-elevation map for each range-doppler bin, thus resulting in a 4D voxel. 
A detector may then create a point cloud, which can then be an input for a perception pipeline.” - ¶ [0370]) or as a point cluster or as a plurality of point clusters of a radar road signature map display or as a reflectance grid, the reflectance grid is a grid representation of the surroundings of the radar sensor or of the plurality of radar sensors, and each grid cell of the reflectance grid being provided with a reflectance value of the radar signals, using which a backscatter characteristic of radar signals of a respective spatial area of the surroundings is described (Examiner notes that claim 15 has been interpreted such that the following limitations are considered alternatives: a point cloud, a point cluster, a plurality of point clusters, or a reflectance grid.), wherein the radar data include data based on measurements (Chen et al. field test data 1310, Fig. 13) of the radar sensor or of the plurality of radar sensors or on simulations of radar measurements (Chen et al. radar pipeline simulation 1312, Fig. 13), and wherein sensor calibrations (Chen et al. “According to an aspect of the disclosure, the system 301 may include a feedback controller 316, which may be configured to determine a plurality of reconfigurable radio parameters 317, for example, based on output 318 of the radar processor 309. 
The reconfigurable radio parameters 317 may include a waveform, a modulation, a center frequency, a bandwidth, a polarization, a beamforming directivity, phase and/or amplitude values, e.g., control signals to the radar frontend, for example a radiofrequency lens, antennas, transmitters and receivers, and/or any other additional or alternative parameters.” - ¶ [0112]; “According to some aspects of the disclosure, the feedback controller 316 may be configured to determine the plurality of reconfigurable radio parameters 317, for example, based on a reliability indicator from radar processor 309.” - ¶ [0122]; “Further, the receiver may adjust the transmit waveform based on the error value to further reduce the determined error value.” - ¶ [0189]; where the reconfigurable radio parameters are considered to be calibration settings and the process of adjusting the reconfigurable radio parameters based on a reliability indicator and the error value of the machine learning algorithm is considered to be a calibration process) of the radar sensor or the plurality of radar sensors in the form of correlations (Chen et al. “In some aspects, the radar detector 1110 may determine the error value 1120 such that the error value 1120 indicates an accuracy of the determination of the object parameters of the objects. The radar detector 1110 may calculate the KL cost function according to Equation 1, the object parameters, and the ground truth object parameters to determine the error value 1120.” - ¶ [0228]) between radar signals reflected at point targets situated in the surroundings and corresponding demodulated time signals (Chen et al. 
“The feedback controller 316 may be configured to adaptively determine the plurality of reconfigurable radio parameters in real time based on previous radar perception data corresponding to previously processed digital radar samples.” - ¶ [0128]; “The RF components 1208 (e.g., receive antennas and receive front end components) may receive the receive wireless signals 1218. The radar processor 1204 may generate a first dataset 1206 representative of the environment 1200 (e.g., a scene representative of the environment 1200). The scene may include an object representative of the object 1212 in the environment 1200. In some aspects, the first dataset 1206 may include IQ samples from the RF components 1208.” - ¶ [0206]; Examiner notes that “demodulated time signal” has been interpreted as “demodulated signals in the time domain.” The received signals are demodulated into IQ samples comprising time-series data for the in-phase and quadrature components, and the IQ samples are considered to be demodulated signals in the time domain.) of the radar sensor or the plurality of radar sensors are taken into account in the simulations (Chen et al. “In these and other aspects, the radar processor 1104 may determine the pre-defined parameters to include settings of the RF components 1108.” - ¶ [0222]; where the settings of the RF components (the reconfigurable radio parameters) are taken into account in the radar pipeline simulation as input to the scene generation 1306, which is input to the channel model and target scattering 1308 which is input into the radar pipeline simulation 1312, Fig. 13).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 9 remains rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 2022/0196798 A1, previously relied upon by the examiner) in view of Milz et al. (DE 102019111608 A1, cited by applicant in IDS filed January 27, 2023 and previously relied upon by the examiner).
Regarding claim 9 (Previously Presented), Chen et al. discloses:
[Note: what is not explicitly disclosed by Chen et al. has been struck-through]
The method as recited in claim 8, wherein the neural network is a recurrent network structure (Chen et al. “The neural network may be any kind of neural network, such as a convolutional neural network, an autoencoder network, a variational autoencoder network, a sparse autoencoder network, a recurrent neural network, a deconvolutional network, a generative adversarial network, a forward thinking neural network, a sum-product neural network, and the like.” - ¶ [0078]).
Milz et al. discloses:
wherein the neural network is trained to filter out influences of objects dynamically moved relative to the radar sensor or to the plurality of radar sensors (Milz et al. “A reduced point cloud is generated by removing the at least one dynamic object from the point cloud by the electronic computing device depending on the information obtained when determining the dynamic object by means of the first neural network and/or the second neural network.” – ¶ [0005]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the features as disclosed by Milz et al. into the invention of Chen et al. to yield the invention of claim 9 above. Both Chen et al. and Milz et al. are considered analogous art to the claimed invention because both disclose neural networks for radar-based object detection in autonomous vehicles. Chen et al. discloses using a recurrent neural network (Chen et al. ¶ [0095], [0332]). However, Chen et al. fails to explicitly disclose that the recurrent neural network is trained to filter out objects dynamically moving relative to the radar sensor. This feature is disclosed by Milz et al., where a neural network is used to identify and remove dynamic objects from the point cloud (Milz et al. ¶ [0005]). The combination of Chen et al. and Milz et al. would have been obvious, with a reasonable expectation of success, to improve the environmental map and the positioning accuracy of the vehicle and to increase the efficiency of the data processing (Milz et al. ¶ [0006], [0009]).
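To illustrate the effect of the dynamic-object removal taught by Milz et al., the following sketch uses a simple radial-velocity threshold as a rule-based stand-in; Milz et al. instead use trained neural networks to identify the dynamic objects, and all point values below are fabricated for illustration:

```python
import numpy as np

# Fabricated example points (x, y, z, ego-motion-compensated radial velocity in m/s).
point_cloud = np.array([
    [12.0,  0.5, 0.0,  0.02],   # static  (e.g., guardrail)
    [30.0, -1.2, 0.0,  0.00],   # static  (e.g., sign post)
    [25.0,  3.0, 0.0, -8.50],   # dynamic (e.g., oncoming vehicle)
    [18.0,  1.1, 0.0,  4.20],   # dynamic (e.g., crossing vehicle)
])

def remove_dynamic_points(points: np.ndarray, v_thresh: float = 0.5) -> np.ndarray:
    """Return the reduced point cloud with dynamic points removed.

    Rule-based stand-in for illustration: keep only points whose
    compensated radial velocity is near zero. A trained network, as in
    Milz et al., would classify points rather than threshold one feature.
    """
    return points[np.abs(points[:, 3]) < v_thresh]

reduced = remove_dynamic_points(point_cloud)
# The two dynamic points are removed; the two static points remain.
```

The reduced point cloud corresponds to the "reduced point cloud" of Milz ¶ [0005], from which the static environmental map is built.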
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NAOMI M WOLFORD whose telephone number is (571)272-3929. The examiner can normally be reached Monday - Friday, 8:30 am - 4:30 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Resha Desai can be reached at (571)270-7792. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
NAOMI M. WOLFORD
Examiner
Art Unit 3648
/N.M.W./Examiner, Art Unit 3648
4 MAR 2026
/RESHA DESAI/Supervisory Patent Examiner, Art Unit 3648