Prosecution Insights
Last updated: April 19, 2026
Application No. 18/448,334

RADAR-BASED ENVIRONMENTAL DETECTION SYSTEM FOR MOTOR VEHICLES

Status: Final Rejection (§103)
Filed: Aug 11, 2023
Examiner: DOZE, PETER DAVON
Art Unit: 3648
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Robert Bosch GmbH
OA Round: 2 (Final)

Predictions:
- Grant probability: 82% (Favorable)
- Expected OA rounds: 3-4
- Estimated time to grant: 2y 11m
- Grant probability with interview: 91%

Examiner Intelligence

- Career allowance rate: 82% (18 granted / 22 resolved), above average, +29.8% vs TC avg
- Interview lift: +8.9% (moderate) among resolved cases with an interview
- Typical timeline: 2y 11m average prosecution; 33 applications currently pending
- Career history: 55 total applications across all art units

Statute-Specific Performance

§101:  6.4% (-33.6% vs TC avg)
§103: 59.3% (+19.3% vs TC avg)
§102: 22.6% (-17.4% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)

Tech Center average estimate shown for comparison • Based on career data from 22 resolved cases
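The headline figures above are simple ratios. As an illustrative sketch only (all inputs are figures from this dashboard; pairing the 82% base grant probability with the 91% with-interview figure to derive the lift is an assumption):

```python
# Illustrative arithmetic only; all inputs are figures from the dashboard above.
granted, resolved = 18, 22
career_allow_rate = granted / resolved        # ~0.818, shown as the 82% card

# "+29.8% vs TC avg" implies a Tech Center average around 52%.
tc_avg_allow_rate = career_allow_rate - 0.298

# Assumed pairing: 82% base grant probability vs. 91% with an interview.
grant_prob = 0.82
grant_prob_with_interview = 0.91
interview_lift = grant_prob_with_interview - grant_prob   # ~ +0.089, the "+8.9%" card
```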

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendment filed 10/15/2025 has been entered. Claims 1-15 are pending.

Response to Arguments

Applicant's arguments, see 'Rejections of claims 1-2 and 4 Under 35 U.S.C. 102(a)(1)', filed 10/15/2025, with respect to the rejection(s) of claim(s) 1, 2, and 4 under 35 U.S.C. 102(a)(1) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Blaes (US 12221115 B1).

Applicant's arguments, see 'Rejection of Claims 3 and 5 under 35 U.S.C. 103', filed 10/15/2025, with respect to the rejection(s) of claim(s) 3 and 5 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection for claim 1 is made in view of Blaes (US 12221115 B1).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 4, 6, 7, 9, 11, 12, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Major (2019), Vehicle Detection With Automotive Radar Using Deep Learning on Range-Azimuth-Doppler Tensors [cited from attached pdf], in view of Blaes (US 12221115 B1).

Regarding claim 1, Major discloses a radar-based environmental detection system for motor vehicles (Introduction Paragraph 2 line 2, "A typical automotive radar"), comprising: at least one radar sensor configured to provide location data regarding objects in an environment of the motor vehicle (Introduction Paragraph 3 lines 1-7, "The radar data is a 3D tensor, with the first two dimensions making up range-azimuth (polar) space, and the third Doppler dimension which contains velocity information.
This tensor is typically processed using a Constant False-Alarm Rate (CFAR) algorithm to get a sparse 2D point-cloud which separates the targets of interest from the surrounding clutter”); and a neural network configured to convert the location data into an environmental model which represents spatio-temporal object data of the objects (Introduction Paragraph 1 lines 3- Paragraph 2 line 2, “A variety of sensors such as LiDAR, short-range radars, long-range radars…have been used for perception. The most prevalent sensor to provide detail-rich 3D information in automotive environments is the LiDAR. Radar presents a low-cost alternative to LiDAR as a range sensor”; Section 4.1 Paragraph 1 line 1-Paragraph 3 line 1, "As described in Section 2, the radar tensor is three dimensional: it has two spatial dimensions, range and azimuth, accompanied by a third, Doppler dimension, which represents the velocity of objects relative to the radar, up to a certain aliasing velocity. We propose two solutions to process the full 3D tensor. The first approach is to remove the Doppler dimension by summing the signal power over that dimension. The input of the model is a range-azimuth tensor, hence we call this solution the Range-Azimuth (RA) model. 
The second approach is to also provide range-Doppler and azimuth-Doppler tensors as input" where the neural network is learning from environmental data in relation to the radar, which is tantamount to an environmental model), wherein the neural network is conditioned to give priority to outputting environmental models in which at least one predetermined physical relationship between the location data and the spatio-temporal object data is satisfied (Section 4.4 Paragraph 2 lines 3-8, "The feature maps are run through additional convolutional layers that predict confidence values for each feature location to determine whether the corresponding location in the input tensor contains an object of a certain class with a size close to a pre-defined height and width. Multiple pre-defined sizes can be used for each feature location" where the priority comes from the confidence values; Section 4.3 Paragraph 1 lines 1-3, "Due to the nature of automotive environments, exploiting the temporal aspect of the signal can provide benefits to detection quality as well as enable access to velocity information. To this end, and in order to capture the dynamics of the scene…" where it is connecting the temporal aspects (i.e., velocity) of the object with its location (environment/scene); Section 5.3 Contribution of the Doppler Dimension to Detection Paragraph 1 lines 1-2, "With the Doppler dimension, the signal has characteristics which may help the detection of objects. For example, objects close to each other in physical space might be separated in the Doppler dimension." Paragraph 2, "To see whether our solution with Doppler helps detection, we compare our models based on mAP scores…Because the difference in mAP is relatively small compared to the standard deviation, we used bootstrap hypothesis testing suggested by Efron and Tibshirani [3] to estimate the confidence. The hypothesis is that the RAD model achieves a significantly better mAP score.
Based on 10000 redraws the obtained p-value was 0.0031, which expresses high confidence that the RAD model does help with detection").

Major does not disclose wherein the location data includes at least one acceleration associated with each of the objects. Major states that it uses the velocity in conjunction with the object location to get a sense of the dynamic scene, and it also states that it uses the velocity to help detect objects, but it does not explicitly state that it uses the confidence levels of the velocity in the same way as it does with the height and width of the object when making a detection. Using the velocity to help detect an object via its confidence level in this way would be advantageous in that it facilitates a more accurate determination and helps to differentiate two objects if their velocities are similar. As such, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify using the velocity to detect objects by adding the use of confidence levels to get more accurate results.

Blaes discloses wherein the location data includes at least one acceleration associated with each of the objects (Column 3 lines 13-18, "In some cases, the output of the deep neural network may be an object bounding box, an occupancy value, and/or state of the object (e.g., trajectory, acceleration, speed, size, current physical position, object classification, instance segmentation, etc.)").

Major discusses identifying or tracking individual objects in the driving environment of a vehicle for autonomous driving, and it discusses using velocity and other attributes for detection, but it does not discuss the use of acceleration. In a potentially chaotic driving environment, an ADAS system recognizing and using acceleration would be advantageous for improved distinguishing and tracking of objects on the road.
If the system could determine that two vehicles have, in the moment, similar velocities but different accelerations, it would help to distinguish between them if they are close to each other in physical space. Additionally, if the ADAS system knew the velocity of an object and the change in its velocity, it would be able to better predict where an object will be and therefore would be better able to respond to that object to avoid a collision. It could recognize a driver speeding up and decide not to change lanes.

As such, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Major with Blaes to add the use of acceleration so that the ADAS system can make more accurate decisions and better distinguish between objects/vehicles.

Regarding claim 2, the combination of Major and Blaes discloses the environmental detection system according to claim 1. Major further discloses wherein the neural network is conditioned by having been trained with synthetic training data which are compatible with the at least one predetermined physical relationship (Section 5.2 Paragraphs 3-4, "The range-azimuth and range-Doppler inputs are normalized by making each range-row zero-mean and unit-variance, using statistics computed over the training set. All of the discussed models used the same set of prior box shapes, 8 in total. Widths: 1.9m, 3.5m. Lengths: 4.21m, 6.1m, 11m, 18m. As all of the input images spanned the same space, we defined the ground truth and prior boxes in meters and then mapped them to [0, 1] X [0, 1] for the loss function. The input to the SSD head was a single feature map with a size of 64×64, corresponding to a 47m x 47m area, so prior boxes were spaced approximately 73cm from each other" where the boxes are objects in a bigger environment).
Regarding claim 4, the combination of Major and Blaes discloses the environmental detection system according to claim 1. Major further discloses wherein the neural network is conditioned by including, between two layers, a filter that converts a first set of intermediate values into a second set of intermediate values according to the at least one predetermined physical relationship (Section 4.1.1 Range-Azimuth Model Paragraph 1 lines 1-4, "The feature extractor used for our Range-Azimuth (RA) model is motivated by the Feature Pyramid Network (FPN) architecture by Lin et al. [14]. It consists of multiple consecutive convolutional layers" where, when the input goes through multiple layers before reaching the output, there are multiple intermediate values, and a convolution layer is a filter that transforms the data).

Regarding claim 6, Major discloses a radar-based environmental detection system for motor vehicles (Introduction Paragraph 2 line 2, "A typical automotive radar"), comprising: at least one radar sensor configured to provide location data regarding objects in an environment of the motor vehicle (Introduction Paragraph 3 lines 1-7, "The radar data is a 3D tensor, with the first two dimensions making up range-azimuth (polar) space, and the third Doppler dimension which contains velocity information. This tensor is typically processed using a Constant False-Alarm Rate (CFAR) algorithm to get a sparse 2D point-cloud which separates the targets of interest from the surrounding clutter"); and a neural network configured to convert the location data into an environmental model which represents spatio-temporal object data of the objects (Introduction Paragraph 1 line 3 - Paragraph 2 line 2, "A variety of sensors such as LiDAR, short-range radars, long-range radars…have been used for perception. The most prevalent sensor to provide detail-rich 3D information in automotive environments is the LiDAR.
Radar presents a low-cost alternative to LiDAR as a range sensor"; Section 4.1 Paragraph 1 line 1 - Paragraph 3 line 1, "As described in Section 2, the radar tensor is three dimensional: it has two spatial dimensions, range and azimuth, accompanied by a third, Doppler dimension, which represents the velocity of objects relative to the radar, up to a certain aliasing velocity. We propose two solutions to process the full 3D tensor. The first approach is to remove the Doppler dimension by summing the signal power over that dimension. The input of the model is a range-azimuth tensor, hence we call this solution the Range-Azimuth (RA) model. The second approach is to also provide range-Doppler and azimuth-Doppler tensors as input" where the neural network is learning from environmental data in relation to the radar, which is tantamount to an environmental model), wherein the neural network is conditioned to give priority to outputting environmental models in which at least one predetermined physical relationship between the location data and the spatio-temporal object data is satisfied (Section 4.4 Paragraph 2 lines 3-8, "The feature maps are run through additional convolutional layers that predict confidence values for each feature location to determine whether the corresponding location in the input tensor contains an object of a certain class with a size close to a pre-defined height and width. Multiple pre-defined sizes can be used for each feature location" where the priority comes from the confidence values; Section 4.3 Paragraph 1 lines 1-3, "Due to the nature of automotive environments, exploiting the temporal aspect of the signal can provide benefits to detection quality as well as enable access to velocity information.
To this end, and in order to capture the dynamics of the scene…" where it is connecting the temporal aspects (i.e., velocity) of the object with its location (environment/scene); Section 5.3 Contribution of the Doppler Dimension to Detection Paragraph 1 lines 1-2, "With the Doppler dimension, the signal has characteristics which may help the detection of objects. For example, objects close to each other in physical space might be separated in the Doppler dimension." Paragraph 2, "To see whether our solution with Doppler helps detection, we compare our models based on mAP scores…Because the difference in mAP is relatively small compared to the standard deviation, we used bootstrap hypothesis testing suggested by Efron and Tibshirani [3] to estimate the confidence. The hypothesis is that the RAD model achieves a significantly better mAP score. Based on 10000 redraws the obtained p-value was 0.0031, which expresses high confidence that the RAD model does help with detection").

Major does not disclose wherein the spatio-temporal object data includes at least one longitudinal velocity and at least one lateral velocity associated with each of the objects. Major states that it uses the velocity in conjunction with the object location to get a sense of the dynamic scene, and it also states that it uses the velocity to help detect objects, but it does not explicitly state that it uses the confidence levels of the velocity in the same way as it does with the height and width of the object when making a detection. Using the velocity to help detect an object via its confidence level in this way would be advantageous in that it facilitates a more accurate determination and helps to differentiate two objects if their velocities are similar.
As such, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify using the velocity to detect objects by adding the use of confidence levels to get more accurate results.

Blaes discloses wherein the spatio-temporal object data includes at least one longitudinal velocity and at least one lateral velocity associated with each of the objects (Column 22 lines 52-57, "The machine learning component 720 may be trained to output two-dimensional (or three-dimensional) velocity data for objects in response to radar data input to the machine learning model. The velocity data may be a two-dimensional velocity or three-dimensional velocity that is determined using a global frame of reference").

Major discusses identifying or tracking individual objects in the driving environment of a vehicle for autonomous driving, and it discusses using velocity and other attributes for detection, but it does not discuss the details of the velocity or whether it is using lateral or longitudinal velocity. It would be advantageous to include lateral and longitudinal velocity in the machine learning algorithm to facilitate distinguishing between vehicles/objects close to each other in physical space and to better track and predict where an object will be with the ADAS system. For example, if the ADAS system can only determine a radial velocity, then it won't be able to decide whether another vehicle is changing lanes. As such, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Major with Blaes to include longitudinal and lateral velocity in its velocity determination to facilitate better tracking and distinguishing of objects.
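The rejections of the independent claims lean on Major's quoted bootstrap hypothesis test (10,000 redraws, p-value 0.0031) for whether the Doppler dimension improves mAP. For readers unfamiliar with the technique, here is a minimal sketch of a pooled one-sided bootstrap test; the per-run mAP scores and the pooling scheme below are hypothetical, not taken from Major:

```python
import numpy as np

def bootstrap_p_value(scores_a, scores_b, n_redraws=10_000, seed=0):
    """One-sided bootstrap test of the hypothesis that model A's mean
    score (e.g. mAP) is significantly higher than model B's.

    Pools both samples under the null hypothesis of "no difference",
    redraws with replacement, and counts how often a resampled
    difference is at least as large as the observed one.
    """
    rng = np.random.default_rng(seed)
    a = np.asarray(scores_a, dtype=float)
    b = np.asarray(scores_b, dtype=float)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_redraws):
        resample = rng.choice(pooled, size=pooled.size, replace=True)
        diff = resample[: a.size].mean() - resample[a.size :].mean()
        if diff >= observed:
            count += 1
    return count / n_redraws

# Hypothetical per-run mAP scores for a model with and without Doppler.
rad_model = [0.74, 0.76, 0.75, 0.77, 0.74, 0.76]
ra_model = [0.70, 0.71, 0.69, 0.72, 0.70, 0.71]
p = bootstrap_p_value(rad_model, ra_model)
# A small p-value (like Major's 0.0031) supports the hypothesis that
# the Doppler-aware model genuinely detects better.
```

A small p-value means a difference of means this large rarely arises from resampling the pooled scores, which is the sense in which Major's p = 0.0031 "expresses high confidence."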
Regarding claim 7, the combination of Major and Blaes discloses the environmental detection system according to claim 6. Major further discloses wherein the neural network is conditioned by having been trained with synthetic training data which are compatible with the at least one predetermined physical relationship (Section 5.2 Paragraphs 3-4, "The range-azimuth and range-Doppler inputs are normalized by making each range-row zero-mean and unit-variance, using statistics computed over the training set. All of the discussed models used the same set of prior box shapes, 8 in total. Widths: 1.9m, 3.5m. Lengths: 4.21m, 6.1m, 11m, 18m. As all of the input images spanned the same space, we defined the ground truth and prior boxes in meters and then mapped them to [0, 1] X [0, 1] for the loss function. The input to the SSD head was a single feature map with a size of 64×64, corresponding to a 47m x 47m area, so prior boxes were spaced approximately 73cm from each other" where the boxes are objects in a bigger environment).

Regarding claim 9, the combination of Major and Blaes discloses the environmental detection system according to claim 6. Major further discloses wherein the neural network is conditioned by including, between two layers, a filter that converts a first set of intermediate values into a second set of intermediate values according to the at least one predetermined physical relationship (Section 4.1.1 Range-Azimuth Model Paragraph 1 lines 1-4, "The feature extractor used for our Range-Azimuth (RA) model is motivated by the Feature Pyramid Network (FPN) architecture by Lin et al. [14]. It consists of multiple consecutive convolutional layers" where, when the input goes through multiple layers before reaching the output, there are multiple intermediate values, and a convolution layer is a filter that transforms the data).
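The quoted introduction of Major also states that the radar tensor "is typically processed using a Constant False-Alarm Rate (CFAR) algorithm to get a sparse 2D point-cloud." As background only, a minimal one-dimensional cell-averaging CFAR sketch; the function name, window sizes, and synthetic range profile are illustrative assumptions, not taken from Major:

```python
import numpy as np

def ca_cfar_1d(power, num_train=8, num_guard=2, rate_fa=1e-3):
    """Cell-averaging CFAR along one range profile.

    For each cell under test (CUT), estimate the noise level from the
    surrounding training cells (skipping guard cells next to the CUT)
    and declare a detection when the CUT exceeds the scaled estimate.
    """
    n = len(power)
    num_side = num_train // 2
    # Threshold scaling factor for CA-CFAR under an exponential
    # (square-law detected) noise assumption.
    alpha = num_train * (rate_fa ** (-1.0 / num_train) - 1.0)
    detections = np.zeros(n, dtype=bool)
    for cut in range(num_side + num_guard, n - num_side - num_guard):
        lead = power[cut - num_guard - num_side : cut - num_guard]
        lag = power[cut + num_guard + 1 : cut + num_guard + 1 + num_side]
        noise = np.concatenate([lead, lag]).mean()
        detections[cut] = power[cut] > alpha * noise
    return detections
```

Running this over a flat noise floor with a single strong return yields exactly one detection at the target cell, which is the "sparse point-cloud" behavior Major contrasts with feeding the full tensor to a neural network.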
Regarding claim 11, Major discloses a radar-based environmental detection system for motor vehicles (Introduction Paragraph 2 line 2, "A typical automotive radar"), comprising: at least one radar sensor configured to provide location data regarding objects in an environment of the motor vehicle (Introduction Paragraph 3 lines 1-7, "The radar data is a 3D tensor, with the first two dimensions making up range-azimuth (polar) space, and the third Doppler dimension which contains velocity information. This tensor is typically processed using a Constant False-Alarm Rate (CFAR) algorithm to get a sparse 2D point-cloud which separates the targets of interest from the surrounding clutter"); and a neural network configured to convert the location data into an environmental model which represents spatio-temporal object data of the objects (Introduction Paragraph 1 line 3 - Paragraph 2 line 2, "A variety of sensors such as LiDAR, short-range radars, long-range radars…have been used for perception. The most prevalent sensor to provide detail-rich 3D information in automotive environments is the LiDAR. Radar presents a low-cost alternative to LiDAR as a range sensor"; Section 4.1 Paragraph 1 line 1 - Paragraph 3 line 1, "As described in Section 2, the radar tensor is three dimensional: it has two spatial dimensions, range and azimuth, accompanied by a third, Doppler dimension, which represents the velocity of objects relative to the radar, up to a certain aliasing velocity. We propose two solutions to process the full 3D tensor. The first approach is to remove the Doppler dimension by summing the signal power over that dimension. The input of the model is a range-azimuth tensor, hence we call this solution the Range-Azimuth (RA) model.
The second approach is to also provide range-Doppler and azimuth-Doppler tensors as input" where the neural network is learning from environmental data in relation to the radar, which is tantamount to an environmental model), wherein the neural network is conditioned to give priority to outputting environmental models in which at least one predetermined physical relationship between the location data and the spatio-temporal object data is satisfied (Section 4.4 Paragraph 2 lines 3-8, "The feature maps are run through additional convolutional layers that predict confidence values for each feature location to determine whether the corresponding location in the input tensor contains an object of a certain class with a size close to a pre-defined height and width. Multiple pre-defined sizes can be used for each feature location" where the priority comes from the confidence values; Section 4.3 Paragraph 1 lines 1-3, "Due to the nature of automotive environments, exploiting the temporal aspect of the signal can provide benefits to detection quality as well as enable access to velocity information. To this end, and in order to capture the dynamics of the scene…" where it is connecting the temporal aspects (i.e., velocity) of the object with its location (environment/scene); Section 5.3 Contribution of the Doppler Dimension to Detection Paragraph 1 lines 1-2, "With the Doppler dimension, the signal has characteristics which may help the detection of objects. For example, objects close to each other in physical space might be separated in the Doppler dimension." Paragraph 2, "To see whether our solution with Doppler helps detection, we compare our models based on mAP scores…Because the difference in mAP is relatively small compared to the standard deviation, we used bootstrap hypothesis testing suggested by Efron and Tibshirani [3] to estimate the confidence. The hypothesis is that the RAD model achieves a significantly better mAP score.
Based on 10000 redraws the obtained p-value was 0.0031, which expresses high confidence that the RAD model does help with detection").

Major does not disclose wherein the location data includes at least one acceleration associated with each of the objects, and wherein the spatio-temporal object data includes at least one longitudinal velocity and at least one lateral velocity associated with each of the objects. Major states that it uses the velocity in conjunction with the object location to get a sense of the dynamic scene, and it also states that it uses the velocity to help detect objects, but it does not explicitly state that it uses the confidence levels of the velocity in the same way as it does with the height and width of the object when making a detection. Using the velocity to help detect an object via its confidence level in this way would be advantageous in that it facilitates a more accurate determination and helps to differentiate two objects if their velocities are similar.

As such, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify using the velocity to detect objects by adding the use of confidence levels to get more accurate results.
Blaes discloses wherein the location data includes at least one acceleration associated with each of the objects (Column 3 lines 13-18, "In some cases, the output of the deep neural network may be an object bounding box, an occupancy value, and/or state of the object (e.g., trajectory, acceleration, speed, size, current physical position, object classification, instance segmentation, etc.)"), and wherein the spatio-temporal object data includes at least one longitudinal velocity and at least one lateral velocity associated with each of the objects (Column 22 lines 52-57, "The machine learning component 720 may be trained to output two-dimensional (or three-dimensional) velocity data for objects in response to radar data input to the machine learning model. The velocity data may be a two-dimensional velocity or three-dimensional velocity that is determined using a global frame of reference").

Major discusses identifying or tracking individual objects in the driving environment of a vehicle for autonomous driving, and it discusses using velocity and other attributes for detection, but it does not discuss the use of acceleration, the details of the velocity, or whether it is using lateral or longitudinal velocity. In a potentially chaotic driving environment, an ADAS system recognizing and using acceleration would be advantageous for improved distinguishing and tracking of objects on the road. If the system could determine that two vehicles have, in the moment, similar velocities but different accelerations, it would help to distinguish between them if they are close to each other in physical space. Additionally, if the ADAS system knew the velocity of an object and the change in its velocity, it would be able to better predict where an object will be and therefore would be better able to respond to that object to avoid a collision. It could recognize a driver speeding up and decide not to change lanes.
As such, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Major with Blaes to add the use of acceleration so that the ADAS system can make more accurate decisions and better distinguish between objects/vehicles. Additionally, it would be advantageous to include lateral and longitudinal velocity in the machine learning algorithm to facilitate distinguishing between vehicles/objects close to each other in physical space and to better track and predict where an object will be with the ADAS system. For example, if the ADAS system can only determine a radial velocity, then it won't be able to decide whether another vehicle is changing lanes. As such, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Major with Blaes to include longitudinal and lateral velocity in its velocity determination to facilitate better tracking and distinguishing of objects.

Regarding claim 12, the combination of Major and Blaes discloses the environmental detection system according to claim 11. Major further discloses wherein the neural network is conditioned by having been trained with synthetic training data which are compatible with the at least one predetermined physical relationship (Section 5.2 Paragraphs 3-4, "The range-azimuth and range-Doppler inputs are normalized by making each range-row zero-mean and unit-variance, using statistics computed over the training set. All of the discussed models used the same set of prior box shapes, 8 in total. Widths: 1.9m, 3.5m. Lengths: 4.21m, 6.1m, 11m, 18m. As all of the input images spanned the same space, we defined the ground truth and prior boxes in meters and then mapped them to [0, 1] X [0, 1] for the loss function.
The input to the SSD head was a single feature map with a size of 64×64, corresponding to a 47m x 47m area, so prior boxes were spaced approximately 73cm from each other" where the boxes are objects in a bigger environment).

Regarding claim 14, the combination of Major and Blaes discloses the environmental detection system according to claim 11. Major further discloses wherein the neural network is conditioned by including, between two layers, a filter that converts a first set of intermediate values into a second set of intermediate values according to the at least one predetermined physical relationship (Section 4.1.1 Range-Azimuth Model Paragraph 1 lines 1-4, "The feature extractor used for our Range-Azimuth (RA) model is motivated by the Feature Pyramid Network (FPN) architecture by Lin et al. [14]. It consists of multiple consecutive convolutional layers" where, when the input goes through multiple layers before reaching the output, there are multiple intermediate values, and a convolution layer is a filter that transforms the data).

Claims 3, 8, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Major (2019), Vehicle Detection With Automotive Radar Using Deep Learning on Range-Azimuth-Doppler Tensors, in view of Blaes (US 12221115 B1), further in view of Wei (2016), SSD: Single Shot MultiBox Detector [both cited from attached pdf].

Regarding claim 3, the combination of Major and Blaes discloses the environmental detection system according to claim 1. Major discloses wherein the network is conditioned by the fact that, in training the network for determining weights of the neural network (Equation 1; Equation 3; Section 4.4 Paragraph 5 line 8, "Here, α_t is a class-dependent weighting factor"), a loss function is used (Equation 3). Major does not explicitly disclose the variables of a loss function that contains a physical term which minimizes a deviation from the at least one predetermined physical relationship.
Wei discloses the variables of a loss function that contains a physical term which minimizes a deviation from the at least one predetermined physical relationship (Equation 1; Section Training objective lines 6-9, "The localization loss is a Smooth L1 loss [6] between the predicted box (l) and the ground truth box (g) parameters. Similar to Faster R-CNN [2], we regress to offsets for the center (cx, cy) of the default bounding box (d) and for its width (w) and height (h)." where L_loc is a function of size, and loss functions are used to minimize deviations).

Major and Wei are both considered analogous art, as they both concern training a model based on sensors that can be a radar sensor. Major uses a loss function drawn from Wei but does not disclose its details, such as L_loc; Wei is cited to show explicitly that the L_loc loss term is a function of the object's size in the model. Having a loss function which depends on the length, width, and position ensures that the model accurately models those values, as it is specifically minimizing the deviations of those parameters. Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Major with Wei, ensuring that the loss function uses length, width, and position so that the model can be accurate.

Regarding claim 8, the combination of Major and Blaes discloses the environmental detection system according to claim 6. Major discloses wherein the network is conditioned by the fact that, in training the network for determining weights of the neural network (Equation 1; Equation 3; Section 4.4 Paragraph 5 line 8, "Here, α_t is a class-dependent weighting factor"), a loss function is used (Equation 3). Major does not explicitly disclose the variables of a loss function that contains a physical term which minimizes a deviation from the at least one predetermined physical relationship.
Wei discloses the variables of a loss function that contains a physical term which minimizes a deviation from the at least one predetermined physical relationship (Equation 1; Section Training objective, lines 6-9, "The localization loss is a Smooth L1 loss [6] between the predicted box (l) and the ground truth box (g) parameters. Similar to Faster R-CNN [2], we regress to offsets for the center (cx, cy) of the default bounding box (d) and for its width (w) and height (h).", where L_loc is a function of size and loss functions are used to minimize deviations). Major and Wei are both considered analogous art, as they both concern training a model based on sensors that can be a radar sensor. Major discloses a loss function but does not provide enough detail about its variables; Wei is being added to explicitly show that the L_loc loss function is a function of size for the model. Having a loss function which depends on the length, width, and position ensures that the model accurately models those values, as it is specifically minimizing the deviations of those parameters. Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Major with Wei, ensuring that the loss function uses length, width, and position so that the model can be accurate.

Regarding claim 13, the combination of Major and Blaes discloses the environmental detection system according to claim 11. Major discloses wherein the network is conditioned by the fact that in training the network for determining weights of the neural network (Equation 1; Equation 3; Section 4.4, Paragraph 5, line 8, "Here, α_t is a class-dependent weighting factor"), a loss function is used (Equation 3). Major does not explicitly disclose the variables of a loss function that contains a physical term which minimizes a deviation from the at least one predetermined physical relationship.
Wei discloses the variables of a loss function that contains a physical term which minimizes a deviation from the at least one predetermined physical relationship (Equation 1; Section Training objective, lines 6-9, "The localization loss is a Smooth L1 loss [6] between the predicted box (l) and the ground truth box (g) parameters. Similar to Faster R-CNN [2], we regress to offsets for the center (cx, cy) of the default bounding box (d) and for its width (w) and height (h).", where L_loc is a function of size and loss functions are used to minimize deviations). Major and Wei are both considered analogous art, as they both concern training a model based on sensors that can be a radar sensor. Major discloses a loss function but does not provide enough detail about its variables; Wei is being added to explicitly show that the L_loc loss function is a function of size for the model. Having a loss function which depends on the length, width, and position ensures that the model accurately models those values, as it is specifically minimizing the deviations of those parameters. Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Major with Wei, ensuring that the loss function uses length, width, and position so that the model can be accurate.

Claims 5, 10, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Major (2019), Vehicle Detection With Automotive Radar Using Deep Learning on Range-Azimuth-Doppler Tensors, in view of Blaes (US 12221115 B1), further in view of Fireman (US 20090099862 A1).

Regarding claim 5, the combination of Major and Blaes discloses the environmental detection system according to claim 1. Major discloses the use of multiple layers (Section 4.1.1 Range-Azimuth Model, Paragraph 1, lines 1-4, “The feature extractor used for our Range-Azimuth (RA) model is motivated by the Feature Pyramid Network (FPN) architecture by Lin et al. [14].
It consists of multiple consecutive convolutional layers”). Major does not disclose wherein at least two hidden layers of the neural network are trained to convert a first set of intermediate values into a second set of intermediate values according to the at least one predetermined physical relationship.

Fireman discloses wherein at least two hidden layers of the neural network are trained to convert a first set of intermediate values into a second set of intermediate values according to the at least one predetermined physical relationship (Paragraph 0019, "According to one exemplary embodiment, the method may include where the (a) may include capturing the at least one aspect of the data, wherein the at least one aspect may include: at least one temporal duration… at least one location; at least one proximity between a plurality of resources; at least one change of location by a resource; at least one rate of change of the location; at least one movement from a first location to a second location of a resource"; Paragraph 0189, "The units of the neural network may generally be categorized into three types of different groups (layers), according to their functions, as illustrated in FIG. 8. A first layer, input layer 804, may be assigned to accept a set of data representing an input pattern, a second layer, output layer 808, may be assigned to provide a set of data representing an output pattern, and an arbitrary number of intermediate layers, hidden layers 806, and may convert the input pattern to the output pattern", where, if the input goes through multiple layers before reaching the output, there are multiple intermediate values). Major and Fireman are both considered analogous art, as they both concern training a model based on sensors that can be a radar sensor. Major discloses convolutional layers between the input and output layers but does not disclose that the layers are hidden.
Hidden layers are useful for modelling non-linear relationships between parameters and allow the network to handle more complex situations. As such, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Major with Fireman to include hidden layers so that the neural network can model more complex environments.

Regarding claim 10, the combination of Major and Blaes discloses the environmental detection system according to claim 6. Major discloses the use of multiple layers (Section 4.1.1 Range-Azimuth Model, Paragraph 1, lines 1-4, “The feature extractor used for our Range-Azimuth (RA) model is motivated by the Feature Pyramid Network (FPN) architecture by Lin et al. [14]. It consists of multiple consecutive convolutional layers”). Major does not disclose wherein at least two hidden layers of the neural network are trained to convert a first set of intermediate values into a second set of intermediate values according to the at least one predetermined physical relationship. Fireman discloses wherein at least two hidden layers of the neural network are trained to convert a first set of intermediate values into a second set of intermediate values according to the at least one predetermined physical relationship (Paragraph 0019, "According to one exemplary embodiment, the method may include where the (a) may include capturing the at least one aspect of the data, wherein the at least one aspect may include: at least one temporal duration… at least one location; at least one proximity between a plurality of resources; at least one change of location by a resource; at least one rate of change of the location; at least one movement from a first location to a second location of a resource"; Paragraph 0189, "The units of the neural network may generally be categorized into three types of different groups (layers), according to their functions, as illustrated in FIG. 8.
A first layer, input layer 804, may be assigned to accept a set of data representing an input pattern, a second layer, output layer 808, may be assigned to provide a set of data representing an output pattern, and an arbitrary number of intermediate layers, hidden layers 806, and may convert the input pattern to the output pattern", where, if the input goes through multiple layers before reaching the output, there are multiple intermediate values). Major and Fireman are both considered analogous art, as they both concern training a model based on sensors that can be a radar sensor. Major discloses convolutional layers between the input and output layers but does not disclose that the layers are hidden. Hidden layers are useful for modelling non-linear relationships between parameters and allow the network to handle more complex situations. As such, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Major with Fireman to include hidden layers so that the neural network can model more complex environments.

Regarding claim 15, the combination of Major and Blaes discloses the environmental detection system according to claim 11. Major discloses the use of multiple layers (Section 4.1.1 Range-Azimuth Model, Paragraph 1, lines 1-4, “The feature extractor used for our Range-Azimuth (RA) model is motivated by the Feature Pyramid Network (FPN) architecture by Lin et al. [14]. It consists of multiple consecutive convolutional layers”). Major does not disclose wherein at least two hidden layers of the neural network are trained to convert a first set of intermediate values into a second set of intermediate values according to the at least one predetermined physical relationship.
Fireman discloses wherein at least two hidden layers of the neural network are trained to convert a first set of intermediate values into a second set of intermediate values according to the at least one predetermined physical relationship (Paragraph 0019, "According to one exemplary embodiment, the method may include where the (a) may include capturing the at least one aspect of the data, wherein the at least one aspect may include: at least one temporal duration… at least one location; at least one proximity between a plurality of resources; at least one change of location by a resource; at least one rate of change of the location; at least one movement from a first location to a second location of a resource"; Paragraph 0189, "The units of the neural network may generally be categorized into three types of different groups (layers), according to their functions, as illustrated in FIG. 8. A first layer, input layer 804, may be assigned to accept a set of data representing an input pattern, a second layer, output layer 808, may be assigned to provide a set of data representing an output pattern, and an arbitrary number of intermediate layers, hidden layers 806, and may convert the input pattern to the output pattern", where, if the input goes through multiple layers before reaching the output, there are multiple intermediate values). Major and Fireman are both considered analogous art, as they both concern training a model based on sensors that can be a radar sensor. Major discloses convolutional layers between the input and output layers but does not disclose that the layers are hidden. Hidden layers are useful for modelling non-linear relationships between parameters and allow the network to handle more complex situations. As such, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Major with Fireman to include hidden layers so that the neural network can model more complex environments.
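The localization loss the rejection maps to the "physical term which minimizes a deviation" limitation can be sketched concretely. The snippet below is a minimal illustration of the SSD-style Smooth L1 box regression quoted from Wei's Training objective section, not code from any cited reference; the box tuples (cx, cy, w, h), helper names, and sample numbers are assumptions chosen for clarity.

```python
import math

# Prior-box spacing implied by the Blaes citation: a 64x64 feature map over a
# 47 m x 47 m area places prior boxes 47 / 64 ~= 0.73 m (about 73 cm) apart.
PRIOR_BOX_SPACING_M = 47.0 / 64.0

def smooth_l1(x: float) -> float:
    """Smooth L1 (Fast R-CNN [6]) loss used by SSD for box regression."""
    return 0.5 * x * x if abs(x) < 1.0 else abs(x) - 0.5

def encode_offsets(gt, default):
    """SSD-style regression targets: center offsets normalized by the default
    box's width/height, plus log-space width and height ratios."""
    gcx, gcy, gw, gh = gt
    dcx, dcy, dw, dh = default
    return ((gcx - dcx) / dw, (gcy - dcy) / dh,
            math.log(gw / dw), math.log(gh / dh))

def localization_loss(pred_offsets, gt_box, default_box):
    """Sum of Smooth L1 over the four regressed parameters (cx, cy, w, h).
    Minimizing it drives predicted position and size toward ground truth."""
    targets = encode_offsets(gt_box, default_box)
    return sum(smooth_l1(p - t) for p, t in zip(pred_offsets, targets))
```

Because this loss is zero exactly when the predicted offsets match the encoded ground truth and grows with any deviation in center, width, or height, it illustrates why the rejection reads L_loc as a loss term that depends on position and size and "minimizes deviations" of those parameters.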
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Lu (US 20230410323 A1) is pertinent to the instant application as it discusses a vehicle with a radar that uses weights and a loss function with a neural network. Also, it discusses connecting the location of an object with its velocity.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PETER D DOZE whose telephone number is (571)272-0392. The examiner can normally be reached Monday-Friday 7:40am - 5:40pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vladimir Magloire, can be reached at (571) 270-5144.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PETER DAVON DOZE/
Examiner, Art Unit 3648

/VLADIMIR MAGLOIRE/
Supervisory Patent Examiner, Art Unit 3648

Prosecution Timeline

Aug 11, 2023
Application Filed
Jul 25, 2025
Non-Final Rejection — §103
Oct 15, 2025
Response Filed
Jan 02, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585007
RECURSIVE DETERMINISTIC MAXIMUM LIKELIHOOD ESTIMATION OF DIRECTION OF ARRIVAL IN AUTOMOTIVE RADAR SENSING
2y 5m to grant · Granted Mar 24, 2026
Patent 12571907
INVERSE SYNTHETIC APERTURE, MULTIBAND RADAR DETECTION OF HIDDEN OBJECTS WITH SPATIALLY STRUCTURED TRACKING OF OBJECT CARRIER
2y 5m to grant · Granted Mar 10, 2026
Patent 12553990
HYBRID CLUTTER SUPPRESSION USING ELECTRONICALLY SCANNED ANTENNAS
2y 5m to grant · Granted Feb 17, 2026
Patent 12541019
Co-Existence Operations Involving a Radar-Enabled User Equipment and Radio Network Nodes
2y 5m to grant · Granted Feb 03, 2026
Patent 12529780
METHOD AND DEVICE FOR DETERMINING THE RELATIVE PERMITTIVITY OF A MATERIAL USING A GROUND-PENETRATING RADAR
2y 5m to grant · Granted Jan 20, 2026
Based on this examiner's 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
82%
Grant Probability
91%
With Interview (+8.9%)
2y 11m
Median Time to Grant
Moderate
PTA Risk
Based on 22 resolved cases by this examiner. Grant probability derived from career allow rate.
