Prosecution Insights
Last updated: April 18, 2026
Application No. 18/394,943

ENHANCED RADAR OBJECT DETECTION VIA DYNAMIC AND STATIC DOPPLER SPECTRUM PARTITIONING

Final Rejection — §102, §103
Filed: Dec 22, 2023
Examiner: ABRAHAM, JOHN BISHOY SAM
Art Unit: 3646
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: GM Global Technology Operations LLC
OA Round: 2 (Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 71%, above average (5 granted / 7 resolved; +19.4% vs TC avg)
Interview Lift: +40.0%, a strong effect, measured across resolved cases with an interview
Typical Timeline: 2y 4m average prosecution (37 applications currently pending)
Career History: 44 total applications across all art units

Statute-Specific Performance

§101: 13.7% (-26.3% vs TC avg)
§103: 44.1% (+4.1% vs TC avg)
§102: 19.4% (-20.6% vs TC avg)
§112: 22.3% (-17.7% vs TC avg)
Tech Center average (chart baseline, estimated): 40.0% • Based on career data from 7 resolved cases
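All four deltas are consistent with a single estimated Tech Center baseline of 40.0% (e.g., 13.7 + 26.3 = 40.0 and 44.1 - 4.1 = 40.0). A quick check in Python, assuming that is how the chart derives its deltas:

```python
# Per-statute rates from the chart above.
rates = {"101": 13.7, "103": 44.1, "102": 19.4, "112": 22.3}
TC_AVG = 40.0  # single baseline implied by all four deltas
deltas = {k: round(r - TC_AVG, 1) for k, r in rates.items()}
print(deltas)  # {'101': -26.3, '103': 4.1, '102': -20.6, '112': -17.7}
```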

Office Action

Grounds: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments/amendments, see Page 10, lines 9-13, filed 01/13/2026, with respect to the 35 U.S.C. §101 rejection of claims 15-20 have been fully considered and are persuasive. The 35 U.S.C. §101 rejection of claims 15-20 has been withdrawn.

Applicant’s arguments/amendments, see Page 10, lines 3-5, filed 01/13/2026, with respect to the 35 U.S.C. §112(b) rejection of claim 6 have been fully considered and are persuasive. The 35 U.S.C. §112(b) rejection of claim 6 has been withdrawn.

Applicant’s arguments/amendments, see Page 11, lines 1-7, filed 01/13/2026, with respect to the 35 U.S.C. §102 rejection of claims 1-5, 7-10 and 14-20 have been considered but are not persuasive. Applicant fails to present an argument with respect to the 35 U.S.C. §102 rejection of claims 1-5, 7-10 and 14-20. While the Applicant correctly recounts on Page 9, lines 12-14 that the Examiner explained further search and consideration would be required to decide if their amendments overcome the prior art, the Applicant neglects to recount that when discussing the rejection on Page 11, lines 1-5. Applicant's arguments do not comply with 37 CFR 1.111(c) because they do not clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. Further, they do not show how the amendments avoid such references or objections. Additional reference is made to the 35 U.S.C. §102 rejection of claims 1 and 15 below.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-5, 7, 10, 14-17 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Schumann et al. (O. Schumann, J. Lombacher, M. Hahn, C. Wöhler and J. Dickmann, "Scene Understanding With Automotive Radar," in IEEE Transactions on Intelligent Vehicles, vol. 5, no. 2, pp. 188-203, June 2020), hereinafter Schumann.

[Figure 1 of Schumann was reproduced here as an image in the original action.]

Regarding claims 1 and 15, Schumann teaches a vehicle system and a method for enhancing object detection in a vehicle, the vehicle system comprising: at least one radar device configured to detect signals reflected by objects in a vicinity of the vehicle (Pg. 189, col. 2, lines 3-5; All our experiments are executed with a network of automotive radar sensors…); and a control module in communication with the at least one radar device, the control module configured to (Pg. 189, col. 2, lines 9-10; The sensors are connected via Ethernet to one central computer): generate a radar tensor based on the signals detected by the at least one radar device (Pg. 189, Fig. 1, grey box 1 (Raw 2D Point Cloud with RCS and vr) and grey box 2 (Coordinate Transformation Ego-Motion Compensation)), the radar tensor formed of range values, angle values, and doppler values of the objects in the vicinity of the vehicle (Pg. 189, col. 2, lines 17-20; Each of the measured radar targets p_i is equipped with the following properties: range, azimuth angle, Doppler velocity); partition the radar tensor into static reflections and dynamic reflections separate from the static reflections (Pg. 190, col. 1, lines 14-24; Because semantic information about dynamic objects should be available as soon as possible so that they can be utilized in tracking algorithms and path planning, accumulation of multiple measurement cycles is not desired. However, semantic information about static objects is needed on slower time scales, e.g. for localization, so that data can be accumulated over time. Hence, two branches emerge from this point in our pipeline: one for the classification of static objects and one for the classification of moving objects.); extract features from the static reflections with a first machine learning module (Fig. 1, static objects; Pg. 190, col. 1, lines 23-30; The first step in the “static” branch is the grid processing… The grid consists of several layers to capture different radar properties of the objects. This grid is then classified by a convolutional neural network); extract features from the dynamic reflections with a second machine learning module different than the first machine learning module (Fig. 1, Dynamic Objects; Pg. 190, col. 1, lines 37-33; In the second branch, the 2D point cloud with additional RCS and ego-motion compensated Doppler velocity values is directly used as input for a recurrent instance segmentation network.); generate a static feature map (Pg. 193, col. 2, Fig. 5, Semantic Radar Grid (lower right); Pg. 190, col. 1, lines 31-33; This results in a semantic radar grid of the same size as the radar grid in which a label is assigned for each pixel.) based on the extracted features from the static reflections (Pg. 190, col. 1, lines 28-31; The grid consists of several layers to capture different radar properties of the objects. This grid is then classified by a convolutional neural network, similar to those used in semantic segmentation of camera images.), the static feature map including static feature vectors (Pg. 192, col. 2, lines 5-11; Thus, the output (Examiner’s note: the output of the static machine learning module, see Fig. 4) can be interpreted as a vector of class probabilities…class probabilities as well as a class label are assigned to each cell) each associated with a range value of the radar tensor, an angle value of the radar tensor (Pg. 191, col. 1, lines 11-14; To make semantic segmentation of static objects possible, the quantities range, angle, Doppler velocity and amplitude that the radar sensor measures for each target have to be transformed into a more beneficial structure.), and a learnable feature identified by the first machine learning module (Pg. 191, Table 1, Static Radar data mapped to the six classes, and Pg. 193, col. 2, Fig. 5, Classifier Output (lower left figure); Examiner’s note: mapping the static radar data to one of the six classes is equivalent to the associated learnable feature of the static feature vector of the instant application; in paragraphs [0053]-[0056] of the instant application, learnable features are equated with extracted features, and examples of features are described through the parameters concatenated in the multilayer perceptron module 326. In the disclosure of Schumann, the static feature vector is six-dimensional and each value represents a probability that a radar data cell is identified according to each of the six classes that can be assigned to the static return: car, building, curbstone, pole, vegetation or other; see Pg. 193, col. 2, Fig. 5, Classifier Output (lower left figure)); generate a dynamic feature map (Pg. 199, Fig. 13, Predicted Instance Segmentation (Right)) based on the extracted features from the dynamic reflections (Pg. 196, col. 1, lines 11-20; In the point feature generation module, a high-dimensional feature vector is generated for each point p_i by using a cascade of three PointNet++ multi-scale-grouping (MSG) modules and matching feature propagation modules [11], [32]. Only the positions in car coordinates x^(cc) and y^(cc) as well as σ_i and v̂_i are used as input for the first MSG module. The output of this point feature generation module is a k_f-dimensional feature vector f_i for each input point p_i.), the dynamic feature map including dynamic feature vectors each associated with a range value of the radar tensor, an angle value of the radar tensor (Pg. 196, col. 1, lines 1-2; The network expects as input a point cloud with N_p points p_1, ..., p_{N_p}. Each point p_i is defined by four spatial coordinates… as well as the measured RCS value σ_i and the ego-motion compensated Doppler velocity), and a learnable feature identified by the second machine learning module (Pg. 200, col. 1, lines 5-9; The memory update module has therefore learned to encode the combined features of the input point cloud and the current memory states to new features which carry information of the previous times and the current input.); merge the extracted features from the static reflections in the static feature map and the extracted features from the dynamic reflections in the dynamic feature map; generate a vector map based on the merged extracted features in the static feature map and the dynamic feature map, the vector map including vectors each with at least one of the static feature vectors and at least one of the dynamic feature vectors (Pg. 201, col. 2, Fig. 16; Combined semantic radar point cloud with information from both the static and dynamic object branch.); and detect static objects and dynamic objects in the vicinity of the vehicle based on the vector map (Pg. 201, col. 2, lines 1-12; Predictions originate from both classification branches…the classifier correctly identified the moving car and the pedestrian group at the right hand side close to the poles.).

Regarding claim 2, Schumann discloses the vehicle system of claim 1, wherein the first machine learning module (Pg. 189, Fig. 1, caption, line 3; convolutional neural network) and the second machine learning module (Pg. 189, Fig. 1, caption, line 4; recurrent instance segmentation network) are separate deep neural networks.

Regarding claim 3, Schumann discloses the vehicle system of claim 1, wherein the control module is configured to determine Doppler bins for the static reflections (Pg. 191, col. 1, lines 15-17; Doppler information is solely used to remove measurements of moving objects from the received data as they contain no semantic information about static objects.) based on a velocity of the vehicle (Pg. 190, col. 1, lines 13-14; With the odometry data of the test vehicle the ego-motion compensated Doppler velocity is determined.).
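The Doppler-based split that this rejection maps between claim 1 and Schumann's pipeline is concrete enough to sketch. Below is a minimal, hypothetical numpy illustration of the point-cloud form Schumann describes: per-detection Doppler is ego-motion compensated, the static branch keeps only near-zero residual returns (Schumann's quoted 0.3 m/s threshold appears in the claim 4 discussion below), and the dynamic branch keeps every point. Function and variable names are illustrative, taken from neither the application nor the paper.

```python
import numpy as np

def partition_detections(rng, az, doppler, ego_speed, thresh=0.3):
    """Split radar detections into static/dynamic branches.

    rng, az, doppler: per-detection range (m), azimuth (rad), and
    raw radial Doppler velocity (m/s); ego_speed (m/s) from odometry.
    Assuming a range-rate sign convention, a stationary target seen
    at azimuth az returns roughly -ego_speed * cos(az), so adding
    that term back yields a compensated Doppler that is ~0 for
    static returns.
    """
    compensated = doppler + ego_speed * np.cos(az)
    static_mask = np.abs(compensated) <= thresh
    # Static branch: slow residual returns, later accumulated into a grid.
    static_points = np.stack([rng[static_mask], az[static_mask]], axis=1)
    # Dynamic branch: all points are kept (no threshold), since slow
    # parts of moving objects, e.g. a pedestrian's standing leg,
    # carry near-zero Doppler.
    dynamic_points = np.stack([rng, az, compensated], axis=1)
    return static_points, dynamic_points
```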
Regarding claim 4, Schumann discloses the vehicle system of claim 3, wherein the control module is configured to separate the determined Doppler bins for the static reflections from the radar tensor to partition the radar tensor into static reflections (Pg. 190, col. 1, lines 25-30; The first step in the “static” branch is the grid processing. Here, radar data are accumulated over time and as a result information about the shape and material properties become clearer. To reduce the blur from moving objects in the grid, all targets with velocity > 0.3 m/s are removed from the point cloud.) and dynamic reflections (Pg. 190, col. 1, lines 39-45; In the second branch, the 2D point cloud with additional RCS and ego-motion compensated Doppler velocity values is directly used as input for a recurrent instance segmentation network… Notice that no velocity threshold is used in this branch because also targets with a very small Doppler velocity can belong to a dynamic object (e.g. measurements of the standing leg of a moving pedestrian).).

Regarding claim 5, Schumann discloses the vehicle system of claim 3, wherein each Doppler bin associates one of the range values and one of the angle values at the velocity of the vehicle (Pg. 190, col. 1, lines 7-14; In the next processing step, the coordinates of all targets are transformed into a single car coordinate system x_i^(cc), y_i^(cc) with origin at the center of the rear axle of the respective test vehicle. In addition to this moving coordinate system, also Cartesian coordinates x_i^(gc), y_i^(gc) in a stationary global coordinate system are calculated. With the odometry data of the test vehicle the ego-motion compensated Doppler velocity v̂_i^(sc) is determined.).

Regarding claim 7, Schumann discloses the vehicle system of claim 5, further comprising a sensor configured to detect the velocity of the vehicle (Pg. 190, col. 2, lines 21-23; The position and motion state of the vehicle was estimated via dead reckoning by using on-board sensors like the speedometer of the wheels, the steering angle and the gyroscope.), wherein the control module is configured to receive one or more signals from the sensor indicative of the velocity of the vehicle.

Regarding claims 10 and 20, Schumann discloses the vehicle system of claim 1 and the method of claim 15, wherein merging the extracted features from the static reflections and the extracted features from the dynamic reflections includes concatenating the static feature map and the dynamic feature map (Pg. 201, Fig. 16; Combined semantic radar point cloud with information from both the static and dynamic object branch.).

Regarding claim 14, Schumann discloses a vehicle including the vehicle system of claim 1 (Pg. 189, col. 2, lines 3-10; All our experiments are executed with a network of automotive radar sensors whose measurement cycles are performed asynchronously. The sensors are mounted at different positions on test vehicles, see Fig. 2… The sensors are connected via Ethernet to one central computer).

Regarding claim 16, Schumann discloses the method of claim 15, wherein partitioning the radar tensor includes determining Doppler bins for the static reflections (Pg. 191, col. 1, lines 15-17; Doppler information is solely used to remove measurements of moving objects from the received data as they contain no semantic information about static objects.).
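Claims 3-5 and 16-17 recite the tensor-side version of the same idea: at a given vehicle velocity, each (range, angle) cell has a Doppler bin where a stationary reflector is expected, and partitioning means slicing those bins out of the range-angle-Doppler tensor. A hedged sketch of that reading, with hypothetical names and the same stationary-target relation assumed above:

```python
import numpy as np

def static_doppler_bins(angles, ego_speed, doppler_axis):
    """Per-angle index of the Doppler bin a stationary reflector
    should occupy at the given ego speed (assumed relation:
    expected radial velocity = -ego_speed * cos(angle))."""
    expected = -ego_speed * np.cos(angles)                       # (A,)
    return np.abs(doppler_axis[None, :] - expected[:, None]).argmin(axis=1)

def partition_tensor(tensor, angles, doppler_axis, ego_speed):
    """tensor: (R, A, D) range-angle-Doppler cube. Returns the
    static slice (R, A) taken at each angle's static Doppler bin,
    plus a dynamic cube with those bins zeroed out."""
    bins = static_doppler_bins(angles, ego_speed, doppler_axis)  # (A,)
    cols = np.arange(len(angles))
    static = tensor[:, cols, bins]     # (R, A) static reflections
    dynamic = tensor.copy()
    dynamic[:, cols, bins] = 0.0       # remove the static energy
    return static, dynamic
```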
Regarding claim 17, Schumann discloses the method of claim 16, wherein partitioning the radar tensor includes separating the determined Doppler bins for the static reflections from the radar tensor (Pg. 190, col. 1, lines 25-30; The first step in the “static” branch is the grid processing. Here, radar data are accumulated over time and as a result information about the shape and material properties become clearer. To reduce the blur from moving objects in the grid, all targets with velocity > 0.3 m/s are removed from the point cloud.).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 11-13 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Schumann et al. (O. Schumann, J. Lombacher, M. Hahn, C. Wöhler and J. Dickmann, "Scene Understanding With Automotive Radar," in IEEE Transactions on Intelligent Vehicles, vol. 5, no. 2, pp. 188-203, June 2020) in view of Tyagi (US PG Pub 20210293927), hereinafter Tyagi.

Regarding claim 11, Schumann discloses the vehicle system of claim 1. Schumann fails to teach further comprising a vehicle control module in communication with the control module, the vehicle control module configured to receive one or more signals from the control module indicative of the detected static objects and dynamic objects. However, Tyagi teaches a vehicle radar system (Fig. 1, radar system 102) further comprising a vehicle control module in communication with the control module, the vehicle control module configured to receive one or more signals from the control module indicative of the detected static objects and dynamic objects ([0057] In some cases, the classification data 312 is provided to one or more of the vehicle-based subsystems 202 (of FIG. 2) to enable the vehicle-based subsystem 202 to make a decision based on the object classes 314, as described above with respect to FIGS. 1 and 2.). Schumann and Tyagi are both considered to be analogous to the claimed invention because they are in the same field of endeavor of applied machine learning for automotive radar technology. A person of ordinary skill in the art would have had the technological capabilities to incorporate the functionality of the control module sending information on neighboring objects to the vehicle control module of Tyagi with the vehicle system of Schumann to yield a predictable result of providing environmental information to control systems to improve automobile safety. Additionally, providing such environmental information is on the critical path of developing safe autonomously driven vehicles, as noted by Schumann (Pg. 188, col. 1, lines 1-6; Autonomous driving is currently one of the greatest challenges in the car industry and at the same time autonomous vehicles play a central role in future mobility concepts. Perceiving and understanding the environment of an autonomous vehicle with a suitable sensor setup is one of the major topics in the field.).

Regarding claim 12, Schumann as modified by Tyagi discloses the vehicle system of claim 11. Schumann fails to teach wherein the vehicle control module is configured to control at least one vehicle control system based on the one or more signals. However, Tyagi teaches wherein the vehicle control module is configured to control at least one vehicle control system based on the one or more signals ([0029] The radar data provided by the radar system 102 enables the autonomous-driving system 206 to perform an appropriate action to avoid a collision with the object 108. Such actions can include emergency braking, changing lanes, adjusting the vehicle 104's speed, or a combination thereof. In some cases, the type of action is determined based on the object class associated with the detected object 108.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Schumann in view of Tyagi to integrate the signals provided by the control module to actuate vehicular control systems as taught by Tyagi, to gain the advantage of improved road safety as is currently done with modern automobiles which have advanced driver-assistance systems (ADAS); and also since it has been held that if a technique has been used to improve one device, and a person of ordinary skill in the art would recognize that it would improve similar devices in the same way, using the technique is obvious unless its actual application is beyond his or her skill (MPEP 2143).

Regarding claim 13, Schumann as modified by Tyagi discloses the vehicle system of claim 11. Schumann further discloses wherein the vehicle control module is configured to generate a map including the detected static objects and dynamic objects in the vicinity of the vehicle based on the one or more signals (Pg. 201, Fig. 16; Combined semantic radar point cloud with information from both the static and dynamic object branch. (The vehicle is located at x=2, y=0)).

Regarding claim 21, Schumann discloses the method of claim 15. Schumann fails to teach generating one or more control signals indicative of the detected static objects and dynamic objects; and controlling at least one vehicle control system based on the one or more control signals.
However, Tyagi teaches generating one or more control signals indicative of the detected static objects and dynamic objects ([0057] In some cases, the classification data 312 is provided to one or more of the vehicle-based subsystems 202 (of FIG. 2) to enable the vehicle-based subsystem 202 to make a decision based on the object classes 314, as described above with respect to FIGS. 1 and 2.); and controlling at least one vehicle control system based on the one or more control signals ([0029] The radar data provided by the radar system 102 enables the autonomous-driving system 206 to perform an appropriate action to avoid a collision with the object 108. Such actions can include emergency braking, changing lanes, adjusting the vehicle 104's speed, or a combination thereof. In some cases, the type of action is determined based on the object class associated with the detected object 108.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Schumann in view of Tyagi to integrate the signals provided by the control module to actuate vehicular control systems as taught by Tyagi, to gain the advantage of improved road safety as is currently done with modern automobiles which have advanced driver-assistance systems (ADAS); and also since it has been held that if a technique has been used to improve one device, and a person of ordinary skill in the art would recognize that it would improve similar devices in the same way, using the technique is obvious unless its actual application is beyond his or her skill (MPEP 2143).

Claim(s) 22 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Schumann et al. (O. Schumann, J. Lombacher, M. Hahn, C. Wöhler and J. Dickmann, "Scene Understanding With Automotive Radar," in IEEE Transactions on Intelligent Vehicles, vol. 5, no. 2, pp. 188-203, June 2020) in view of Bonaccorso (Bonaccorso, Giuseppe. Mastering Machine Learning Algorithms (2nd Edition), Chapter 19: Deep Convolutional Networks, pp. 557-586, Packt Publishing, 2020).

Regarding claims 22 and 24, Schumann discloses the vehicle system of claim 1 and the method of claim 15; accordingly, the rejections of claims 1 and 15 above are incorporated. Schumann fails to disclose that the control module is configured to apply layers of 1×1 convolutions to the static feature map and the dynamic feature map to generate the vector map based on the merged extracted features in the static feature map and the dynamic feature map. Bonaccorso teaches that it is known in the art to apply layers of 1×1 convolutions to the different feature maps to generate a vector map based on the merged extracted features in the different feature maps (Page 569, lines 12-15; As the output has still p feature maps and we need to output q channels, the process employs a trick: processing each feature map with q 1×1 kernels (in this way, the output will have q layers and the same dimensions)). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply 1×1 convolutions to the static and dynamic feature maps to generate the vector map of combined features as taught by Bonaccorso, since such a modification would improve memory consumption and model training, as noted by Bonaccorso (Page 569, lines 19-20; this approach is extremely effective in optimizing the training and prediction processes, as well as the memory consumption in any scenario.).
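The claim 22/24 limitation, concatenating the static and dynamic feature maps and applying layers of 1×1 convolutions to produce the vector map, is a standard channel-mixing pattern. A minimal PyTorch sketch under assumed shapes; the module name and channel counts are hypothetical, taken from neither the application nor Bonaccorso:

```python
import torch
import torch.nn as nn

class FeatureMapMerger(nn.Module):
    """Concatenate per-cell static and dynamic feature maps
    (claim 10's merge) and mix channels with 1x1 convolutions
    (the claim 22/24 limitation)."""

    def __init__(self, static_ch: int, dynamic_ch: int, out_ch: int):
        super().__init__()
        self.mix = nn.Sequential(
            nn.Conv2d(static_ch + dynamic_ch, out_ch, kernel_size=1),
            nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, kernel_size=1),
        )

    def forward(self, static_map, dynamic_map):
        # Both maps share the (range, angle) grid: shape (B, C, R, A).
        merged = torch.cat([static_map, dynamic_map], dim=1)
        return self.mix(merged)  # vector map: (B, out_ch, R, A)

# Usage: merge a 6-channel static map with an 8-channel dynamic map.
merger = FeatureMapMerger(static_ch=6, dynamic_ch=8, out_ch=16)
vector_map = merger(torch.randn(1, 6, 64, 32), torch.randn(1, 8, 64, 32))
```

A 1×1 kernel mixes channels without touching the spatial grid, which is the efficiency argument Bonaccorso's quoted passage makes.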
Claim(s) 23 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Schumann et al. (O. Schumann, J. Lombacher, M. Hahn, C. Wöhler and J. Dickmann, "Scene Understanding With Automotive Radar," in IEEE Transactions on Intelligent Vehicles, vol. 5, no. 2, pp. 188-203, June 2020) in view of Blaes (US Pat. 12,221,115, filed 04/29/2022).

Regarding claims 23 and 25, Schumann discloses the vehicle system of claim 1 and the method of claim 15; accordingly, the rejections of claims 1 and 15 above are incorporated. Schumann further discloses generating a bounding box for each detected static object (Pg. 190, col. 2, lines 40-43; The data were annotated in occupancy grids using bounding box labeling. That is, a box was created around each visible object in the occupancy grid map and a class label was assigned to this box). Schumann fails to disclose wherein the control module is configured to: generate a bounding box for each detected static object and each dynamic object and parameters associated with each bounding box, the parameters including a position, a shape, an orientation, and a speed; and output one or more signals based on the bounding box for each detected static object and each dynamic object for controlling the vehicle. However, Blaes teaches systems and methods for vehicle radar-based perception (Abstract; A vehicle may use a perception system to capture data about an environment proximate to the vehicle. The perception system may output the data about the environment to a system configured to determine positions of objects relative to the perception system over time.) wherein the control module is configured to: generate a bounding box for each detected static object and each dynamic object and parameters associated with each bounding box, the parameters including a position, a shape, an orientation, and a speed (col. 3, lines 14-18; the output of the deep neural network may be an object bounding box, an occupancy value, and/or state of the object (e.g., trajectory, acceleration, speed, size, current physical position, object classification, instance segmentation, etc.).); and output one or more signals based on the bounding box for each detected static object and each dynamic object for controlling the vehicle (col. 25, lines 33-52; The perception component 910, the planning component 912, the tracking component 914, and/or the radar component 916 may include one or more machine-learned (ML) models and/or other computer-executable instructions such as those described herein. In general, the perception component 910 may determine what is in the environment surrounding the vehicle 904 and the planning component 912 may determine how to operate the vehicle 904 according to information received from the perception component 910. For example, the planning component 912 may determine trajectory based at least in part on the perception data and/or other information such as, for example, one or more maps, localization information (e.g., where the vehicle 904 is in the environment relative to a map and/or features detected by the perception component 910), and/or the like. The trajectory may comprise instructions for controller(s) of the vehicle 904 to actuate drive components of the vehicle 904 to effectuate a steering angle and/or steering rate, which may result in a vehicle position, vehicle velocity, and/or vehicle acceleration.).
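Claims 23/25 attach four parameter groups to each detection: position, shape, orientation, and speed. A small, purely illustrative container for such outputs; the names are from neither Blaes nor the application:

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    """One detection with the four bounding-box parameter groups
    recited in claims 23/25."""
    x: float          # position (m), vehicle frame
    y: float
    length: float     # shape (m)
    width: float
    heading: float    # orientation (rad)
    speed: float      # m/s; ~0 for static objects
    is_dynamic: bool

def to_control_signal(obj: DetectedObject) -> dict:
    """Illustrative serialization of one detection for a downstream
    vehicle control module (the claim 11/21-style output signal)."""
    return {"pos": (obj.x, obj.y), "size": (obj.length, obj.width),
            "heading": obj.heading, "speed": obj.speed,
            "dynamic": obj.is_dynamic}
```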
Schumann and Blaes are both considered to be analogous to the claimed invention because they are in the same field of endeavor of vehicular radar machine-learning perception systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the vehicle system and method of Schumann by including the annotated bounding box teachings of Blaes to yield a predictable result of utilizing a well-established technique for representing the output of a machine learning module.

For applicant’s benefit, portions of the cited reference(s) have been cited to aid in the review of the rejection(s). While every attempt has been made to be thorough and consistent within the rejection, it is noted that the PRIOR ART MUST BE CONSIDERED IN ITS ENTIRETY, INCLUDING DISCLOSURES THAT TEACH AWAY FROM THE CLAIMS. See MPEP 2141.02 VI.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

US 20220404490 discloses a method, apparatus and computer program product to generate a model of one or more objects relative to a vehicle. In the context of a method, radar information is received in the form of in-phase quadrature (IQ) data and the IQ data is converted to one or more first range-doppler maps. The method further includes evaluating the one or more first range-doppler maps with a machine learning model to generate the model that captures the detection of the one or more objects relative to the vehicle.

US 11195038 discloses a device for extracting dynamic information comprising a convolutional neural network, wherein the device is configured to receive a sequence of data blocks acquired over time, each data block being a multi-dimensional representation of a scene. The convolutional neural network is configured to receive the sequence as input and to output dynamic information on the scene in response, wherein the convolutional neural network comprises a plurality of modules configured to carry out a specific processing task for extracting the dynamic information.

US 20210181758 discloses methods for tracking a current and/or previous position, velocity, acceleration, and/or heading of an object using sensor data to determine whether to associate a current object detection generated from recently received (e.g., current) sensor data with a previous object detection generated from earlier sensor data. The technique includes using multiple types of sensor data to detect objects. An ML model is trained to receive outputs associated with different sensor types and/or a track associated with an object and determine a data structure comprising a region of interest, object classification, and/or a pose associated with the object.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN BS ABRAHAM, whose telephone number is (571) 272-4145. The examiner can normally be reached Monday - Friday, 9:00 am - 5:00 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jack Keith, can be reached at (571) 272-6878. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JBSA/ Examiner, Art Unit 3646
/PETER M POON/ Supervisory Patent Examiner, Art Unit 3643

Prosecution Timeline

Dec 22, 2023: Application Filed
Oct 31, 2025: Non-Final Rejection — §102, §103
Jan 07, 2026: Interview Requested
Jan 13, 2026: Response Filed
Jan 14, 2026: Examiner Interview Summary
Apr 02, 2026: Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12584991: UWB-BASED IN-VEHICLE 3D LOCALIZATION OF MOBILE DEVICES
Granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on the single most recent grant.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 71%
With Interview: 99% (+40.0%)
Median Time to Grant: 2y 4m
PTA Risk: Moderate
Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
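The headline figures are recoverable from the raw counts: 5 granted of 7 resolved is about 71.4%, shown as 71%, and applying the +40% interview lift multiplicatively gives roughly 100%, consistent with a display capped at 99%. A sketch of that assumed arithmetic:

```python
granted, resolved = 5, 7
allow_rate = granted / resolved            # ~0.714, displayed as 71%
INTERVIEW_LIFT = 0.40                      # +40.0% from the examiner stats
# Assumption: the lift multiplies the base rate and the UI caps at 99%.
with_interview = min(allow_rate * (1 + INTERVIEW_LIFT), 0.99)
print(f"{allow_rate:.0%} base, {with_interview:.0%} with interview")
# -> 71% base, 99% with interview
```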
