Prosecution Insights
Last updated: April 19, 2026
Application No. 18/904,437

Radar-Based Occupancy Grid Map

Non-Final OA — §101, §103
Filed
Oct 02, 2024
Examiner
SANTOS, KIRSTEN JADE M
Art Unit
3664
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Aptiv Technologies AG
OA Round
1 (Non-Final)
Grant Probability: 53% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 53% (32 granted / 60 resolved; +1.3% vs TC avg)
Interview Lift: +34.6% (strong), measured over resolved cases with interview
Avg Prosecution: 3y 1m (typical timeline)
Currently Pending: 32
Total Applications: 92 (across all art units)

Statute-Specific Performance

§101: 26.2% (-13.8% vs TC avg)
§103: 44.1% (+4.1% vs TC avg)
§102: 22.0% (-18.0% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 60 resolved cases.

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This is a non-final office action on the merits. Claims 1-17 are currently pending and are addressed below. The examiner notes that the fundamentals of the rejection are based on the broadest reasonable interpretation of the claim language. Applicant is kindly invited to consider the reference as a whole. References are to be interpreted as by one of ordinary skill in the art rather than as by a novice. See MPEP 2141. Therefore, the relevant inquiry when interpreting a reference is not what the reference expressly discloses on its face but what the reference would teach or suggest to one of ordinary skill in the art.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application EP23214726, filed on 12/03/2023.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on October 2, 2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-17 are rejected under 35 U.S.C. 101 because, although the claims fall within at least one of the statutory categories, they are directed towards an abstract idea without significantly more.
STEP 2A (Prong 1)

Claim 1: A computer-implemented method for driving assistance in a vehicle, the method comprising:
generating, based on radar point sensor data of an environment of the vehicle, a three-dimensional occupancy grid map;
generating, based on the radar point sensor data, a number of feature grid maps, wherein a respective feature dimension of each of the FGMs corresponds to a feature of the radar point sensor data;
generating, based on the 3D OGM and the number of FGMs, a refined occupancy grid; and
providing the refined OGM for usage by an assistance system of the vehicle.

The examiner submits that the foregoing "generating" limitations constitute a mental process because, under its broadest reasonable interpretation, the claim covers performance of the limitations in the human mind. The "generating" steps, in the context of the claims, encompass a person looking at the sensor data collected (obtained, received, acquired, etc.) and forming a simple judgment (determination, analysis, comparison, etc.) regarding the classification and organization of clustered data, either mentally or using pen and paper. Additionally, these steps can be reduced to mentally organizing, classifying, or labeling data, then performing a mathematical calculation by taking input values and applying convolutions. Thus, claim 1 recites at least one mental process.
STEP 2A (Prong 2)

Claim 1: A computer-implemented method for driving assistance in a vehicle, the method comprising:
generating, based on radar point sensor data of an environment of the vehicle, a three-dimensional occupancy grid map;
generating, based on the radar point sensor data, a number of feature grid maps, wherein a respective feature dimension of each of the FGMs corresponds to a feature of the radar point sensor data;
generating, based on the 3D OGM and the number of FGMs, a refined occupancy grid; and
providing the refined OGM for usage by an assistance system of the vehicle.

The examiner submits that the identified additional limitations do not integrate the previously discussed abstract ideas into practical applications. The additional limitation of "providing" is a form of insignificant extra-solution activity. The "providing" step is recited at a high level of generality (as a general means of information processing/rendering and execution of outputting, generating, or displaying the received data) and is a post-solution action, which is likewise a form of insignificant extra-solution activity. As such, the additional elements of claim 1 do not integrate the abstract idea into a practical application.

STEP 2B

Claims 1-17 do not include additional elements (considered individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for reasons similar to those discussed above. General application of an exception using a generic computer component cannot provide an inventive concept. Thus, since claim 1 (a) is directed towards abstract ideas, (b) does not recite additional elements that integrate the judicial exception into a practical application, and (c) does not recite additional elements that amount to significantly more than the judicial exception, claim 1 is directed towards non-statutory subject matter.
Regarding claims 15 and 17, please refer to the rejection of claim 1, as they are commensurate in scope, with claim 1 directed towards a method, claim 15 to a control unit, and claim 17 to a non-transitory computer storage medium. Dependent claims 2-14 and 16 do not recite any further limitations that cause the claims to be patent eligible. The limitations of the dependent claims are directed towards additional aspects of the judicial exception and/or additional elements that do not integrate the judicial exception into a practical application. As such, claims 1-17 are rejected under 35 U.S.C. 101 as being drawn to an abstract idea without significantly more, and thus are ineligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-17 are rejected under 35 U.S.C. 103 as being unpatentable over Yang Chun et al. (DE102020216196A1), hereinafter referred to as Chun, in view of Mohajerin Nima et al.
(US2020148215A1), hereinafter referred to as Nima.

Regarding claim 1, Chun discloses: a computer-implemented method for driving assistance in a vehicle, the method (see at least Chun, ¶¶ [0004]-[0006]) comprising:

generating, based on radar point sensor data of an environment of the vehicle, a three-dimensional occupancy grid map (see at least Chun, ¶¶ [0004], [0009]-[0010], [0016]-[0018], which discloses the generation of an occupancy grid map based on input sensor data acquired from (but not exclusive to) a radar sensor system, such as standard radar output and a multitude of reflection signals; the occupancy grid map renders reflection signals with each pixel assigned a measurement attribute, and the entirety of the data is utilized for object detection, classification, and boundaries of a detected object);

generating, based on the radar point sensor data, a number of feature grid maps (see at least Chun, ¶¶ [0016], [0024], which discloses using the input data to extract a number of feature grid maps obtained from input data);

wherein a respective feature dimension of each of the FGMs corresponds to a feature of the radar point sensor data (see at least Chun, ¶¶ [0027], which discloses an example wherein a respective feature dimension of each of the FGMs corresponds to a feature of the radar point sensor data; for example, physical dimensions of a feature are identified in order to calculate a resolution and cover points).

Chun is silent on, however, in the same field of endeavor, Nima teaches:

generating, based on the 3D OGM and the number of FGMs, a refined occupancy grid (see at least Nima, ¶¶ [0009]-[0012], [0022], [0030], [0083]-[0085], which discloses the generation of a corrected (refined) occupancy grid map based on the original occupancy grid map and number of extracted features);

providing the refined OGM for usage by an assistance system of the vehicle (see at least Nima, ¶¶ [0099]-[0102], [0150]-[0153], which discloses feeding back the corrected (refined) OGM
as input for performing prediction by the assistance system of the vehicle).

It would have been obvious to a person of ordinary skill in the art to modify Chun to include generating, based on the 3D OGM and the number of FGMs, a refined occupancy grid and providing the refined OGM for usage by an assistance system of the vehicle, as taught by Nima. Incorporating the teaching of Nima would allow for corrective terms to be applied to the input data that accurately account for a vehicle environment's dynamic and unstructured nature.

Regarding claim 2, Chun is silent on, however, in the same field of endeavor, Nima teaches: the method of claim 1 wherein: the refined OGM includes at least one of: a refined 3D OGM and a feature map (see at least Nima, ¶¶ [0099]-[0102], [0150]-[0153], which discloses feeding back the corrected (refined) OGM and feature map as input for performing prediction by the assistance system of the vehicle); a dimension of the feature map indicates one or more traffic infrastructure elements of the environment (see at least Nima, ¶¶ [0080]-[0081], which discloses an example of the feature map indicative of static information about the environment, such as known roads (traffic infrastructure elements), and ¶¶ [0122]-[0124], which discloses the OGM features extracted that include changing the dimensionality).

It would have been obvious to a person of ordinary skill in the art to modify Chun to include a refined 3D OGM and a feature map, and a dimension of the feature map indicating one or more traffic infrastructure elements of the environment, as taught by Nima. The examiner notes that the reference of Chun incorporates some type of modification, or corrective application, to the captured occupancy grid map; however, the process of refinement and the incorporation of the extracted feature maps are not as explicitly described as in Nima.
Incorporating the teaching of Nima would allow for corrective terms to be applied to the input data that accurately account for a vehicle environment's dynamic and unstructured nature.

Regarding claim 3, Chun discloses: the method of claim 1 wherein the number of FGMs includes one or more of: a radar cross section FGM with a dimension indicating a radar cross section of detected stationary environment elements (see at least Chun, ¶¶ [0009], [0012]); a radial velocity FGM with a dimension indicating a radial velocity for detected stationary environment elements (see at least Chun, ¶¶ [0009], [0012]); a range FGM with a dimension indicating a distance to detected stationary environment elements (see at least Chun, ¶¶ [0009], [0012]).

Regarding claim 4, Chun is silent on, however, in the same field of endeavor, Nima teaches: the method of claim 1 wherein generating the refined OGM includes: using a convolutional neural network and inputting the 3D OGM and the number of FGMs into the CNN (see at least Nima, ¶¶ [0008]-[0009], [0068], [0099]-[0102], [0150]-[0153], which discloses inputting the number of feature grid maps and original occupancy grid map into the neural network in order to generate the corrected (refined) occupancy grid map).

It would have been obvious to a person of ordinary skill in the art to modify Chun to include using a convolutional neural network and inputting the 3D OGM and the number of FGMs into the CNN as taught by Nima.
The examiner notes that the occupancy grid mapping method in Chun incorporates the use of a convolutional neural network; however, the process of refinement using corrective factors is not as explicitly disclosed. Incorporating the teachings of Nima would allow for the recursive generation of predicted occupancy grid maps by applying a classifier of the corrective term (that allows for distinguishing between static and dynamic objects in patterns and input data) and the input occupancy grid map.

Regarding claim 5, Chun is silent on, however, in the same field of endeavor, Nima teaches: the method of claim 4 further comprising, by the CNN: applying two-dimensional convolutions to x and y spatial dimensions of the 3D OGM and of the number of FGMs (see at least Nima, ¶¶ [0093], [0097], [0152], which discloses encoding a convolutional layer using a neural network in order to increase depth from the encoder that has x and y spatial dimensions and a greater depth than the input data); treating a z dimension of the 3D OGM and the feature dimension of the number of FGMs as channels (see at least Nima, ¶¶ [0079]-[0081]).

It would have been obvious to a person of ordinary skill in the art to modify Chun to include applying two-dimensional convolutions to x and y spatial dimensions of the 3D OGM and of the number of FGMs, and treating a z dimension of the 3D OGM and the feature dimension of the number of FGMs as channels, as taught by Nima. The examiner notes that the occupancy grid mapping method in Chun incorporates the use of a convolutional neural network; however, the process of refinement using corrective factors is not as explicitly disclosed.
Incorporating the teachings of Nima would allow for the recursive generation of predicted occupancy grid maps by applying a classifier of the corrective term (that allows for distinguishing between static and dynamic objects in patterns and input data) and the input occupancy grid map. A convolutional layer further enhances the accuracy of predicted occupancy grid maps where the observed (input) occupancy grid map is used to initialize its state and apply corrective factors.

Regarding claim 6, Chun is silent on, however, in the same field of endeavor, Nima teaches: the method of claim 5 further comprising repeating the two-dimensional convolutions along the z dimension (see at least Nima, ¶¶ [0093], [0097], [0152], which discloses encoding a convolutional layer using a neural network in order to increase depth from the encoder that has x, y, and z spatial dimensions).

It would have been obvious to a person of ordinary skill in the art to modify Chun to include repeating the two-dimensional convolutions along the z dimension, as taught by Nima. Incorporating the teachings of Nima would allow for the recursive generation of predicted occupancy grid maps by applying a classifier of the corrective term (that allows for distinguishing between static and dynamic objects in patterns and input data) and the input occupancy grid map. A convolutional layer further enhances the accuracy of predicted occupancy grid maps where the observed (input) occupancy grid map is used to initialize its state and apply corrective factors.
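For orientation, the convolution scheme recited in claims 5 and 6 (two-dimensional convolutions applied over the x and y spatial dimensions, with the z-layers of the 3D OGM and the feature dimensions of the FGMs treated as input channels) can be sketched in a few lines of NumPy. All shapes, the random data, and the single-filter kernel below are illustrative assumptions for exposition; the sketch is not drawn from the application or from either cited reference.

```python
import numpy as np

def conv2d_xy(channels, kernel):
    """Naive 'valid' 2D convolution over the x/y plane.

    channels: (C, X, Y) array -- C input channels
    kernel:   (C, k, k) array -- one filter spanning all channels
    returns:  (X-k+1, Y-k+1) output feature plane
    """
    C, X, Y = channels.shape
    _, k, _ = kernel.shape
    out = np.zeros((X - k + 1, Y - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(channels[:, i:i + k, j:j + k] * kernel)
    return out

# Illustrative shapes: a 3D OGM with 4 z-layers and 3 FGMs, each 16x16 in x/y.
rng = np.random.default_rng(0)
ogm_3d = rng.random((4, 16, 16))   # z-layers treated as channels (claim 5)
fgms   = rng.random((3, 16, 16))   # feature dimensions treated as channels

# Stack OGM z-layers and FGM feature planes along the channel axis, then
# convolve only over the x and y spatial dimensions.
stacked = np.concatenate([ogm_3d, fgms], axis=0)   # (7, 16, 16)
kernel = rng.random((stacked.shape[0], 3, 3))
feature_plane = conv2d_xy(stacked, kernel)          # (14, 14)

# Claim 6's "repeating the two-dimensional convolutions along the z dimension"
# can be read as applying a 2D convolution to each z-layer separately:
per_layer = np.stack([conv2d_xy(ogm_3d[z:z + 1], kernel[:1])
                      for z in range(ogm_3d.shape[0])])   # (4, 14, 14)
```

A framework convolution (e.g., a conv2d whose input-channel count equals the number of z-layers plus feature dimensions) would play the same role; the explicit loops are used only to make the channel treatment visible.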
Regarding claim 7, Chun is silent on, however, in the same field of endeavor, Nima teaches: the method of claim 5 further comprising: applying a two-dimensional convolution to the x and y dimension of the 3D OGM for any layer of the z dimension of the 3D OGM separately (see at least Nima, ¶¶ [0093], [0097], [0152], which discloses encoding a convolutional layer using a neural network in order to increase depth from the encoder that has x and y spatial dimensions and a greater depth than the input data; it is disclosed that the convolution may be implemented using separate encoder units for any layer); applying a one-dimensional convolution to the z dimension of the 3D OGM for any cell of the x and y dimensions separately (see at least Nima, ¶¶ [0093], [0097], [0152]; it is disclosed that the convolution may be implemented using separate encoder units for any layer).

It would have been obvious to a person of ordinary skill in the art to modify Chun to include applying a two-dimensional convolution to the x and y dimension of the 3D OGM for any layer of the z dimension of the 3D OGM separately, and applying a one-dimensional convolution to the z dimension of the 3D OGM for any cell of the x and y dimensions separately, as taught by Nima. Incorporating the teachings of Nima would allow for the recursive generation of predicted occupancy grid maps by applying a classifier of the corrective term (that allows for distinguishing between static and dynamic objects in patterns and input data) and the input occupancy grid map. A convolutional layer further enhances the accuracy of predicted occupancy grid maps where the observed (input) occupancy grid map is used to initialize its state and apply corrective factors.
Regarding claim 8, Chun is silent on, however, in the same field of endeavor, Nima teaches: the method of claim 7 further comprising: concatenating results of the convolutions (see at least Nima, ¶¶ [0096]-[0097], which discloses the concatenation operation of all features); maximum-reducing the z dimension of the concatenated results (see at least Nima, ¶¶ [0097], which discloses reducing the z dimension of the concatenated results); successively down sampling the x and y dimensions and successively up sampling the x and y dimensions (see at least Nima, ¶¶ [0098], which discloses down and up sampling the x and y dimensions of the output).

It would have been obvious to a person of ordinary skill in the art to modify Chun to include concatenating results of the convolutions, maximum-reducing the z dimension of the concatenated results, and successively down sampling and up sampling the x and y dimensions, as taught by Nima.
Incorporating the teachings of Nima would allow for the recursive generation of predicted occupancy grid maps by applying a classifier of the corrective term (that allows for distinguishing between static and dynamic objects in patterns and input data) and the input occupancy grid map. A convolutional layer further enhances the accuracy of predicted occupancy grid maps where the observed (input) occupancy grid map is used to initialize its state and apply corrective factors.

Regarding claim 9, Chun is silent on, however, in the same field of endeavor, Nima teaches: the method of claim 8 further comprising: repeating the up sampled results along the z dimension and concatenating the repeated up sampled results with the concatenated repeated results of the convolutions along the channels (see at least Nima, ¶¶ [0096]-[0098]).

It would have been obvious to a person of ordinary skill in the art to modify Chun to include repeating the up sampled results along the z dimension and concatenating the repeated up sampled results with the concatenated repeated results of the convolutions along the channels, as taught by Nima. Incorporating the teachings of Nima would allow for the recursive generation of predicted occupancy grid maps by applying a classifier of the corrective term (that allows for distinguishing between static and dynamic objects in patterns and input data) and the input occupancy grid map.
Regarding claim 10, Chun is silent on, however, in the same field of endeavor, Nima teaches: the method of claim 9 further comprising reducing a channel dimension to one for outputting the refined 3D OGM (see at least Nima, ¶¶ [0093]-[0095], which discloses reducing a channel dimension).

It would have been obvious to a person of ordinary skill in the art to modify Chun to include reducing a channel dimension to one for outputting the refined 3D OGM, as taught by Nima. Incorporating the teachings of Nima would allow for the recursive generation of predicted occupancy grid maps by applying a classifier of the corrective term (that allows for distinguishing between static and dynamic objects in patterns and input data) and the input occupancy grid map.

Regarding claim 11, Chun is silent on, however, in the same field of endeavor, Nima teaches: the method of claim 10, further comprising reducing the z dimension for outputting the feature map (see at least Nima, ¶¶ [0093], [0097], [0152]; it is disclosed that the convolution may be implemented using separate encoder units for any layer).

It would have been obvious to a person of ordinary skill in the art to modify Chun to include reducing the z dimension for outputting the feature map, as taught by Nima. Incorporating the teachings of Nima would allow for the recursive generation of predicted occupancy grid maps by applying a classifier of the corrective term (that allows for distinguishing between static and dynamic objects in patterns and input data) and the input occupancy grid map.
Regarding claim 12, Chun is silent on, however, in the same field of endeavor, Nima teaches: the method of claim 11 wherein reducing the z dimension to output the feature map includes: determining two cumulative maxima along the z dimension and concatenating the two cumulative maxima and results of the reduced channel dimension (see at least Nima, ¶¶ [0093]-[0095], [0102]).

It would have been obvious to a person of ordinary skill in the art to modify Chun to include determining two cumulative maxima along the z dimension and concatenating the two cumulative maxima and results of the reduced channel dimension, as taught by Nima. Incorporating the teachings of Nima would allow for the recursive generation of predicted occupancy grid maps by applying a classifier of the corrective term (that allows for distinguishing between static and dynamic objects in patterns and input data) and the input occupancy grid map.
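The tensor operations recited in claims 8 through 12 (concatenating convolution results, maximum-reducing over z, successively down- and up-sampling in x and y, repeating the up-sampled result along z, reducing the channel dimension, and taking two cumulative maxima along z) map directly onto elementary array operations. The NumPy sketch below walks through each step on toy shapes; the shapes, random data, nearest-neighbour up-sampling, and mean-over-channels reduction are assumptions for exposition, not details taken from either cited reference.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy intermediate results: two feature volumes of shape (channels, z, x, y).
a = rng.random((2, 4, 8, 8))
b = rng.random((3, 4, 8, 8))

# Claim 8: concatenate results of the convolutions along the channel axis ...
cat = np.concatenate([a, b], axis=0)             # (5, 4, 8, 8)

# ... maximum-reduce the z dimension of the concatenated results ...
reduced = cat.max(axis=1)                        # (5, 8, 8)

# ... successively down-sample then up-sample the x and y dimensions
# (stride-2 slicing down, nearest-neighbour repetition back up).
down = reduced[:, ::2, ::2]                      # (5, 4, 4)
up = down.repeat(2, axis=1).repeat(2, axis=2)    # (5, 8, 8)

# Claim 9: repeat the up-sampled result along z and concatenate it with the
# earlier concatenated volume along the channel axis.
up_z = np.broadcast_to(up[:, None], (5, 4, 8, 8))
merged = np.concatenate([cat, up_z], axis=0)     # (10, 4, 8, 8)

# Claim 10: reduce the channel dimension to one to output the refined 3D OGM
# (a mean over channels stands in here for whatever learned reduction is used).
refined_3d = merged.mean(axis=0, keepdims=True)  # (1, 4, 8, 8)

# Claims 11-12: reduce z for the feature map via two cumulative maxima along
# z (one bottom-up, one top-down), concatenated with the channel-reduced result.
cum_up = np.maximum.accumulate(refined_3d, axis=1)
cum_down = np.maximum.accumulate(refined_3d[:, ::-1], axis=1)[:, ::-1]
feature_map = np.concatenate([cum_up, cum_down, refined_3d], axis=0)  # (3, 4, 8, 8)
```

np.maximum.accumulate computes a running maximum, so applying it once bottom-up and once top-down gives the two cumulative maxima recited in claim 12.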
Regarding claim 13, Chun discloses: the method of claim 1 further comprising adaptively re-centering the 3D OGM and the number of FGMs in dependency from a current orientation of the vehicle (see at least Chun, ¶¶ [0010], which discloses representation by a two-dimensional or three-dimensional boundary box that specifies the position and orientation of the vehicle relative to the coordinates of the radar sensor system to achieve a data structure corresponding to the occupancy grid; the assigned measurement attributes can be normalized, which amounts to adaptively re-centering the 3D OGM and the number of FGMs in dependency from a current orientation of the vehicle).

Regarding claim 14, Chun discloses: the method of claim 13 wherein adaptively re-centering the 3D OGM and the number of FGMs in dependency from a current orientation of the vehicle includes: in response to determining that an offset between the current orientation of the vehicle and a reference point of the 3D OGM and the number of FGMs exceeds a given threshold, re-aligning the 3D OGM and the number of FGMs with the current orientation of the vehicle by an integer translation of the 3D OGM and the number of FGMs (see at least Chun, ¶¶ [0010]; the normalization of the assigned measurement attributes amounts to re-aligning the 3D OGM and the number of FGMs with the current orientation of the vehicle by an integer translation when the offset exceeds a given threshold).

Regarding claim 15, Chun discloses: an electronic control unit comprising (see at least Chun, ¶¶
[0004]-[0006]): memory configured to store instructions and at least one processor configured to execute the instructions (see at least Chun, ¶¶ [0004]-[0006]), wherein the instructions include:

generating, based on radar point sensor data of an environment of a vehicle, a three-dimensional occupancy grid map (see at least Chun, ¶¶ [0004], [0009]-[0010], [0016]-[0018]);

generating, based on the radar point sensor data, a number of feature grid maps (see at least Chun, ¶¶ [0016], [0024]);

wherein a respective feature dimension of each of the FGMs corresponds to a feature of the radar point sensor data (see at least Chun, ¶¶ [0027]).

Chun is silent on, however, in the same field of endeavor, Nima teaches:

generating, based on the 3D OGM and the number of FGMs, a refined occupancy grid (see at least Nima, ¶¶ [0009]-[0012], [0022], [0030], [0083]-[0085], which discloses the generation of a corrected (refined) occupancy grid map based on the original occupancy grid map and number of extracted features);

providing the refined OGM for usage by an assistance system of the vehicle (see at least Nima, ¶¶ [0099]-[0102], [0150]-[0153], which discloses feeding back the corrected (refined) OGM as input for
performing prediction by the assistance system of the vehicle).

It would have been obvious to a person of ordinary skill in the art to modify Chun to include generating, based on the 3D OGM and the number of FGMs, a refined occupancy grid and providing the refined OGM for usage by an assistance system of the vehicle, as taught by Nima. Incorporating the teaching of Nima would allow for corrective terms to be applied to the input data that accurately account for a vehicle environment's dynamic and unstructured nature.

Regarding claim 16, Chun discloses: a vehicle comprising: a radar system for collecting radar point sensor data; and the electronic control unit of claim 15, wherein the electronic control unit is communicatively coupled to the radar system (see at least Chun, ¶¶ [0004]-[0006]).

Regarding claim 17, Chun discloses: a non-transitory computer-readable medium (see at least Chun, ¶¶ [0004]-[0006]), comprising instructions including: generating, based on radar point sensor data of an environment of a vehicle, a three-dimensional occupancy grid map (see at least Chun, ¶¶ [0004], [0009]-[0010], [0016]-[0018]); generating, based on the radar point sensor data, a number of feature grid maps (FGMs) (see at least Chun, ¶¶ [0016], [0024]); wherein a respective feature dimension of each of the FGMs corresponds to a feature of the radar point sensor data (see at least Chun, ¶¶ [0027], which discloses an example of wherein a respective feature dimension of
each of the FGMs corresponds to a feature of the radar point sensor data; for example, physical dimensions of a feature are identified in order to calculate a resolution and cover points).

Chun is silent on, however, in the same field of endeavor, Nima teaches: generating, based on the 3D OGM and the number of FGMs, a refined occupancy grid (see at least Nima, ¶¶ [0009]-[0012], [0022], [0030], [0083]-[0085], which discloses the generation of a corrected (refined) occupancy grid map based on the original occupancy grid map and number of extracted features); providing the refined OGM for usage by an assistance system of the vehicle (see at least Nima, ¶¶ [0099]-[0102], [0150]-[0153], which discloses feeding back the corrected (refined) OGM as input for performing prediction by the assistance system of the vehicle).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KIRSTEN JADE M SANTOS, whose telephone number is (571) 272-7442. The examiner can normally be reached Monday, 8:00 am - 4:00 pm and 6:00-8:00 pm (with flex).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Rachid Bendidi, can be reached at (571) 272-4896. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KIRSTEN JADE M SANTOS/
Examiner, Art Unit 3664

/RACHID BENDIDI/
Supervisory Patent Examiner, Art Unit 3664

Prosecution Timeline

Oct 02, 2024
Application Filed
Feb 11, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566072: INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12552255: VEHICULAR DISPLAY HAVING RECHARGING MODULE WITH ANNEXATION INTERFACE
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12530931: DISTRIBUTED DIAGNOSTICS ARCHITECTURE FOR A VEHICLE
Granted Jan 20, 2026 (2y 5m to grant)
Patent 12522483: APPARATUS AND METHOD FOR AUTOMATICALLY DETERMINING THE MOVEMENT SPACE AND AUTONOMOUSLY OPTIMIZING THE DRIVING BEHAVIOR OF AN OPERATING AUTOMATED GUIDED VEHICLE COMPRISING LOADING IN DYNAMIC PRODUCTION AND LOGISTICS ENVIRONMENTS
Granted Jan 13, 2026 (2y 5m to grant)
Patent 12454272: METHOD FOR ESTIMATING AN ACCIDENT RISK OF AN AUTONOMOUS VEHICLE
Granted Oct 28, 2025 (2y 5m to grant)
Based on the examiner's 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 53%
With Interview: 88% (+34.6%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 60 resolved cases by this examiner. Grant probability derived from career allow rate.
