Prosecution Insights
Last updated: April 19, 2026
Application No. 18/272,773

RADAR PERCEPTION

Non-Final OA §103
Filed: Jul 17, 2023
Examiner: WOLFORD, NAOMI M
Art Unit: 3648
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Five AI Limited
OA Round: 2 (Non-Final)

Grant Probability: 54% (Moderate)
OA Rounds: 2-3
To Grant: 2y 11m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 54% (126 granted / 232 resolved; +2.3% vs TC avg)
Interview Lift: +40.9% allowance-rate lift on resolved cases with an interview
Typical Timeline: 2y 11m average prosecution; 27 applications currently pending
Career History: 259 total applications across all art units

Statute-Specific Performance

§101: 1.5% (-38.5% vs TC avg)
§103: 56.0% (+16.0% vs TC avg)
§102: 20.1% (-19.9% vs TC avg)
§112: 21.2% (-18.8% vs TC avg)
Tech Center averages are estimates; based on career data from 232 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

The pending application 18/272,773, filed on 17 July 2023, is a national stage application filed under 35 U.S.C. 371 of PCT/EP2022/051036, filed on 18 January 2022, and claims priority from foreign application GB2100683.8, filed on 19 January 2021 in the United Kingdom of Great Britain and Northern Ireland.

Response to Amendment

Applicant's amendment filed on 15 DEC 2025 has been entered. Claims 1, 6-7, 10-11, 13, 15, 21 and 23 have been amended. Claims 4-5, 14, 16, 22, and 24 have been cancelled. Claims 1-3, 6-13, 15, 17-21 and 23 are still pending in this application, with claims 1, 10 and 23 being independent. Applicant's amendments to the claims have overcome the objection(s) raised in the previous office action dated 13 AUG 2025. Applicant's amendments to the claims have also overcome the rejections under 35 U.S.C. 112(b) raised in that office action.

Response to Arguments

Applicant's arguments filed 15 DEC 2025 have been fully considered. Applicant's arguments with respect to claim(s) 1, 10, and 23 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Regarding the examiner's rejection of claim 1 under 35 U.S.C. 103 as unpatentable over Liu et al. (WO 2020/022110 A1), the applicant argues that the cited reference fails to disclose all of the features of the claimed invention, specifically "wherein the ML perception component comprises a bounding box detector or other object detector, the extracted information comprising object position, orientation and/or size information for at least one detected object." Applicant argues that Liu's deep learning model does not include an ML perception component comprising "a bounding box detector or other object detector." (Applicant's remarks p. 9, second paragraph.) Although applicant's argument is moot, examiner notes that the ML perception component of applicant's claim lacks a structural or functional definition. Under broadest reasonable interpretation, the entirety of the object discrimination device 1 in figure 1 of Liu could be considered the "ML perception component" because the object discrimination device 1 includes the deep learning model. The object discrimination device also comprises object detection unit 21. Therefore, under this interpretation, Liu teaches the ML perception component comprising a bounding box detector or other object detector. However, in light of applicant's arguments, the new ground of rejection is made, citing a different embodiment of Liu, in order to clarify the interpretation of the claims applied to the prior art and to move prosecution forward.

Applicant also argues that "Liu's object detection is performed by the object detector 21 (not the deep learning model), occurs prior to the generation of the radar detection images, and occurs prior to the application of Liu's deep learning model to those radar detection images." (Applicant's remarks p. 9, second paragraph.) Although applicant's argument is moot, examiner notes that, as discussed above, under broadest reasonable interpretation Liu teaches that the ML perception component, considered to be the object discrimination device 1 in figure 1, comprises the object detector (object detection unit 21) and the image generation section 32. The object detection unit 21 and the image generation section 32 are functional blocks of the object discrimination device 1, considered to be the ML perception component. The image generating unit 32 generates the images used as input to the deep learning model 23, another component of the object discrimination device 1. The claim also does not require that the object detection take place after generating the image. Further, claim 1 does not require that the ML perception component or the object detector are implemented as a deep learning model. However, in light of applicant's arguments, the new ground of rejection is made, citing a different embodiment of Liu, in order to clarify the interpretation of the claims applied to the prior art and to move prosecution forward.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-3 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (WO 2020/022110 A1, cited by applicant in IDS dated 17 JUL 2023, previously relied upon by the examiner).

Regarding claim 1 (Currently Amended), Liu et al. discloses: [Note: what is not explicitly taught by Liu et al. has been struck-through]

A computer-implemented method (Liu et al. "The storage unit 13 stores radar data input from the radar device 2, programs executed by the processor constituting the control unit 12, and the like." - ¶ [0040]) of perceiving structure in a radar point cloud, the method comprising: generating a discretised image representation of the radar point cloud (Liu et al. "The image generating unit 43 generates a radar detection image of the entire observation area based on radar data of the entire observation area." - ¶ [0081]) having: (i) an occupancy channel (Liu et al. any of the channels of the pixels in the radar detection image, ¶ [0081]) indicating whether or not each pixel of the discretised image representation corresponds to a point in the radar point cloud and: (ii) a Doppler channel containing, for each occupied pixel, a Doppler velocity of the corresponding point in the radar point cloud (Liu et al. "Next, the image generating unit 32 sets the pixel value (values of each RGB channel) of the pixel at the position corresponding to the cell Cj based on the reflection intensity, Doppler velocity, and range of the cell Cj (ST206)." - ¶ [0089]), or (iii) a radar cross section (RCS) channel containing, for each occupied pixel, an RCS value of the corresponding point in the radar point cloud for use by a machine learning (ML) perception component (Examiner notes that items (ii) and (iii) are alternatives such that only one of (ii) or (iii) is required); inputting the discretised image representation to the machine learning (ML) perception component (Liu et al. "The object detection and discrimination unit 42 inputs the radar detection image of the entire observation area generated by the image generation unit 43 into a trained deep learning model…" - ¶ [0082]), which has been trained to extract information about structure exhibited in the radar point cloud from (i) the occupancy channel and: (ii) the Doppler channel, or (iii) the RCS channel (Liu et al. "Next, in the object detection and discrimination unit 42, the radar detection image of the entire observation area generated by the image generation unit 43 is input into the trained deep learning model, object detection and object discrimination are performed in the deep learning model, and the object discrimination result output from the deep learning model is obtained (ST112)." - ¶ [0086]; "Next, the object discrimination result and position information for each detected object are output (ST107)." - ¶ [0087]); and wherein the ML perception component comprises a bounding box detector or other object detector (Liu et al. "in this embodiment, in addition to object discrimination, object detection to detect object regions is also performed using a deep learning model." - ¶ [0079]), the extracted information comprising object position, orientation and/or size information for at least one detected object (Liu et al. "Then, the reflection intensity, Doppler velocity, and range of the selected cell Cj are obtained from the radar data of the entire observation area (ST205)." - ¶ [0088]).

Additionally, although Liu et al. does not explicitly disclose an occupancy channel indicating whether or not each pixel of the discretised image representation corresponds to a point in the radar point cloud, Liu et al. does disclose that the control unit converts the radar data into an image and "generates a radar detection image based on the radar data, storing information on reflection intensity, speed, and distance corresponding to the position of each pixel in multiple channels for each pixel." (Liu et al. ¶ [0009]). "Next, the image generating unit 32 sets the pixel value (values of each RGB channel) of the pixel at the position corresponding to the cell Cj based on the reflection intensity, Doppler velocity, and range of the cell Cj (ST206)." (Liu et al. ¶ [0089]). Radar points must exist in order for the radar data to be stored in the multiple channels of the pixels. Therefore, any channel of a given pixel that receives and stores information from the radar data can indicate that the pixel corresponds to a point in the radar point cloud and be considered an occupancy channel.
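The claim 1 dispute turns on how a radar point cloud becomes a multi-channel image. As a concrete illustration only (this is not code from the application or from Liu; every name is hypothetical), a minimal sketch of rasterising a 2D point cloud into an occupancy channel plus a Doppler channel might look like this:

```python
# Illustrative sketch, assuming a 2D point cloud centred on the sensor.
import numpy as np

def rasterise(points_xy, doppler, grid_size=128, cell=0.5):
    """points_xy: (N, 2) metres; doppler: (N,) m/s. Returns a (2, H, W) image."""
    img = np.zeros((2, grid_size, grid_size), dtype=np.float32)
    # Map metric coordinates to pixel indices around the sensor origin.
    ij = np.floor(points_xy / cell).astype(int) + grid_size // 2
    valid = ((ij >= 0) & (ij < grid_size)).all(axis=1)
    for (i, j), v in zip(ij[valid], doppler[valid]):
        img[0, i, j] = 1.0   # occupancy channel: this pixel holds a point
        img[1, i, j] = v     # Doppler channel: velocity of that point
    return img
```

An RCS channel would be populated the same way from per-point RCS values, and the resulting array is the kind of input a CNN detector such as the Faster R-CNN mentioned at Liu ¶ [0082] would consume.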
Regarding claim 2 (Original), the first embodiment of Liu et al. discloses: The method of claim 1, wherein the ML perception component has a neural network architecture (Liu et al. "Faster R-CNN (regions with convolutional neural network) is [a] suitable deep [learning model] with this search function." - ¶ [0082]).

Regarding claim 3 (Original), the first embodiment of Liu et al. discloses: The method of claim 2, wherein the ML perception component has a convolutional neural network (CNN) architecture (Liu et al. "Faster R-CNN (regions with convolutional neural network) is [a] suitable deep [learning model] with this search function." - ¶ [0082]).

Regarding claim 6 (Currently Amended), Liu discloses: The method of claim 1, wherein the radar point cloud is an accumulated radar point cloud comprising points accumulated over multiple radar sweeps (Liu et al. "The data synthesis unit 72 synthesizes (integrates) the radar data of the object area at a plurality of times acquired by the area data extraction unit 31, and generates synthesized radar data of the object area." - ¶ [0125]; "Specifically, for example, when radar data at four different times is synthesized, it is determined that the timing for output is when the frame order is a multiple of four." - ¶ [0138]).

Claim(s) 7-8 remain rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (WO 2020/022110 A1, cited by applicant in IDS dated 17 JUL 2023, previously relied upon by the examiner) as applied to claim 6 above, and further in view of Fontijne et al. (WO 2020/113160 A1, cited by applicant in IDS dated 17 JUL 2023, previously relied upon by the examiner).

Regarding claim 7 (Currently Amended), Liu et al. discloses: [Note: what is not explicitly taught by Liu et al. has been struck-through]

The method of claim 1, wherein the points of the radar point cloud have been captured by a moving radar system (Liu et al. "it may be mounted on a vehicle and the discrimination results of surrounding objects may be used to control collision avoidance." - ¶ [0035]), wherein ego motion of the radar system during the multiple radar sweeps is determined and used to accumulate the points in a common static frame for generating the discretised image representation (Liu et al. "The data synthesis unit 72 synthesizes (integrates) the radar data of the object area at a plurality of times acquired by the area data extraction unit 31, and generates synthesized radar data of the object area." - ¶ [0125]; "Specifically, for example, when radar data at four different times is synthesized, it is determined that the timing for output is when the frame order is a multiple of four." - ¶ [0138]).

Fontijne et al. discloses: wherein ego motion of the radar system during the multiple radar sweeps is determined (Fontijne et al. "Next, the ego motion between frames 902 and 904 is obtained." - ¶ [0075]).

It would have been obvious to someone with ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the features as disclosed by Fontijne et al. into the invention of Liu et al. to yield the invention of claim 7 above. Both Liu et al. and Fontijne et al. are considered analogous arts to the claimed invention as they both disclose radar systems for vehicles that utilize machine learning for object detection. Liu et al. discloses the method of claim 1, wherein the points of the radar point cloud have been captured by a moving radar system (Liu et al. ¶ [0035]). However, Liu et al. fails to explicitly disclose wherein ego motion of the radar system during the multiple radar sweeps is determined and used to accumulate the points in a common static frame for generating the discretised image representation. This feature is disclosed by Fontijne et al., where the latent-space ego-motion compensation technique transforms the radar images received at different times to world coordinates using the second input frame as the world frame (Fontijne et al. latent-space ego-motion compensation technique, Fig. 9; ¶ [0074]-[0078]). The combination of Liu et al. and Fontijne et al. would be obvious with a reasonable expectation of success to improve performance for object classification and detection (Fontijne et al. ¶ [0072]).

Regarding claim 8 (Previously Presented), Liu et al. discloses: [Note: what is not explicitly taught by Liu et al. has been struck-through]

The method of claim 6, wherein the discretised image representation has (ii) the Doppler channel, and the points of the radar point cloud have been captured by a moving radar system (Liu et al. "it may be mounted on a vehicle and the discrimination results of surrounding objects may be used to control collision avoidance." - ¶ [0035]).

Fontijne et al. discloses: wherein the discretised image representation has (ii) the Doppler channel, and the points of the radar point cloud have been captured by a moving radar system (Fontijne et al. radar-camera sensor module 120 is mounted on vehicle 100, Fig. 1), wherein ego motion of the radar system during the multiple radar sweeps is determined (Fontijne et al. "Next, the ego motion between frames 902 and 904 is obtained." - ¶ [0075]), and wherein the Doppler velocities (Fontijne et al. "More specifically, an RNN has the ability to look at the position of the objects over multiple time steps (versus only at a given point in time). Based on the known position over a time period, computing the velocity is just a matter of calculating how fast that position has moved." - ¶ [0060]) are ego motion-compensated Doppler velocities determined by compensating for the determined ego motion (Fontijne et al. latent-space ego-motion compensation technique, Fig. 9; ¶ [0074]-[0078]).

It would have been obvious to someone with ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the features as disclosed by Fontijne et al. into the invention of Liu et al. to yield the invention of claim 8 above. Both Liu et al. and Fontijne et al. are considered analogous arts to the claimed invention as they both disclose radar systems for vehicles that utilize machine learning for object detection. Liu et al. discloses the method of claim 6, wherein the points of the radar point cloud have been captured by a moving radar system (Liu et al. ¶ [0035]). However, Liu et al. fails to explicitly disclose wherein ego motion of the radar system during the multiple radar sweeps is determined, and wherein the Doppler velocities are ego motion-compensated Doppler velocities determined by compensating for the determined ego motion. This feature is disclosed by Fontijne et al., where the latent-space ego-motion compensation technique transforms the radar images received at different times to world coordinates using the second input frame as the world frame (Fontijne et al. latent-space ego-motion compensation technique, Fig. 9; ¶ [0074]-[0078]). The combination of Liu et al. and Fontijne et al. would be obvious with a reasonable expectation of success to improve performance for object classification and detection (Fontijne et al. ¶ [0072]).
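Claims 7 and 8 concern ego motion: accumulating sweeps in a common static frame, and compensating measured Doppler velocities for the sensor's own motion. A minimal sketch, assuming 2D rigid ego poses (e.g. from odometry) and a range-rate sign convention; none of this code is from Liu or Fontijne, and all names are hypothetical:

```python
import numpy as np

def accumulate_in_static_frame(sweeps, ego_poses):
    """sweeps: list of (N_k, 2) point arrays, each in its sweep's sensor frame.
    ego_poses: list of (R, t) world-from-sensor rotations (2, 2) and
    translations (2,), one per sweep."""
    world_points = [pts @ R.T + t for pts, (R, t) in zip(sweeps, ego_poses)]
    return np.concatenate(world_points)

def compensate_doppler(points_xy, doppler, ego_velocity_xy):
    """Remove the ego-motion component from measured Doppler (range-rate).
    For a static target, range-rate = -(line of sight) . v_ego, so adding
    that projection back yields ~0 for stationary world points."""
    rng = np.linalg.norm(points_xy, axis=1, keepdims=True)
    los = points_xy / np.maximum(rng, 1e-6)        # unit rays, sensor frame
    return doppler + los @ np.asarray(ego_velocity_xy)
```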
Claim(s) 9 remains rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (WO 2020/022110 A1, cited by applicant in IDS dated 17 JUL 2023, previously relied upon by the examiner) in view of Fontijne et al. (WO 2020/113160 A1, cited by applicant in IDS dated 17 JUL 2023, previously relied upon by the examiner) as applied to claim 7 above, and further in view of Mercep et al. (US 2018/0314921 A1, previously relied upon by the examiner).

Regarding claim 9 (Previously Presented), Liu et al. as modified above discloses: [Note: what is not explicitly taught by Liu et al. has been struck-through]

The method of claim 7.

Mercep et al. discloses: wherein the ego motion is determined via odometry (Mercep et al. "The measurement integration system 310 can include an ego motion unit 313 to compensate for movement of at least one sensor capturing the raw measurement data 301, for example, due to the vehicle driving or moving in the environment. The ego motion unit 313 can estimate motion of the sensor capturing the raw measurement data 301, for example, by utilizing tracking functionality to analyze vehicle motion information, such as global positioning system (GPS) data, inertial measurements, vehicle odometer data, video images, or the like." - ¶ [0038]).

It would have been obvious to someone with ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the features as disclosed by Mercep et al. into the invention of Liu et al. as modified above to yield the invention of claim 9. Liu et al., Fontijne et al. and Mercep et al. are considered analogous arts to the claimed invention as they disclose radar systems for vehicles that utilize machine learning for object detection. Liu et al. discloses the method of claim 7. However, Liu et al. fails to explicitly disclose wherein the ego motion is determined via odometry. Examiner notes that although Fontijne et al. does not explicitly disclose that the ego motion is determined via odometry, Fontijne et al. does disclose that the change in position of the radar sensor that is mounted on a vehicle is obtained via a sensor (Fontijne et al. "The ego motion between frames 902 and 904 is the change in the position of the radar sensor. This can be obtained in various ways, such as GPS or other sensors, or the neural network can estimate the motion, including rotation (i.e., a change in orientation of the vehicle 100)." - ¶ [0075]). Further, odometers are well known for measuring the change in position, or distance traveled, by a vehicle. The use of odometry is explicitly disclosed by Mercep et al., where "The ego motion unit 313 can estimate motion of the sensor capturing the raw measurement data 301, for example, by utilizing tracking functionality to analyze vehicle motion information, such as global positioning system (GPS) data, inertial measurements, vehicle odometer data, video images, or the like." (Mercep et al. ¶ [0038]). The combination of Liu et al., Fontijne et al. and Mercep et al. would be obvious with a reasonable expectation of success to improve performance for object classification and detection (Fontijne et al. ¶ [0072]) and spatially align raw measurement data to world coordinates (Mercep et al. ¶ [0049]).
Claim(s) 10-11, 13, 15 and 17-19 remain rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (WO 2020/022110 A1, cited by applicant in IDS dated 17 JUL 2023, previously relied upon by the examiner) in view of Cohen et al. (US 2019/0391250 A1, previously relied upon by the examiner) and Fontijne et al. (WO 2020/113160 A1, cited by applicant in IDS dated 17 JUL 2023, previously relied upon by the examiner).

Regarding claim 10 (Currently Amended), Liu et al. discloses: [Note: what is not explicitly taught by Liu et al. has been struck-through]

A computer system (Liu et al. object discrimination device 1 includes storage unit 13 and control unit 12, Fig. 8) for perceiving structure in a radar point cloud, the computer system comprising: at least one memory configured to store computer-readable instructions (Liu et al. "The storage unit 13 stores radar data input from the radar device 2, programs executed by the processor constituting the control unit 12, and the like." - ¶ [0040]); and at least one processor coupled to the at least one memory and configured to execute the computer-readable instructions (Liu et al. "The control unit 12 is configured by a processor, and each unit of the control unit 12 is realized by the processor executing a program stored in the storage unit 13." - ¶ [0041]), which upon execution cause the at least one processor to implement operations comprising: generating a discretised image representation of the radar point cloud (Liu et al. "The image generating unit 43 generates a radar detection image of the entire observation area based on radar data of the entire observation area." - ¶ [0081]) having: (i) an occupancy channel (Liu et al. any of the channels of the pixels in the radar detection image, ¶ [0081]) indicating whether or not each pixel of the discretised image representation corresponds to a point in the radar point cloud and: (ii) a Doppler channel containing, for each occupied pixel, a Doppler velocity of the corresponding point in the radar point cloud (Liu et al. "Next, the image generating unit 32 sets the pixel value (values of each RGB channel) of the pixel at the position corresponding to the cell Cj based on the reflection intensity, Doppler velocity, and range of the cell Cj (ST206)." - ¶ [0089]), or (iii) a radar cross section (RCS) channel containing, for each occupied pixel, an RCS value of the corresponding point in the radar point cloud for use by a machine learning (ML) perception component (Examiner notes that items (ii) and (iii) are alternatives such that only one of (ii) or (iii) is required); inputting the discretised image representation to the machine learning (ML) perception component (Liu et al. "The object detection and discrimination unit 42 inputs the radar detection image of the entire observation area generated by the image generation unit 43 into a trained deep learning model…" - ¶ [0082]), which has been trained to extract information about structure exhibited in the radar point cloud from (i) the occupancy channel and: (ii) the Doppler channel, or (iii) the RCS channel (Liu et al. "Next, in the object detection and discrimination unit 42, the radar detection image of the entire observation area generated by the image generation unit 43 is input into the trained deep learning model, object detection and object discrimination are performed in the deep learning model, and the object discrimination result output from the deep learning model is obtained (ST112)." - ¶ [0086]; "Next, the object discrimination result and position information for each detected object are output (ST107)." - ¶ [0087]); and wherein the ML perception component comprises a bounding box detector or other object detector (Liu et al. "in this embodiment, in addition to object discrimination, object detection to detect object regions is also performed using a deep learning model." - ¶ [0079]), the extracted information comprising object position, orientation, and/or size information for at least one detected object (Liu et al. "Then, the reflection intensity, Doppler velocity, and range of the selected cell Cj are obtained from the radar data of the entire observation area (ST205)." - ¶ [0088]); and

Additionally, although Liu et al. does not explicitly disclose an occupancy channel indicating whether or not each pixel of the discretised image representation corresponds to a point in the radar point cloud, Liu et al. does disclose that the control unit converts the radar data into an image and "generates a radar detection image based on the radar data, storing information on reflection intensity, speed, and distance corresponding to the position of each pixel in multiple channels for each pixel." (Liu et al. ¶ [0009]). "Next, the image generating unit 32 sets the pixel value (values of each RGB channel) of the pixel at the position corresponding to the cell Cj based on the reflection intensity, Doppler velocity, and range of the cell Cj (ST206)." (Liu et al. ¶ [0089]). Radar points must exist in order for the radar data to be stored in the multiple channels of the pixels. Therefore, any channel of a given pixel that receives and stores information from the radar data can indicate that the pixel corresponds to a point in the radar point cloud and be considered an occupancy channel.

Cohen et al. discloses: (iii) a radar cross section (RCS) channel containing, for each occupied pixel, an RCS value (Cohen et al. "In some example implementations, the first radar data 114 and/or the second radar data 116 may include position information indicative of a location of objects in the environment, e.g., a range and azimuth relative to the vehicle 106 or a position in a local or global coordinate system. The first sensor data 114 and/or the second sensor data 116 may also include signal strength information… In some instances, the signal strength may be a radar cross-section (RCS) measurement." - ¶ [0021]) of the corresponding point in the radar point cloud for use by the ML perception component (Cohen et al. "In some implementations, the clustering component may utilize algorithmic processing, e.g., DBSCAN, computer learning processing, e.g., K-means unsupervised learning, and/or additional clustering techniques" - ¶ [0049]); wherein the radar point cloud is transformed for generating a discretised image representation of the transformed radar point cloud by: applying clustering to the radar point cloud (Cohen et al. "At operation 118, the process 100 can determine one or more clusters from the first radar data 114." - ¶ [0022]), and thereby identifying at least one moving object cluster within the radar point cloud (Cohen et al. "In the visualization 122, the positions of the detected points 124 may generally correspond to a position of one or more detected objects." - ¶ [0022]), the points of the radar point cloud being time-stamped (Cohen et al. "For example, the timestamp may be a single parameter of a sensor measurement used to cluster the one or more points…" - ¶ [0034]), having been captured over a non-zero accumulation window (Cohen et al. "However, because the first radar sensor 108 and the second radar sensor 110 may have different pulse intervals and/or different scanning intervals, the scans may not be exactly at the same time. In some examples, scans occurring within about 100 milliseconds may be considered to be proximate in time or substantially simultaneous." - ¶ [0026]), determining a motion model for the moving object cluster, by fitting one or more parameters of the motion model to the time-stamped points of that cluster (Cohen et al. "As described herein, the updated point cluster 136 may be useful to the autonomous vehicle 106 to identify objects, predict actions the objects may take, and/or maneuver in the environment relative to the objects, among other things." - ¶ [0030]).

Fontijne et al. discloses: using the motion model to transform the time-stamped points (Fontijne et al. "Each camera and radar frame may be timestamped." - ¶ [0048]) of the moving object cluster to a common reference time (Fontijne et al. latent-space ego-motion compensation technique, Fig. 9; ¶ [0074]-[0078]).

It would have been obvious to someone with ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the features as disclosed by Cohen et al. and Fontijne et al. into the invention of Liu et al. to yield the invention of claim 10 above. Liu et al., Cohen et al. and Fontijne et al. are considered analogous arts to the claimed invention as they disclose vehicle radar systems for object detection. Liu et al. discloses the limitations of claim 10 outlined above. However, Liu et al. fails to explicitly disclose clustering time-stamped radar points, determining motion models for the radar point clusters and transforming time-stamped points to a common reference time. These features are disclosed by Cohen et al. and Fontijne et al., where Cohen et al. discloses accumulating and clustering time-stamped radar points, and updating the point clusters in order "to identify objects, predict actions the objects may take, and/or maneuver in the environment relative to the objects, among other things." (Cohen et al. ¶ [0030]), and Fontijne et al. discloses that the latent-space ego-motion compensation technique transforms the radar images received at different times to world coordinates using the second input frame as the world frame (Fontijne et al. latent-space ego-motion compensation technique, Fig. 9; ¶ [0074]-[0078]). The combination of Liu et al., Cohen et al. and Fontijne et al. would be obvious with a reasonable expectation of success to improve performance for object classification and detection (Fontijne et al. ¶ [0072]) and more efficiently and accurately detect and characterize objects (Cohen et al. ¶ [0015]).
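Claim 10 adds fitting a motion model to the time-stamped points of a moving-object cluster and transforming them to a common reference time. A hedged sketch under a constant-velocity assumption (the claim does not fix the model, and these names are hypothetical, not from the cited references):

```python
# Fit p(t) = p0 + v*t to one cluster's time-stamped points by least squares,
# then shift every point along the fitted velocity to a reference time.
import numpy as np

def fit_constant_velocity(points_xy, times):
    """points_xy: (N, 2); times: (N,) seconds. Returns p0 (2,) and v (2,)."""
    times = np.asarray(times, dtype=float)
    A = np.column_stack([np.ones_like(times), times])  # [1, t] design matrix
    coeffs, *_ = np.linalg.lstsq(A, points_xy, rcond=None)
    return coeffs[0], coeffs[1]

def to_reference_time(points_xy, times, v, t_ref):
    """Translate each point along the fitted velocity to time t_ref."""
    dt = t_ref - np.asarray(times, dtype=float)
    return points_xy + dt[:, None] * v
```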
Regarding claim 11 (Currently Amended), Liu et al. as modified above discloses: [Note: what is not explicitly taught by Liu et al. has been struck-through]

The computer system of claim 10.

Cohen et al. discloses: wherein the clustering identifies multiple moving object clusters (Cohen et al. "In this example, the process 100 uses multiple sensors with overlapping fields of view to determine point clusters indicative of objects in the environment of the autonomous vehicle." - ¶ [0018]).

Fontijne et al. discloses: a motion model is determined for each of the multiple moving object clusters and used to transform the respective time-stamped points of each cluster to the common reference time (Fontijne et al. Fig. 9; ¶ [0078]), wherein the transformed point cloud comprises the transformed points of the multiple object clusters (Fontijne et al. "Based on the ego motion from a previous step of the process, each frame's 902 and 904 feature map 910 and 912 is transformed to a new feature map 914 and 916, respectively, in world coordinates… In the example of FIG. 9, the second frame 904 is chosen as the world frame, and thus, there is no transformation between the feature map 912 and 916. In contrast, the first frame 902 is transformed. In the example of FIG. 9, this transformation is simply a translation on the x-axis." - ¶ [0076]-[0077]; Fig. 9).

It would have been obvious to someone with ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the features as disclosed by Cohen et al. into the invention of Liu et al. as modified above to yield the invention of claim 11. Liu et al., Cohen et al. and Fontijne et al. are considered analogous arts to the claimed invention as they disclose vehicle radar systems for object detection. Liu et al. as modified above discloses the computer system of claim 10. However, Liu et al. fails to explicitly disclose wherein the clustering identifies multiple moving object clusters, and a motion model is determined for each of the multiple moving object clusters and used to transform the respective time-stamped points of that cluster to the common reference time; wherein the transformed point cloud comprises the transformed points of the multiple object clusters. These features are disclosed by Cohen et al. and Fontijne et al., where Cohen et al. discloses that the clustering identifies multiple objects (Cohen et al. ¶ [0018]) and Fontijne et al. discloses that the motion models of the moving object clusters are transformed to a common reference time (Fontijne et al. latent-space ego-motion compensation technique, Fig. 9; ¶ [0074]-[0078]). The combination of Liu et al., Cohen et al. and Fontijne et al. would be obvious with a reasonable expectation of success to improve performance for object classification and detection (Fontijne et al. ¶ [0072]) and more efficiently and accurately detect and characterize objects (Cohen et al. ¶ [0015]).

Regarding claim 13 (Currently Amended), Liu et al. as modified above discloses: [Note: what is not explicitly taught by Liu et al. has been struck-through]

The computer system of claim 10.

Cohen et al. discloses: wherein the clustering is based on timestamps of the time-stamped points (Cohen et al. "Timestamps from the sensor scans may be used to determine whether scans are within the threshold. In some instances, the threshold time may be determined as a part of the clustering (or association) performed. For example, the timestamp may be a single parameter of a sensor measurement used to cluster the one or more points and, in at least some instances, a threshold may be associated with the timestamps." - ¶ [0034]), and wherein the clustering is density-based (Cohen et al. "In some implementations, the clustering component may utilize algorithmic processing, e.g., DBSCAN, computer learning processing, e.g., K-means unsupervised learning, and/or additional clustering techniques" - ¶ [0049]) and uses a time threshold to determine whether or not to assign a point to the moving object cluster (Cohen et al. "Timestamps from the sensor scans may be used to determine whether scans are within the threshold. In some instances, the threshold time may be determined as a part of the clustering (or association) performed. For example, the timestamp may be a single parameter of a sensor measurement used to cluster the one or more points and, in at least some instances, a threshold may be associated with the timestamps." - ¶ [0034]), wherein the point is assigned to the moving object cluster only if a difference between the timestamp of the point and the timestamp of another point assigned to the moving cluster is less than the time threshold (Cohen et al. "Clustering may be performed based on any physical parameters associated with the points (including, but not limited to, velocities, signal strength, location, nearest neighbors, a time stamp the measurement was performed, etc.), as well as corresponding threshold differences in any of the aforementioned parameters. Inclusion may also be based on a combination of these and other information." - ¶ [0027]).

It would have been obvious to someone with ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the features as disclosed by Cohen et al. into the invention of Liu et al. as modified above to yield the invention of claim 13. Liu et al., Cohen et al. and Fontijne et al. are considered analogous arts to the claimed invention as they disclose vehicle radar systems for object detection. Liu et al. as modified above discloses the computer system of claim 10. However, Liu et al. fails to explicitly disclose that the clustering is based on the timestamps, and wherein the clustering is density-based. These features are disclosed by Cohen et al., where Cohen et al. discloses "the timestamp may be a single parameter of a sensor measurement used to cluster the one or more points and, in at least some instances, a threshold may be associated with the timestamps." (Cohen et al. ¶ [0034]). The combination of Liu et al., Cohen et al. and Fontijne et al. would be obvious with a reasonable expectation of success to improve performance for object classification and detection (Fontijne et al. ¶ [0072]) and more efficiently and accurately detect and characterize objects (Cohen et al. ¶ [0015]).

Regarding claim 15 (Currently Amended), Liu et al. as modified above discloses: [Note: what is not explicitly taught by Liu et al. has been struck-through]

The computer system of claim 10.

Cohen et al. discloses: wherein the clustering is based on the Doppler velocities (Cohen et al. "For example, when the radar sensor is a Doppler-type sensor, velocity of the objects may be used to determine the point cluster 126." - ¶ [0024]), and wherein the clustering is density-based (Cohen et al. "In some implementations, the clustering component may utilize algorithmic processing, e.g., DBSCAN, computer learning processing, e.g., K-means unsupervised learning, and/or additional clustering techniques" - ¶ [0049]) and uses a velocity threshold to determine whether or not to assign a point to the moving object cluster, and wherein the point is assigned to the moving object cluster only if a difference between the Doppler velocity of the point and the Doppler velocity of another point assigned to the moving object cluster is less than the velocity threshold (Cohen et al. "Clustering may be performed based on any physical parameters associated with the points (including, but not limited to, velocities, signal strength, location, nearest neighbors, a time stamp the measurement was performed, etc.), as well as corresponding threshold differences in any of the aforementioned parameters. Inclusion may also be based on a combination of these and other information." - ¶ [0027]).

It would have been obvious to someone with ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the features as disclosed by Cohen et al. into the invention of Liu et al. as modified above to yield the invention of claim 15. Liu et al., Cohen et al. and Fontijne et al. are considered analogous arts to the claimed invention as they disclose vehicle radar systems for object detection. Liu et al. as modified above discloses the computer system of claim 10. However, Liu et al. fails to explicitly disclose that the clustering is based on the Doppler velocities, and wherein the clustering is density-based. These features are disclosed by Cohen et al., where Cohen et al. discloses "Clustering may be performed based on any physical parameters associated with the points (including, but not limited to, velocities, signal strength, location, nearest neighbors, a time stamp the measurement was performed, etc.), as well as corresponding threshold differences in any of the aforementioned parameters. Inclusion may also be based on a combination of these and other information." (Cohen et al. ¶ [0027]). The combination of Liu et al., Cohen et al. and Fontijne et al. would be obvious with a reasonable expectation of success to improve performance for object classification and detection (Fontijne et al. ¶ [0072]) and more efficiently and accurately detect and characterize objects (Cohen et al. ¶ [0015]).
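Claims 13 and 15 recite density-based clustering gated by time and Doppler-velocity thresholds, in the spirit of the DBSCAN discussion quoted from Cohen. A naive region-growing sketch, illustrative only (a production system would more likely use a library implementation; parameter values are placeholders):

```python
# A point joins a cluster only if it is near an existing member in space,
# within the time threshold, and within the Doppler-velocity threshold.
import numpy as np

def cluster(points_xy, times, doppler, eps=1.5, t_thr=0.1, v_thr=0.5):
    n = len(points_xy)
    labels = -np.ones(n, dtype=int)   # -1 means "not yet assigned"
    next_label = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        labels[seed] = next_label
        frontier = [seed]
        while frontier:               # grow the cluster from the seed
            i = frontier.pop()
            near = (
                (np.linalg.norm(points_xy - points_xy[i], axis=1) < eps)
                & (np.abs(times - times[i]) < t_thr)
                & (np.abs(doppler - doppler[i]) < v_thr)
                & (labels == -1)
            )
            for j in np.flatnonzero(near):
                labels[j] = next_label
                frontier.append(j)
        next_label += 1
    return labels
```

The thresholds implement the "only if a difference ... is less than the threshold" language relative to at least one already-assigned member of the cluster.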
Regarding claim 17 (Previously Presented), Liu et al. as modified above discloses: [Note: what is not explicitly taught by Liu et al. has been struck-through]

The computer system of claim 10.

Cohen et al. discloses: wherein Doppler velocities of the (or each) moving object cluster are used to determine the motion model for that cluster (Cohen et al. "As described herein, the updated point cluster 136 may be useful to the autonomous vehicle 106 to identify objects, predict actions the objects may take, and/or maneuver in the environment relative to the objects, among other things." - ¶ [0030]).

It would have been obvious to someone with ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the features as disclosed by Cohen et al. into the invention of Liu et al. as modified above to yield the invention of claim 17. Liu et al., Cohen et al. and Fontijne et al. are considered analogous arts to the claimed invention as they disclose vehicle radar systems for object detection. Liu et al. as modified above discloses the computer system of claim 10. However, Liu et al. fails to explicitly disclose wherein Doppler velocities of the (or each) moving object cluster are used to determine the motion model for that cluster. These features are disclosed by Cohen et al., where Cohen et al. discloses that the moving object cluster data is used to predict actions that the detected objects will take (Cohen et al. ¶ [0030]). The combination of Liu et al., Cohen et al. and Fontijne et al. would be obvious with a reasonable expectation of success to improve performance for object classification and detection (Fontijne et al. ¶ [0072]) and more efficiently and accurately detect and characterize objects (Cohen et al. ¶ [0015]).

Regarding claim 18 (Previously Presented), Liu et al. as modified above discloses: The computer system of claim 10, wherein the discretised image representation has one or more motion channels that encode, for each occupied pixel corresponding to a point of (one of) the moving object cluster(s), motion information about that point derived from the motion model of that moving object cluster (Liu et al. "Next, the image generating unit 32 sets the pixel value (values of each RGB channel) of the pixel at the position corresponding to the cell Cj based on the reflection intensity, Doppler velocity, and range of the cell Cj (ST206)." - ¶ [0089]).

Regarding claim 19 (Previously Presented), Liu et al. as modified above discloses: The method of claim 1, wherein the radar point cloud has only two spatial dimensions (Liu et al. "In this case, the area data extraction unit 31 performs coordinate conversion to convert the polar coordinate system of the radar into an XY orthogonal coordinate system." - ¶ [0049]).

Claim(s) 12 remains rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (WO 2020/022110 A1, cited by applicant in IDS dated 17 JUL 2023, previously relied upon by the examiner) in view of Cohen et al. (US 2019/0391250 A1, previously relied upon by the examiner) and Fontijne et al. (WO 2020/113160 A1, cited by applicant in IDS dated 17 JUL 2023, previously relied upon by the examiner) as applied to claim 11 above, and further in view of Moosmann et al. ("Motion Estimation from Range Images in Dynamic Outdoor Scenes," cited by applicant in IDS dated 17 JUL 2023, previously relied upon by the examiner).

Regarding claim 12 (Previously Presented), Liu et al. as modified above discloses: [Note: what is not explicitly taught by Liu et al. has been struck-through]

The computer system of claim 11.

Moosmann et al. discloses: wherein the transformed point cloud additionally comprises untransformed static object points of the radar point cloud (Moosmann et al. the transformed point clouds are indicated by the purple smearing lines showing the most prominent motion estimates over 30 frames and the untransformed static objects of the radar point cloud are represented by the black dots, Fig. 7).

It would have been obvious to someone with ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the features as disclosed by Moosmann et al. into the invention of Liu et al. as modified above to yield the invention of claim 12. Liu et al., Cohen et al., Fontijne et al. and Moosmann et al. are considered analogous arts to the claimed invention as they disclose vehicle radar systems for object detection. Liu et al. as modified above discloses the computer system of claim 11. However, Liu et al. fails to explicitly disclose that the transformed point cloud additionally comprises untransformed static object points of the radar point cloud. This feature is disclosed by Moosmann et al., where the transformed point clouds are indicated by the purple smearing lines showing the most prominent motion estimates over 30 frames and the untransformed static objects of the radar point cloud are represented by the black dots (Moosmann et al. Fig. 7). The combination of Liu et al., Cohen et al., Fontijne et al. and Moosmann et al. would be obvious with a reasonable expectation of success to improve performance for object classification and detection (Fontijne et al. ¶ [0072]), more efficiently and accurately detect and characterize objects (Cohen et al. ¶ [0015]) and "help data association and thus motion estimation in low-resolution areas" (Moosmann et al. Section VII. Conclusions and Future Work, p. 6).
Claim(s) 20 remains rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (WO 2020/022110 A1, cited by applicant in IDS dated 17 JUL 2023, previously relied upon by the examiner) as applied to claim 1 above, and further in view of Deng et al. (US 11,113,584 B2, previously relied upon by the examiner).

Regarding claim 20 (Previously Presented), Liu et al. discloses: [Note: what is not explicitly taught by Liu et al. has been struck-through]

The method of claim 1. Although Liu et al. does not explicitly disclose that the radar point cloud has three spatial dimensions, Liu et al. does disclose that the radar point cloud has two spatial dimensions (Liu et al. "In this case, the area data extraction unit 31 performs coordinate conversion to convert the polar coordinate system of the radar into an XY orthogonal coordinate system." - ¶ [0049]).

Deng et al. discloses: wherein the radar point cloud has three spatial dimensions, and the discretised image representation additionally includes a height channel (Deng et al. "Each point contains location information X, Y, Z, location of a point, reflection intensity of the point as well as a Doppler velocity between the point and the RADAR device." - Col. 24, line 65 - Col. 25, line 1; "After processing by the one or more processors/computers identified above utilizing DNN 816, output 820 includes the following parameters for detected objects: object classification (e.g., car, pedestrian, etc.); 3D position of the object (e.g., X, Y, Z coordinates of the object); 3D dimensions of the object (e.g., length, width, height of the object); the direction of the object (e.g., heading); and the velocity of the object." - Col. 25, lines 11-18).

It would have been obvious to someone with ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the features as disclosed by Deng et al. into the invention of Liu et al. to yield the invention of claim 20 above. Both Liu et al. and Deng et al. are considered analogous arts to the claimed invention as they both disclose vehicle radar systems for object detection. Liu et al. discloses the method of claim 1. However, Liu et al. fails to explicitly disclose wherein the radar point cloud has three spatial dimensions, and the discretised image representation additionally includes a height channel. This feature is disclosed by Deng et al., where each of the radar points contains 3D location data such that the 3D dimensions of detected objects can be determined (Deng et al. Col. 24, line 65 - Col. 25, line 1; Col. 25, lines 11-18). The combination of Liu et al. and Deng et al. would be obvious with a reasonable expectation of success to achieve 4D object detection (Deng et al. Col. 24, lines 11-17) in order to improve object detection (Deng et al. Col. 1, line 66 - Col. 2, line 2).
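For claim 20, a 3D point cloud adds a height channel to the rasterisation. Extending the earlier rasterisation sketch (again purely illustrative, with hypothetical names; the "keep the tallest return" rule is one assumption among several reasonable choices):

```python
import numpy as np

def rasterise_3d(points_xyz, doppler, grid_size=128, cell=0.5):
    """points_xyz: (N, 3) metres; doppler: (N,) m/s. Returns (3, H, W)."""
    img = np.zeros((3, grid_size, grid_size), dtype=np.float32)
    ij = np.floor(points_xyz[:, :2] / cell).astype(int) + grid_size // 2
    valid = ((ij >= 0) & (ij < grid_size)).all(axis=1)
    for (i, j), v, z in zip(ij[valid], doppler[valid], points_xyz[valid, 2]):
        img[0, i, j] = 1.0                       # occupancy channel
        img[1, i, j] = v                         # Doppler channel
        img[2, i, j] = max(img[2, i, j], z)      # height channel (max z)
    return img
```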
Claim(s) 21 remains rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (WO 2020/022110 A1, cited by applicant in IDS dated 17 JUL 2023, previously relied upon by the examiner) as applied to claim 1 above, and further in view of Moosmann et al. ("Motion Estimation from Range Images in Dynamic Outdoor Scenes," cited by applicant in IDS dated 17 JUL 2023, previously relied upon by the examiner).

Regarding claim 21 (Currently Amended), Liu et al. discloses: [Note: what is not explicitly taught by Liu et al. has been struck-through]

The method of claim 6.

Moosmann et al. discloses: wherein the accumulated radar point cloud includes points captured from an object that exhibit smearing effects caused by motion of the object during the multiple radar sweeps, and the discretised image representation retains the smearing effects (Moosmann et al. Fig. 7).

It would have been obvious to someone with ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the features as disclosed by Moosmann et al. into the invention of Liu et al. to yield the invention of claim 21 above. Both Liu et al. and Moosmann et al. are considered analogous arts to the claimed invention as they both disclose vehicle radar systems for object detection. Liu et al. discloses the method of claim 6. However, Liu et al. fails to explicitly disclose that the accumulated radar point cloud includes points captured from an object that exhibit smearing effects caused by motion of the object during the multiple radar sweeps, and the discretised image representation retains the smearing effects. This feature is disclosed by Moosmann et al., where the accumulated point cloud is indicated by smearing lines showing the most prominent motion estimates over 30 frames (Moosmann et al. Fig. 7). The combination of Liu et al. and Moosmann et al. would be obvious with a reasonable expectation of success to "help data association and thus motion estimation in low-resolution areas" (Moosmann et al. Section VII. Conclusions and Future Work, p. 6).

Claim(s) 23 remains rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (WO 2020/022110 A1, cited by applicant in IDS dated 17 JUL 2023, previously relied upon by the examiner) in view of Moosmann et al. ("Motion Estimation from Range Images in Dynamic Outdoor Scenes," cited by applicant in IDS dated 17 JUL 2023, previously relied upon by the examiner).

Regarding claim 23 (Currently Amended), Liu et al. discloses: [Note: what is not explicitly taught by Liu et al. has been struck-through]

A non-transitory computer readable medium storing computer program instructions (Liu et al. "The storage unit 13 stores radar data input from the radar device 2, programs executed by the processor constituting the control unit 12, and the like." - ¶ [0040]), the computer program instructions configured so as, when executed on one or more processors (Liu et al. "The control unit 12 is configured by a processor, and each unit of the control unit 12 is realized by the processor executing a program stored in the storage unit 13." - ¶ [0041]), to cause the one or more processors to perform operations comprising: generating a discretised image representation of a radar point cloud (Liu et al. "The image generating unit 43 generates a radar detection image of the entire observation area based on radar data of the entire observation area." - ¶ [0081]) having (i) an occupancy channel (Liu et al. any of the channels of the pixels in the radar detection image, ¶ [0081]) indicating whether or not each pixel of the discretised image representation corresponds to a point in the radar point cloud and: (ii) a Doppler channel containing, for each occupied pixel, a Doppler velocity of the corresponding point in the radar point cloud (Liu et al. "Next, the image generating unit 32 sets the pixel value (values of each RGB channel) of the pixel at the position corresponding to the cell Cj based on the reflection intensity, Doppler velocity, and range of the cell Cj (ST206)." - ¶ [0089]), or (iii) a radar cross section (RCS) channel containing, for each occupied pixel, an RCS value of the corresponding point in the radar point cloud for use by a machine learning (ML) perception component (Examiner notes that items (ii) and (iii) are alternatives such that only one of (ii) or (iii) is required); inputting the discretised image representation to the machine learning (ML) perception component (Liu et al. "The object detection and discrimination unit 42 inputs the radar detection image of the entire observation area generated by the image generation unit 43 into a trained deep learning model…" - ¶ [0082]), which has been trained to extract information about structure exhibited in the radar point cloud from (i) the occupancy channel and: (ii) the Doppler channel, or (iii) the RCS channel (Liu et al. "Next, in the object detection and discrimination unit 42, the radar detection image of the entire observation area generated by the image generation unit 43 is input into the trained deep learning model, object detection and object discrimination are performed in the deep learning model, and the object discrimination result output from the deep learning model is obtained (ST112)." - ¶ [0086]; "Next, the object discrimination result and position information for each detected object are output (ST107)." - ¶ [0087]); and wherein the ML perception component comprises a bounding box detector or other object detector (Liu et al. "in this embodiment, in addition to object discrimination, object detection to detect object regions is also performed using a deep learning model." - ¶ [0079]), the extracted information comprising object position, orientation, and/or size information for at least one detected object (Liu et al. "Then, the reflection intensity, Doppler velocity, and range of the selected cell Cj are obtained from the radar data of the entire observation area (ST205)." - ¶ [0088]); wherein the radar point cloud is an accumulated radar point cloud comprising points accumulated over multiple radar sweeps (Liu et al. "The data synthesis unit 72 synthesizes (integrates) the radar data of the object area at a plurality of times acquired by the area data extraction unit 31, and generates synthesized radar data of the object area." - ¶ [0125]; "Specifically, for example, when radar data at four different times is synthesized, it is determined that the timing for output is when the frame order is a multiple of four." - ¶ [0138]); and

Additionally, although Liu et al. does not explicitly disclose an occupancy channel indicating whether or not each pixel of the discretised image representation corresponds to a point in the radar point cloud, Liu et al. does disclose that the control unit converts the radar data into an image and "generates a radar detection image based on the radar data, storing information on reflection intensity, speed, and distance corresponding to the position of each pixel in multiple channels for each pixel." (Liu et al. ¶ [0009]). "Next, the image generating unit 32 sets the pixel value (values of each RGB channel) of the pixel at the position corresponding to the cell Cj based on the reflection intensity, Doppler velocity, and range of the cell Cj (ST206)." (Liu et al. ¶ [0089]). Radar points must exist in order for the radar data to be stored in the multiple channels of the pixels. Therefore, any channel of a given pixel that receives and stores information from the radar data can indicate that the pixel corresponds to a point in the radar point cloud and be considered an occupancy channel.

Moosmann et al. discloses: wherein the accumulated radar point cloud includes points captured from an object that exhibit smearing effects caused by motion of the object during the multiple radar sweeps, and the discretised image representation retains the smearing effects (Moosmann et al. the accumulated point cloud indicated by smearing lines showing the most prominent motion estimates over 30 frames, Fig. 7).

It would have been obvious to someone with ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the features as disclosed by Moosmann et al. into the invention of Liu et al. to yield the invention of claim 23 above. Both Liu et al. and Moosmann et al. are considered analogous arts to the claimed invention as they both disclose vehicle radar systems for object detection. Liu et al. discloses the limitations of claim 23 outlined above. However, Liu et al. fails to explicitly disclose that the accumulated radar point cloud includes points captured from an object that exhibit smearing effects caused by motion of the object during the multiple radar sweeps, and the discretised image representation retains the smearing effects. This feature is disclosed by Moosmann et al., where the accumulated point cloud is indicated by smearing lines showing the most prominent motion estimates over 30 frames (Moosmann et al. Fig. 7). The combination of Liu et al. and Moosmann et al. would be obvious with a reasonable expectation of success to "help data association and thus motion estimation in low-resolution areas" (Moosmann et al. Section VII. Conclusions and Future Work, p. 6).
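Claims 21 and 23 recite that accumulation retains smearing from object motion, meaning no per-object compensation is applied before rasterising. The corresponding sketch is simply concatenation of raw sweeps (illustrative only; it assumes the hypothetical rasteriser shown earlier):

```python
import numpy as np

def accumulate_raw(sweeps):
    """sweeps: list of (points_xy, doppler) tuples, one per radar sweep.
    No per-object motion compensation is applied, so a target moving at
    10 m/s sampled every 0.1 s leaves returns about 1 m apart: adjacent
    cells in a 0.5 m grid, i.e. a visible smear in the rasterised image."""
    pts = np.concatenate([p for p, _ in sweeps])
    dop = np.concatenate([d for _, d in sweeps])
    return pts, dop  # feed to the rasteriser above; smear is retained
```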
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NAOMI M WOLFORD, whose telephone number is (571) 272-3929. The examiner can normally be reached Monday - Friday, 8:30 am - 4:30 pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Resha Desai, can be reached at (571) 270-7792. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

NAOMI M. WOLFORD
Examiner, Art Unit 3648

/N.M.W./
Examiner, Art Unit 3648
13 FEB 2026

/RESHA DESAI/
Supervisory Patent Examiner, Art Unit 3648

Prosecution Timeline

Jul 17, 2023
Application Filed
Aug 07, 2025
Non-Final Rejection — §103
Dec 15, 2025
Response Filed
Feb 13, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592066
OBSTACLE IDENTIFICATION METHOD, VEHICLE-MOUNTED DEVICE AND STORAGE MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12584997
STANDING WAVE RADAR, OCCUPANT DETECTION SYSTEM, AND OBJECT DETECTION METHOD
2y 5m to grant Granted Mar 24, 2026
Patent 12559623
RESIN COMPOSITION AND ELECTROMAGNETIC WAVE ABSORBER
2y 5m to grant Granted Feb 24, 2026
Patent 12523765
Driver Assistance System and Device and Method for Determining Object Status Parameter for Driver Assistance System
2y 5m to grant Granted Jan 13, 2026
Patent 12517244
APPARATUS FOR DRIVER ASSISTANCE AND METHOD OF CONTROLLING THE SAME
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 2-3
Grant Probability: 54%
With Interview: 95% (+40.9%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 232 resolved cases by this examiner. Grant probability derived from career allow rate.
