Prosecution Insights
Last updated: April 19, 2026
Application No. 18/645,559

RADAR POINT CLOUD AGGREGATION OF DYNAMIC OBJECTS WITH MINIMIZED DISPARITY

Non-Final OA — §102, §103
Filed: Apr 25, 2024
Examiner: ZHU, NOAH YI MIN
Art Unit: 3648
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: GM Global Technology Operations LLC
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 82% (49 granted / 60 resolved; +29.7% vs TC avg) — above average
Interview Lift: +16.7% for resolved cases with interview
Avg Prosecution: 3y 3m (39 currently pending)
Total Applications: 99 across all art units
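The headline figures are internally consistent with the raw career data shown above. A quick arithmetic check, assuming the displayed grant probability is simply granted/resolved rounded to the nearest percent and the interview lift is additive in percentage points (both are assumptions about how the dashboard computes its figures, not stated by the page):

```python
# Reproduce the dashboard's headline figures from the raw career data.
granted, resolved = 49, 60                  # "49 granted / 60 resolved"
allow_rate_pct = granted / resolved * 100   # 81.67%

print(round(allow_rate_pct))                # 82 — the displayed Grant Probability

# Assumption: the +16.7% interview lift is additive in percentage points.
interview_lift = 16.7
print(round(allow_rate_pct + interview_lift))   # 98 — the "With Interview" figure
```

Both rounded values match the tiles, which suggests the 82% and 98% numbers are direct derivations of the 49/60 career record rather than model outputs.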

Statute-Specific Performance

§101: 4.4% (-35.6% vs TC avg)
§103: 48.3% (+8.3% vs TC avg)
§102: 21.6% (-18.4% vs TC avg)
§112: 23.4% (-16.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 60 resolved cases

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 11-12 and 16 are objected to for the following informalities: In Claim 11, line 2, the word “determining” should be “determine.” In Claim 12, line 3, the word “and” should be “an.” In Claim 16, line 1, remove the additional comma. Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-5 and 8-12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Long (Long et al., “Full-Velocity Radar Returns by Radar-Camera Fusion,” 2021).
Regarding Claim 1, Long discloses: A method of operating a host vehicle, comprising: receiving a first detection of a first reflection point from an object during a first time frame of a radar ([pg. 2]: “radar points”; “object detection”); determining a first position and a first Doppler frequency of the first detection ([pg. 1]: “3D positions”; “Doppler effect”; [pg. 4]: “Radar provides an estimate of the target position, q … Radar also provides the signed radial speed, ṙ”); updating the first position to a first predicted position in a second time frame using the first Doppler frequency ([pg. 7]: “The point-wise velocity estimate makes it possible to compensate the motion of dynamic objects”; Eq. 19; “Then p0 is transformed to current radar coordinates by known egomotion”), wherein updating includes: determining an object-based component of the first Doppler frequency for the first detection from the first Doppler frequency by removing an effect of a velocity of the host vehicle from the first Doppler frequency ([pg. 4]: “Now Eq.(7) is actually the egomotion-corrected Doppler speed.”; “The raw Doppler speed, ṙraw, is the radial component of the relative velocity between target and sensor, ṁ – ċ”; “ċ is the known ego-velocity”); shifting the first detection from the first position to an intermediate position in the second time frame using the object-based component of the first Doppler frequency ([pg. 7]: “compensate the motion of dynamic objects”; Eq. 19; Examiner note: The detection point is shifted from first position pi to intermediate position p0 in Eq. 19.); shifting the first detection from the intermediate position to the first predicted position in the second time frame using a vehicle-based component of the first Doppler frequency ([pg. 7]: “Then p0 is transformed to current radar coordinates by known egomotion”; Examiner note: Then the detection point is shifted by a vehicle-based component.); receiving a second detection of a second reflection point from the object ([pg. 2]: “To align radar frames, in addition to compensating egomotion, we shall consider the motion of moving points in consecutive frames”); and detecting the object from the first predicted position in the second time frame and the second detection ([pg. 2]: “it is often essential to accumulate multiple prior radar frames to acquire sufficiently dense point clouds for downstream tasks, e.g., object detection.”; Fig. 6).

Regarding Claim 8, Long discloses: A system for operating a host vehicle, comprising: a processor ([pg. 5]: “compute”; [pg. 7]: “processing”) configured to: receive a first detection of a first reflection point from an object during a first time frame of a radar ([pg. 2]); determine a first position and a first Doppler frequency of the first detection ([pg. 1]; [pg. 4]); update the first position to a first predicted position in a second time frame using the first Doppler frequency ([pg. 7]), wherein updating includes: determining an object-based component of the first Doppler frequency for the first detection from the first Doppler frequency by removing an effect of a velocity of the host vehicle from the first Doppler frequency ([pg. 4]); shifting the first detection from the first position to an intermediate position in the second time frame using the object-based component of the first Doppler frequency ([pg. 7]); shifting the first detection from the intermediate position to the first predicted position in the second time frame using a vehicle-based component of the first Doppler frequency ([pg. 7]); receive a second detection of a second reflection point from the object ([pg. 2]); and detect the object from the first predicted position in the second time frame and the second detection ([pg. 2]; Fig. 6).
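The two-stage shift the examiner maps onto Long (remove the host vehicle's contribution from the raw Doppler to get an object-based radial speed, shift the point by that object-based component to an intermediate position, then apply the known ego-motion transform as the vehicle-based component) can be sketched numerically. This is a minimal illustrative sketch, assuming constant radial velocity over the frame interval; the function name, arguments, and sign conventions are illustrative and are not taken verbatim from Long's paper or the claims:

```python
import numpy as np

def compensate_point(p, doppler_speed, ego_velocity, dt, ego_transform):
    """Shift one radar detection from a past frame into the current frame.

    Illustrative two-stage shift:
    1) object-based component: recover the object's radial speed by adding
       back the ego-velocity's radial projection to the raw Doppler speed
       (raw Doppler is the radial part of target velocity minus ego velocity),
       then shift the point along the line of sight;
    2) vehicle-based component: apply the known ego-motion (rotation R,
       translation t) to land in current radar coordinates.
    """
    r = np.linalg.norm(p)
    unit = p / r                                  # line-of-sight direction
    ego_radial = float(np.dot(ego_velocity, unit))
    object_radial = doppler_speed + ego_radial    # egomotion-corrected Doppler
    intermediate = p + object_radial * dt * unit  # stage 1: intermediate position
    R, t = ego_transform
    return R @ intermediate + t                   # stage 2: predicted position

# Example: a stationary target 10 m ahead, host moving at 5 m/s toward it.
# Raw Doppler reads -5 m/s; the object-based component is therefore zero,
# and the point moves only by the ego-motion transform.
p = np.array([10.0, 0.0, 0.0])
R, t = np.eye(3), np.array([-0.5, 0.0, 0.0])      # ego advanced 0.5 m in 0.1 s
print(compensate_point(p, -5.0, np.array([5.0, 0.0, 0.0]), 0.1, (R, t)))
# → [9.5 0.  0. ]
```

The worked example matches the intuition behind the rejection: for a stationary object, all apparent Doppler is ego-induced, so the "object-based" shift vanishes and only the "vehicle-based" transform moves the point.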
Regarding Claims 2 and 9, Long discloses: the method further comprising: receiving the second detection of the second reflection point from the object during the first time frame ([pg. 2]: “single frame”; “at least two radar hits”; [pg. 7]: “radar hits acquired in a single sweep”; “point-wise”); determining a second position of the second detection and a second Doppler frequency for the second detection ([pg. 1]: “3D positions”; “Doppler effect”; [pg. 2]: “point-wise”); updating the second position to a second predicted position in the second time frame based on calculations using the second Doppler frequency ([pg. 7]: Eq. 19; “transformed to current radar coordinates by known egomotion”); and detecting the object from the first predicted position in the second time frame and the second predicted position in the second time frame ([pg. 2]: “accumulate multiple prior radar frames”; “object detection.”; Fig. 6).

Regarding Claims 3 and 10, Long discloses: the method further comprising updating the first predicted position in the second time frame to a second predicted position in a third time frame based on a first calculation using the first Doppler frequency and the velocity of the host vehicle obtained in the second time frame ([pg. 2]: “multiple prior radar frames”; [pg. 7]: Eq. 19; “known egomotion”; “up to 25 frames”).

Regarding Claims 4 and 11, Long discloses: the method further comprising receiving the second detection within the second time frame, determining a second position of the second detection and a second Doppler frequency for the second detection in the second time frame ([pg. 2]: “consecutive frames”; “point-wise”), and updating the second position to a third predicted position in the third time frame using a second calculation based on the second Doppler frequency ([pg. 7]: Eq. 19; “known egomotion”; “up to 25 frames”).
Regarding Claims 5 and 12, Long discloses: wherein detecting the object further comprises determining at least one of: (i) a position of the object; (ii) a shape of the object; (iii) an orientation of the object; and (iv) a class of the object ([pg. 7]: “we apply a pose estimation method”).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 6-7 and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Long (Long et al., “Full-Velocity Radar Returns by Radar-Camera Fusion,” 2021) in view of Abbott (US 2024/0280372).

Regarding Claims 6, 13, and 20, Long teaches: wherein the first time frame is one of a plurality of temporally-spaced time frames ([pg. 2]: “multiple prior radar frames”; “consecutive frames”; [pg. 7]: “25 frames”). Long generally teaches the idea of selecting a subset of frames and accumulating prior and consecutive frames ([pg. 2]: “multiple prior radar frames”; “consecutive frames”; “carefully decide the number of frames”; [pg. 7]: “error increases with the number of frames”), but does not explicitly teach: selecting a subset of the plurality of temporally-spaced time frames using a moving time window. Abbott teaches: selecting a subset of the plurality of temporally-spaced time frames using a moving time window (Abbott [0040]: “sliding window of multiple frames may be used to build a dense 3D point cloud”). It would have been obvious to one of ordinary skill in the art to modify Long and select a subset of frames using a moving time window, as taught by Abbott. Moving time windows are considered ordinary and well-known in the art, and they are beneficial for reducing error and improving detection.

Regarding Claims 7 and 14, Long does not explicitly teach – but Abbott teaches: the method further comprising controlling the host vehicle to navigate the host vehicle with respect to the object based on the first predicted position and the second detection (Abbott [0075]). It would have been obvious to one of ordinary skill in the art to modify Long and navigate the host vehicle with respect to the object, as taught by Abbott. The disclosure of Long is directed to autonomous driving, and navigating a vehicle based on detections is considered ordinary and well-known in the art. Navigating the host vehicle with respect to the object is beneficial for enabling safer vehicle operation.

Regarding Claim 15, Long teaches: A host vehicle, comprising: … a processor ([pg. 5]: “compute”; [pg. 7]: “processing”) configured to: receive a first detection of a first reflection point from an object during a first time frame of a radar ([pg. 2]); determine a first position and a first Doppler frequency of the first detection ([pg. 1]; [pg. 4]); update the first position to a first predicted position in a second time frame using the first Doppler frequency ([pg. 7]), wherein updating includes: determining an object-based component of the first Doppler frequency for the first detection from the first Doppler frequency by removing an effect of a velocity of the host vehicle from the first Doppler frequency ([pg. 4]); shifting the first detection from the first position to an intermediate position in the second time frame using the object-based component of the first Doppler frequency ([pg. 7]); shifting the first detection from the intermediate position to the first predicted position in the second time frame using a vehicle-based component of the first Doppler frequency ([pg. 7]); receive a second detection from a second reflection point from the object ([pg. 2]); detect the object from the first predicted position in the second time frame and the second detection ([pg. 2]; Fig. 6); and … Long does not explicitly teach – but Abbott teaches: a system for controlling navigation of the host vehicle and control the system to navigate the host vehicle with respect to the object (Abbott [0075]: “control component(s) of the vehicle”; “the vehicle 1200 may use this information (e.g., instances of obstacles) to localize its position in a map, to navigate”). It would have been obvious to one of ordinary skill in the art to modify Long to include a system for controlling navigation of the host vehicle and to navigate the host vehicle with respect to the object, as taught by Abbott. The disclosure of Long is directed to autonomous driving, and controlling and navigating a vehicle based on detections is considered ordinary and well-known in the art. Navigating the host vehicle with respect to the object is beneficial for enabling safer vehicle operation.

Regarding Claim 16, Long teaches: wherein the processor is further configured to: receive the second detection at the first time frame ([pg. 2]; [pg. 7]); determine a second position of the second detection and a second Doppler frequency for the second detection ([pg. 1]; [pg. 2]); update the second position to a second predicted position in the second time frame based on calculations using the second Doppler frequency ([pg. 7]); and detect the object from the first predicted position in the second time frame and the second predicted position in the second time frame ([pg. 2]; Fig. 6).

Regarding Claim 17, Long teaches: wherein the processor is further configured to update the first predicted position in the second time frame to a second predicted position in a third time frame based on a first calculation using the first Doppler frequency and the velocity of the host vehicle obtained in the second time frame ([pg. 2]; [pg. 7]).

Regarding Claim 18, Long teaches: wherein the processor is further configured to receive the second detection within the second time frame, determining a second position of the second detection and a second Doppler frequency for the second detection in the second time frame, and update the second position to a third predicted position in the third time frame using a second calculation based on the second Doppler frequency ([pg. 2]; [pg. 7]).

Regarding Claim 19, Long teaches: wherein the processor is further configured to detect the object by determining at least one of: (i) a position of the object; (ii) a shape of the object; (iii) an orientation of the object; and (iv) a class of the object ([pg. 7]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NOAH Y. ZHU whose telephone number is (571)270-0170. The examiner can normally be reached Monday-Friday, 8AM-4PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William J. Kelleher, can be reached on (571) 272-7753. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NOAH YI MIN ZHU/
Examiner, Art Unit 3648

/William Kelleher/
Supervisory Patent Examiner, Art Unit 3648
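The "moving time window" feature the examiner reads onto Abbott's sliding window of frames amounts to bounded frame accumulation: only the most recent N compensated frames contribute to the dense point cloud, which bounds the accumulation error Long notes grows with frame count. A minimal sketch, assuming frames are already shifted into the current coordinate frame; the class name and API are hypothetical, not from either reference:

```python
from collections import deque

class MovingWindowAggregator:
    """Accumulate compensated radar frames over a moving time window.

    Sketch of the sliding-window idea attributed to Abbott [0040]:
    deque(maxlen=N) automatically evicts the oldest frame, so the
    aggregate cloud is always built from the N most recent frames.
    """

    def __init__(self, window_size=5):
        self.window = deque(maxlen=window_size)

    def add_frame(self, points):
        """points: iterable of (x, y, z) detections already shifted
        into the current radar coordinate frame."""
        self.window.append(list(points))

    def point_cloud(self):
        """Dense cloud built only from frames inside the window."""
        return [p for frame in self.window for p in frame]
```

Using a deque with `maxlen` keeps eviction O(1) per frame; older frames (whose compensated positions carry the most accumulated error) simply fall out of the window.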

Prosecution Timeline

Apr 25, 2024
Application Filed
Feb 05, 2026
Non-Final Rejection — §102, §103
Apr 15, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591041 — System and Method for Robotic Inspection
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12584779 — RADAR LEVEL GAUGE SYSTEM PROPAGATING MULTIPLE INDIVIDUALLY GENERATED TRANSMIT SIGNALS BY A COMMON ANTENNA
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12553760 — MICROWAVE TRANSMISSION ARRANGEMENT WITH ENCAPSULATION, COMMUNICATION AND/OR MEASUREMENT SYSTEM AND RADAR LEVEL GAUGE SYSTEM
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12546859 — RADAR CONTROL DEVICE, METHOD AND SYSTEM
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12493118 — INTEGRATED SURVEILLANCE RADAR SYSTEM
Granted Dec 09, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 98% (+16.7%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 60 resolved cases by this examiner. Grant probability derived from career allow rate.
