Prosecution Insights
Last updated: April 19, 2026
Application No. 18/984,306

METHOD, APPARATUS, AND SYSTEM FOR ESTIMATING A CORRECTED DIRECTIONAL ANGLE MEASURED BY A RADAR BY USING INPUT FROM A CAMERA

Non-Final OA: §103, §112
Filed: Dec 17, 2024
Examiner: GUYAH, REMASH RAJA
Art Unit: 3648
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Axis AB
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
OA Rounds: 1-2
To Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76%, above average (68 granted / 89 resolved; +24.4% vs TC avg)
Interview Lift: strong, +34.2% for resolved cases with interview
Typical Timeline: 3y 2m avg prosecution; 34 currently pending
Career History: 123 total applications across all art units
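As a sanity check on the headline figures, a short sketch of the implied arithmetic; it assumes "career allow rate" simply means granted divided by resolved, and it back-derives the Tech Center average from the displayed +24.4% delta rather than from source data:

```python
# Back-of-the-envelope check on the headline numbers. Assumption: "career
# allow rate" = granted / resolved; the Tech Center average is inferred
# from the displayed "+24.4% vs TC avg" chip, not taken from source data.

granted, resolved = 68, 89
allow_rate = granted / resolved          # 0.764 -> displayed as 76%
implied_tc_avg = allow_rate - 0.244      # ~0.52, per the +24.4% delta
print(f"allow rate {allow_rate:.1%}; implied TC average {implied_tc_avg:.1%}")
```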

Statute-Specific Performance

§101: 4.0% (-36.0% vs TC avg)
§103: 60.2% (+20.2% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 22.0% (-18.0% vs TC avg)
Deltas are measured against Tech Center average estimates • Based on career data from 89 resolved cases

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. EP23218997.7, filed on 12/21/2023.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/17/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the IDS has been considered by the examiner.

Claim Objections

Claim 1 is objected to because of the following informalities: "… detections form the camera…" should be "… detections from the camera…". Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 9 recites the limitation "converting" in "wherein the converting includes". There is insufficient antecedent basis for this limitation in the claim. The claim depends from claim 1, and claim 1 does not introduce any “converting” step. The term “converting” is first introduced in claim 7. The use of the definite article “the” indicates that “converting” should have been previously introduced in the dependency chain.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Niesen (US 2019/0163198 A1) in view of Emadi et al. (US 2021/0199759 A1).

Regarding Claims 1, 11, and 13, Niesen (‘198) in view of Emadi et al. (‘759) teaches:

Niesen (‘198) teaches: A method for estimating a corrected directional angle measured by a radar by using input from a camera having an overlapping field of view with the radar, wherein the radar and the camera have known positions and orientations in relation to each other, comprising: ([0027]: “there may be a substantial overlap between a RADAR sensor(s) 140 field of view and the one or more cameras 110a, 110b field of view.”; [0035]: “there may be a one-time alignment step (e.g., a calibration step) where regions in the RADAR’s reference frame may be translated into regions on the image plane of at least one camera.”; [0051]: “the translation between IMU 210 3D position and 3D orientation and the camera(s) 212 (110a, 110b) may be computed.”).

Niesen (‘198) teaches: receiving radar detections of one or more first objects in a scene, wherein each radar detection is indicative of a first directional angle and a distance of a respective first object in relation to the radar, ([0033]: “The one or more RADAR sensor(s) 220 may produce a three-dimensional (3D) RADAR (reference) depth map and/or a 3D RADAR velocity map.”; [0052]: “The associator may also use the aligned 3D RADAR depth image to compute a depth of the 2D visual feature.”).

Niesen (‘198) teaches: receiving camera detections of one or more second objects in the scene, wherein the radar and camera detections are simultaneous, and wherein each camera detection is indicative of a direction of a respective second object in relation to the camera, ([0034]: “the vehicle 100 may include one or more camera(s) 212, and in the field of view of the one or more camera(s) capture visual scenes and output multiple video frames.”; [0049]: “The feature detector 314 may extract and detect different types of features associated with an object. For example, the following different types of features may be detected: (a) an edge… (b) a corner… (c) a blob… or, a ridge.”).

Niesen (‘198) teaches: identifying a radar detection and a camera detection which are detections of a same object in the scene by comparing the received radar detections to the received camera detections, and ([0052]: “The RADAR-based feature associator 344 determines which 2D visual features detected by the feature detector 340 correspond to the predicted 3D positions provided by the predictor 370. The correspondence between the predicted 3D positions of the visual features and the 2D visual detected features may be based on an association list between the detected vs. predicted visual features. Matching the 2D detected visual features to the 3D predicted positions in the association list may be based on either a similarity measure (i.e., how similar are the visual features (detected vs. predicted)), or, distance error between positions of the 2D detected visual features and 3D predicted positions, or, a combination of both.”).
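To make the association step concrete, here is a minimal Python sketch of matching radar detections (azimuth and range) against camera detections (bearing) via a deviation measure in a common frame. The planar 2D geometry, the function names, and the 0.02 rad threshold are illustrative assumptions, not taken from the claims or the cited references:

```python
import math

# Minimal sketch of the claim 1 association step: match each radar
# detection (azimuth + range in the radar frame) to the camera detection
# (bearing in the camera frame) that deviates least in a common frame.
# Planar 2D geometry, names, and threshold are illustrative assumptions.

def radar_to_xy(azimuth, rng):
    """Radar detection -> Cartesian point in the radar frame (radar at origin)."""
    return (rng * math.cos(azimuth), rng * math.sin(azimuth))

def expected_camera_bearing(point, camera_pos, camera_yaw):
    """Bearing at which the camera should see `point`, given the known
    relative position and orientation of the two sensors."""
    return math.atan2(point[1] - camera_pos[1], point[0] - camera_pos[0]) - camera_yaw

def associate(radar_dets, camera_bearings, camera_pos, camera_yaw, threshold=0.02):
    """Pair each radar detection with the closest camera bearing, keeping
    the pair only if the deviation measure is below the threshold
    (the deviation-threshold idea of claims 4 and 6)."""
    pairs = []
    for i, (az, rng) in enumerate(radar_dets):
        expected = expected_camera_bearing(radar_to_xy(az, rng), camera_pos, camera_yaw)
        j = min(range(len(camera_bearings)), key=lambda k: abs(camera_bearings[k] - expected))
        if abs(camera_bearings[j] - expected) < threshold:
            pairs.append((i, j))
    return pairs
```

For example, associate([(0.31, 42.0)], [0.12], camera_pos=(0.5, 0.0), camera_yaw=0.2) pairs the single radar detection with the single camera bearing, since the expected bearing (about 0.114 rad) deviates from 0.12 rad by well under the threshold. A deviation measure combining angle with speed, as in claims 5 and 16, would extend the same comparison with a Doppler-versus-image-motion term.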
Niesen (‘198) teaches coordinate transformation between radar and camera reference frames: estimating a corrected first directional angle for the identified radar detection by using the direction of the identified camera detection and the known positions and orientations of the radar and the camera, ([0035]: “the RADAR-based image aligner may be used to translate the RADAR reference depth map into depth information into at least one image plane of at least one camera.”; [0051]: “A translation between the 3D world reference frame and the 3D camera frame is aided by the IMU 210… the 3D position and 3D orientation of the camera(s) 212 (110a, 110b) may also be estimated in a 3D world reference frame.”).

Niesen (‘198) does not explicitly teach using the camera direction to estimate a corrected directional angle for the radar detection, but Emadi et al. (‘759) teaches correcting radar directional measurements ([0030]: “When the angle 123 between the radar 122 and the IMU 121 is calibrated, e.g., by tuning the angle 123 in ±0.1 degree increments, the clutter locations observed from radar images may change. The angle 123 may continue to be tuned until a convergence of the clutter locations is observed.”; [0002]: “the angle usually needs to be calibrated to provide accurate radar sensing and measurement.”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the radar-camera association system of Niesen (‘198) with the radar angle correction teachings of Emadi et al. (‘759). One would have been motivated to do so because Emadi et al. (‘759) explicitly recognizes that radar angular measurements require calibration to provide accurate sensing ([0003]), and Niesen (‘198) already teaches that camera-based measurements can provide more accurate positional information than uncorrected radar alone ([0030]: “The large initial uncertainty signifies that the tracked visual features may initially not be very useful”). There is a reasonable expectation of success because Niesen (‘198) already teaches the necessary infrastructure for matching radar and camera detections of the same object and performing coordinate transformations between sensor reference frames, making the application of camera-derived angular corrections technically feasible.

Niesen (‘198) teaches: wherein the method is repeated over time to accumulate data ([0060]: “tracking of device position, device orientation, device velocity, and the tracking of the visual features (and the positions of the visual features) may take place over a current image frame… The tracking… continues in the next image frame based on the updater 330 and predictor 370.”).

Niesen (‘198) does not explicitly teach accumulating data which associates a first directional angle and a distance of each of a plurality of identified radar detections with a respective corrected first directional angle, and wherein the accumulated data is used to associate a first directional angle and a distance of a further radar detection with a corrected first directional angle, but Emadi et al. (‘759) teaches accumulating calibration data over time and using it for subsequent corrections ([0029]: “When vehicle 110 is in motion, e.g., travels along the street, the radar unit 122 is configured to generate radar scans or images 131 of the surroundings at fixed intervals, e.g., 50-100 ms apart.”; [0036]: “When the clutter pattern 302c converges, e.g., to a straight line, a spot, etc., showing a minimum average distance among the target points which corresponds to the minimum variance of the global coordinates of the target points, the corresponding position of the radar unit 122 can be considered as the newly calibrated position.”; [0050]: “the process 408 including steps 502-520 can be performed dynamically, periodically or on demand while a vehicle is moving.”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the radar-camera system of Niesen (‘198) with the accumulated calibration data approach of Emadi et al. (‘759). One would have been motivated to do so because Emadi et al. (‘759) teaches that accumulated data improves calibration accuracy by tracking convergence patterns over multiple scans ([0036]), which would provide increasingly accurate radar corrections without requiring constant recomputation. There is a reasonable expectation of success because both references operate in the same technical field of vehicle-based radar systems and Niesen (‘198) already teaches continuous tracking over multiple frames, making the addition of data accumulation a straightforward extension of existing functionality.

Regarding Claim 2, Niesen (‘198) in view of Emadi et al. (‘759) teaches the method according to claim 1. Niesen (‘198) teaches: wherein the first directional angle is an azimuth angle or an elevation angle defined in relation to the radar ([0026]: “Each local reference frame may be defined by a z-axis, y-axis and x-axis. The z-y plane is perpendicular to the ‘forward’ direction of travel of the vehicle, depicted by the x-axis.”).

Regarding Claims 3 and 14, Niesen (‘198) in view of Emadi et al. (‘759) teaches the method according to claim 1. Niesen (‘198) does not explicitly teach: wherein, when the method is repeated, the data accumulated so far is used to correct the first directional angle indicated by the one or more received radar detections prior to comparing the received radar detections to the received camera detections, but Emadi et al. (‘759) teaches using accumulated calibration data to correct subsequent measurements ([0037]: “the calibration may continue to increment the angle for more convergence of the clutter pattern. In some embodiments, the incremented angle may vary depending a desired convergence.”; [0050]: “the process 408 including steps 502-520 can be performed dynamically, periodically or on demand while a vehicle is moving.”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to apply the accumulated correction data to radar detections prior to comparison with camera detections in the combined system of Niesen (‘198) and Emadi et al. (‘759). One would have been motivated to do so because Niesen (‘198) teaches that matching is based on “distance error between positions of the 2D detected visual features and 3D predicted positions” ([0052]), and applying accumulated corrections first would reduce this error, thereby improving the matching accuracy for subsequent detections. There is a reasonable expectation of success because this represents a logical ordering of processing steps within the existing system architecture, and applying known corrections before comparison is a standard signal processing technique.
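The accumulation limitation of claims 1, 3, and 14 lends itself to a small lookup structure. Below is a hedged sketch, assuming each identified radar/camera pair contributes an (azimuth, distance) to corrected-azimuth sample and that further detections are corrected by the mean offset of their bin; the binning scheme and mean-offset rule are illustrative choices, not taken from either reference:

```python
# Hedged sketch of the accumulation limitation: each identified pair adds
# an (azimuth, distance) -> corrected-azimuth sample, and later radar
# detections are corrected from the accumulated samples (claims 1 and 3).
# The angular/range binning and mean-offset correction are illustrative.

class CorrectionTable:
    def __init__(self, az_bin=0.05, rng_bin=5.0):
        self.az_bin, self.rng_bin = az_bin, rng_bin
        self.offsets = {}  # (azimuth bin, range bin) -> list of observed offsets

    def _key(self, azimuth, distance):
        return (round(azimuth / self.az_bin), round(distance / self.rng_bin))

    def add(self, azimuth, distance, corrected_azimuth):
        """Accumulate one identified radar/camera pair."""
        self.offsets.setdefault(self._key(azimuth, distance), []).append(
            corrected_azimuth - azimuth)

    def correct(self, azimuth, distance):
        """Associate a further radar detection with a corrected angle
        (applied before matching, per claims 3 and 14)."""
        samples = self.offsets.get(self._key(azimuth, distance))
        if not samples:
            return azimuth  # no accumulated data for this bin yet
        return azimuth + sum(samples) / len(samples)
```

For example, table.add(0.31, 42.0, 0.29) records a -0.02 rad offset, and a later table.correct(0.30, 41.0) falls in the same bin and returns 0.28.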
Regarding Claims 4 and 15, Niesen (‘198) in view of Emadi et al. (‘759) teaches the method according to claim 1. Niesen (‘198) teaches: wherein a radar detection and a camera detection are identified to be detections of a same object in the scene in case a deviation measure between the radar detection and the camera detection when represented in a common coordinate system is below a deviation threshold ([0052]: “Matching the 2D detected visual features to the 3D predicted positions in the association list may be based on either a similarity measure (i.e., how similar are the visual features (detected vs. predicted)), or, distance error between positions of the 2D detected visual features and 3D predicted positions, or, a combination of both a similarity measure and distance error between visual feature (detected vs. predicted) positions.”; [0035]: “the RADAR-based image aligner may be used to translate the RADAR reference depth map into depth information into at least one image plane of at least one camera.”; [0053]: “The positions of the 2D matched visual features may include outliers, i.e., values of positions (or similarity values from the similarity measure) which are inordinate to the other 2D matched features.”).

Regarding Claims 5 and 16, Niesen (‘198) in view of Emadi et al. (‘759) teaches the method according to claim 4. Niesen (‘198) teaches: wherein the deviation measure includes a measure of deviation in speed between a first object associated with the radar detection and a second object associated with the camera detection ([0029]: “Using the Doppler frequency shifts, the RADAR measurements by the RADAR sensor(s) 140 provide information about the relative movement of object targets in the RADAR sensor(s) 140 field of view.”; [0033]: “The one or more RADAR sensor(s) 220 may produce a three-dimensional (3D) RADAR (reference) depth map and/or a 3D RADAR velocity map.”; [0029]: “Deviations from the prediction from a VIO are therefore caused by moving objects.”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to include speed deviation in the matching criteria. One would have been motivated to do so because Niesen (‘198) teaches that velocity information helps distinguish between static and moving objects ([0029]), which would improve matching accuracy by providing an additional discriminating factor. There is a reasonable expectation of success because both radar and camera systems in Niesen (‘198) are capable of tracking object motion over time.

Regarding Claims 6 and 17, Niesen (‘198) in view of Emadi et al. (‘759) teaches the method according to claim 4. Niesen (‘198) teaches: wherein a radar detection is only identified if there is a unique camera detection having a deviation measure with respect to the radar detection which is below the deviation threshold ([0053]: “The positions of the 2D matched visual features may include outliers, i.e., values of positions (or similarity values from the similarity measure) which are inordinate to the other 2D matched features. The inordinate values (positions or similarity values) may be detected with an outlier detector 346.”; [0052]: “The RADAR-based feature associator 344 determines which 2D visual features detected by the feature detector 340 correspond to the predicted 3D positions.”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to require unique matching between radar and camera detections. One would have been motivated to do so because ambiguous matches would introduce errors into the correction data, and Niesen (‘198) already teaches outlier detection to remove unreliable associations ([0053]). There is a reasonable expectation of success because the uniqueness requirement is a straightforward constraint that can be implemented using existing matching infrastructure.

Regarding Claims 7, 18, 19, and 20, Niesen (‘198) in view of Emadi et al. (‘759) teaches the method according to claim 1. Niesen (‘198) teaches: wherein estimating a corrected first directional angle for the identified radar detection includes: converting the direction of the identified camera detection into a direction which is described in relation to the radar by using the known positions and orientations of the radar and the camera, and calculating a corrected first directional angle from the direction which is described in relation to the radar ([0035]: “there may be a one-time alignment step (e.g., a calibration step) where regions in the RADAR’s reference frame may be translated into regions on the image plane of at least one camera.”; [0051]: “A translation between the 3D world reference frame and the 3D camera frame is aided by the IMU 210. The IMU 210 may estimate its 3D position and 3D orientation. As the camera(s) 212 (110a, 110b) may be coupled to the vehicle 100 (or drone) by a lever, and the IMU 210 is mounted to the vehicle 100, the translation between IMU 210 3D position and 3D orientation and the camera(s) 212 (110a, 110b) may be computed.”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to convert camera direction to radar reference frame and calculate a corrected angle. One would have been motivated to do so because Niesen (‘198) already teaches coordinate transformations between sensor frames, and expressing both measurements in a common frame is a prerequisite for computing angular corrections. There is a reasonable expectation of success because coordinate transformation mathematics are well-established and Niesen (‘198) demonstrates the capability to perform such transformations.

Regarding Claim 8, Niesen (‘198) in view of Emadi et al. (‘759) teaches the method according to claim 7. Niesen (‘198) teaches: wherein when the radar and the camera are arranged at the same position the converting includes compensating the direction of the identified camera detection in view of the known orientations of the radar and the camera ([0027]: “there may be a substantial overlap between a RADAR sensor(s) 140 field of view and the one or more cameras 110a, 110b field of view.”; [0035]: “there may be a one-time alignment step (e.g., a calibration step) where regions in the RADAR’s reference frame may be translated into regions on the image plane of at least one camera.”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention that when sensors are co-located, only orientation compensation would be required. One would have been motivated to simplify the transformation in this case because eliminating position translation reduces computational complexity while maintaining accuracy. There is a reasonable expectation of success because this represents a simplified case of the general coordinate transformation already taught by Niesen (‘198).

Regarding Claim 9, Niesen (‘198) in view of Emadi et al. (‘759) teaches the method according to claim 1. Niesen (‘198) teaches: wherein the converting includes: determining a point in the scene which is at the distance in relation to the radar indicated by the identified radar detection and in the direction in relation to the camera indicated by the identified camera detection, and calculating a direction of the determined point in relation to the radar ([0052]: “The associator may also use the aligned 3D RADAR depth image to compute a depth of the 2D visual feature, which can then be directly compared with the predicted 3D positions.”; [0035]: “the RADAR-based image aligner may be used to translate the RADAR reference depth map into depth information into at least one image plane of at least one camera.”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to determine a point using radar distance and camera direction, then calculate the direction from the radar’s perspective. One would have been motivated to do so because this approach leverages the complementary strengths of each sensor—radar’s accurate distance measurement and camera’s accurate angular measurement. There is a reasonable expectation of success because Niesen (‘198) already teaches combining radar depth with camera direction for feature association, and calculating the direction back to radar frame is a straightforward geometric operation.
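The claim 9 geometry reduces to a ray-circle intersection (ray-sphere in 3D): the determined point lies along the camera's reported direction and at the radar's reported range. Here is a minimal 2D sketch, assuming the radar sits at the origin and the camera direction has already been rotated into the radar-aligned frame (the compensation of claims 7 and 8); all names are illustrative:

```python
import math

# Sketch of the claim 9 geometry: find the point that lies along the
# camera's reported direction AND at the radar's reported range, then take
# that point's direction as seen from the radar. 2D, radar at the origin,
# camera direction already expressed in the radar-aligned frame (per the
# converting of claims 7/8). Names are illustrative assumptions.

def corrected_azimuth(radar_range, camera_dir, camera_pos):
    cx, cy = camera_pos
    dx, dy = math.cos(camera_dir), math.sin(camera_dir)
    # Intersect the camera ray camera_pos + t*(dx, dy), t >= 0, with the
    # circle |p| = radar_range: t^2 + 2t(p.d) + |p|^2 - r^2 = 0.
    b = 2.0 * (cx * dx + cy * dy)
    c = cx * cx + cy * cy - radar_range ** 2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        raise ValueError("camera ray never reaches the radar-reported range")
    t = (-b + math.sqrt(disc)) / 2.0   # take the forward intersection
    px, py = cx + t * dx, cy + t * dy
    return math.atan2(py, px)          # corrected first directional angle
```

Extending this to 3D turns the circle into a sphere and yields both a corrected azimuth and a corrected elevation, which is the two-angle case of claim 10.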
Regarding Claim 10, Niesen (‘198) in view of Emadi et al. (‘759) teaches the method according to claim 1. Niesen (‘198) teaches: wherein each radar detection is further indicative of a second directional angle of a respective first object in relation to the radar, wherein the first directional angle is an azimuth angle of the object in relation to the radar and the second directional angle is an elevation angle of the object in relation to the radar, ([0033]: “The one or more RADAR sensor(s) 220 may produce a three-dimensional (3D) RADAR (reference) depth map.”; [0026]: “Each local reference frame may be defined by a z-axis, y-axis and x-axis.”). Niesen (‘198) in view of Emadi et al. (‘759) teaches: and wherein the method further comprises estimating a corrected second directional angle for the identified radar detection by using the direction of the identified camera detection and the known positions and orientations of the radar and the camera (see rejection of claim 1 above regarding estimating corrected directional angles). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to correct both azimuth and elevation angles using the same methodology. One would have been motivated to do so because both are directional angles subject to the same sources of error, and correcting only one would leave residual angular errors. There is a reasonable expectation of success because the camera provides full directional information in both dimensions, and the correction methodology applies equally to both angles.

Regarding Claim 11, Niesen (‘198) in view of Emadi et al. (‘759) teaches: Niesen (‘198) teaches: An apparatus for estimating a corrected directional angle measured by a radar by using input from a camera having an overlapping field of view with the radar, wherein the radar and the camera have known positions and orientations in relation to each other, the apparatus comprising processing circuitry configured to carry out a method comprising: ([0065]: “the device 601 includes one or more processor(s) 628 which may include: a central processing unit (CPU); or a digital processor (DSP); or a graphics processing unit (GPU), coupled to the memory 626.”; [0069]: “the one or more processor(s) 628 may include a RADAR-based image aligner 214, and a RADAR-aided visual inertial odometer 225 (as previously described) that are coupled to each other.”). The method limitations recited in claim 11 are substantially identical to those of claim 1 and are rejected for the same reasons as set forth in the rejection of claim 1.

Regarding Claim 12, Niesen (‘198) in view of Emadi et al. (‘759) teaches the apparatus according to claim 11. Niesen (‘198) teaches: further coupled to: a system for estimating a corrected directional angle measured by a radar by using input from a camera, comprising: the radar configured to make detections of one or more first objects in a scene, wherein each detection made by the radar is indicative of a first directional angle and a distance of a respective first object in relation to the radar, and the camera configured to simultaneously with the radar make detections of one or more second objects in the scene, wherein each detection made by the camera is indicative of a direction of a respective second object in relation to the camera. The apparatus is arranged to receive the detections from the camera and the radar ([0066]: “the one or more controllers 620 may be coupled to various peripheral devices (e.g., IMU 602, RADAR sensor(s) 604, camera(s) 606, display device 608, and loudspeaker(s) 610).”; [0068]: “each of the IMU 602, RADAR sensor(s) 604, camera(s) 606… may be coupled to a component of the system-on-chip device, such as one or more controller(s) 620, or the memory 626.”; [0033]-[0034] as cited in claim 1 regarding radar and camera detections).

Regarding Claim 13, Niesen (‘198) in view of Emadi et al. (‘759) teaches: Niesen (‘198) teaches: A computer-readable storage medium comprising computer program code which, when executed by a device with processing capability, causes the device to carry out a method ([0074]: “A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM)… or any other form of non-transient storage medium known in the art.”; [0065]: “The memory 626 includes instructions 630 (e.g., executable instructions) such as computer-readable instructions or processor-readable instructions.”). The method limitations recited in claim 13 are substantially identical to those of claim 1 and are rejected for the same reasons as set forth in the rejection of claim 1.

Regarding Claim 14, Niesen (‘198) in view of Emadi et al. (‘759) teaches the method according to claim 2. The claim depends from claim 2 and recites substantially the same limitations as claim 3. Therefore, claim 14 is rejected for the same reasons as set forth in the rejection of claim 3.

Regarding Claim 15, Niesen (‘198) in view of Emadi et al. (‘759) teaches the method according to claim 2. The claim depends from claim 2 and recites substantially the same limitations as claim 4. Therefore, claim 15 is rejected for the same reasons as set forth in the rejection of claim 4.

Regarding Claim 16, Niesen (‘198) in view of Emadi et al. (‘759) teaches the method according to claim 3. The claim depends from claim 3 and recites substantially the same limitations as claim 4. Therefore, claim 16 is rejected for the same reasons as set forth in the rejections of claims 3 and 4.

Regarding Claim 17, Niesen (‘198) in view of Emadi et al. (‘759) teaches the method according to claim 5. The claim depends from claim 5 and recites substantially the same limitations as claim 6 with the added limitation of speed in claim 5. Therefore, claim 17 is rejected for the same reasons as set forth in the rejections of claims 5 and 6.

Regarding Claim 18, Niesen (‘198) in view of Emadi et al. (‘759) teaches the method according to claim 2. The claim depends from claim 2 and recites substantially the same limitations as claim 7. Therefore, claim 18 is rejected for the same reasons as set forth in the rejections of claims 2 and 7.

Regarding Claim 19, Niesen (‘198) in view of Emadi et al. (‘759) teaches the method according to claim 3. The claim depends from claim 3 and recites substantially the same limitations as claim 7. Therefore, claim 19 is rejected for the same reasons as set forth in the rejections of claims 3 and 7.

Regarding Claim 20, Niesen (‘198) in view of Emadi et al. (‘759) teaches the method according to claim 4. The claim depends from claim 4 and recites substantially the same limitations as claim 7. Therefore, claim 20 is rejected for the same reasons as set forth in the rejections of claims 4 and 7.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to REMASH R GUYAH whose telephone number is (571)270-0115. The examiner can normally be reached M-F 7:30-4:30.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vladimir Magloire, can be reached at (571) 270-5144. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/REMASH R GUYAH/
Examiner, Art Unit 3648

/VLADIMIR MAGLOIRE/
Supervisory Patent Examiner, Art Unit 3648

Prosecution Timeline

Dec 17, 2024: Application Filed
Jan 13, 2026: Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601828: WEARABLE DEVICE AND CONTROL METHOD THEREOF (2y 5m to grant; granted Apr 14, 2026)
Patent 12596174: DISTANCE MEASUREMENT DEVICE, DISTANCE MEASUREMENT METHOD, AND RADAR DEVICE (2y 5m to grant; granted Apr 07, 2026)
Patent 12591038: RADAR CONTROL DEVICE AND METHOD (2y 5m to grant; granted Mar 31, 2026)
Patent 12591067: METHOD AND APPARATUS FOR COOPERATIVE MULTI-TARGET ASSIGNMENT (2y 5m to grant; granted Mar 31, 2026)
Patent 12578460: GUARD BAND ANTENNA IN A BEAM STEERING RADAR FOR RESOLUTION REFINEMENT (2y 5m to grant; granted Mar 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 99% (+34.2%)
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 89 resolved cases by this examiner. Grant probability derived from career allow rate.
