Prosecution Insights
Last updated: April 19, 2026
Application No. 18/171,685

RADAR OBJECT RECOGNITION SYSTEM AND METHOD AND NON-TRANSITORY COMPUTER READABLE MEDIUM

Status: Non-Final OA (§103)
Filed: Feb 21, 2023
Examiner: SATCHER, DION JOHN
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: National Yang Ming Chiao Tung University
OA Round: 3 (Non-Final)
Grant Probability: 85% (Favorable)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Grants 85% — above average
Career Allow Rate: 85% (33 granted / 39 resolved; +22.6% vs TC avg)
Interview Lift: +14.2% on resolved cases with interview (moderate lift)
Avg Prosecution: 3y 0m typical timeline; 29 currently pending
Total Applications: 68 across all art units (career history)

Statute-Specific Performance

§101: 14.2% (-25.8% vs TC avg)
§103: 61.9% (+21.9% vs TC avg)
§102: 15.1% (-24.9% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)
Deltas are vs. the Tech Center average estimate • Based on career data from 39 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This communication is in response to the Application filed on 11/18/2025. Claims 1, 3–6, 8–11 and 13–15 are pending in this application.

Drawings

The drawings filed on 02/21/2023 are accepted by the Examiner.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/18/2025 has been entered.

Response to Amendment

Applicant's Amendments filed on 11/18/2025 have been entered and made of record.

Currently pending claims: 1, 3–6, 8–11 and 13–15
Independent claims: 1, 6 and 11
Amended claims: 1, 6 and 11
Cancelled claims: 2, 7 and 12

Response to Applicant's Arguments

This Office action is responsive to Applicant's Arguments/Remarks Made in an Amendment received on 11/18/2025. Applicant's Reply (November 18, 2025) includes new amendments to the claims that change the scope of the claims. Applicant argues, in summary, that the applied prior art (Chen) does not disclose or suggest (see pages 10 and 11): "wherein the radar data is a two-dimensional radar data map, …, normalizing all two-dimensional bins of the radar data map to obtain a normalized radar data map, …, performing a target enhancement on the normalized radar data map." However, the Examiner respectfully disagrees with the Applicant's line of reasoning.
The Examiner has thoroughly reviewed the Applicant's arguments but respectfully believes that the cited reference reasonably and properly meets the claimed limitations. Chen states in ¶ [0319], "The input data voxel 2100 includes a first array 2101 which is arranged like a range-Doppler map, i.e. it has an entry for each combination of a range bin and a doppler bin," and in ¶ [0320], "However, instead of having a single complex value (e.g. FFT output) for each range bin Doppler bin combination (i.e. each range-Doppler bin), the input data voxel 2100 has a second array 2102 for each range bin doppler bin combination. The second array 2102 contains a real or complex value for each combination of an elevation bin and an azimuth bin. As in a range-Doppler map, a peak of the values indicates an object at the 3D position given by the elevation bin, azimuth bin and range bin with a radial velocity given by the radial velocity bin." The 4D map includes a 2D map, as the 2D range-Doppler array has an associated 2D elevation and azimuth array; Chen is applying these processes on a 2D map. Chen also teaches processing the bins and normalizing them in ¶ [0244]: "The graphical representation 1500b was obtained using operations described above in relation to FIGS. 11-42. The graphical representation 1500b includes a 4D map that includes estimations of the object parameters using the neural network machine learning algorithm. The graphical representation 1500b includes a binned map, the size of the FFT map, and each bin is normalized to a value of one. Bins in the graphical representation 1500b that include high values are considered to correspond to objects in the environment." Since the 4D map includes the 2D map, the target processing is also being performed on the 2D map.
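As an illustration only (not code from Chen or the application; the function name and max-based normalization scheme are assumptions for this sketch), the kind of per-map bin normalization quoted above, where "each bin is normalized to a value of one," might be sketched for a 2D range-Doppler map as:

```python
import numpy as np

def normalize_rd_map(rd_map: np.ndarray) -> np.ndarray:
    """Normalize all 2D bins of a range-Doppler map into [0, 1].

    Illustrative sketch only; the exact scheme used by the cited
    reference is not reproduced here. Bins may hold complex FFT output,
    so magnitudes are taken first; the strongest bin becomes 1.0.
    """
    magnitudes = np.abs(rd_map)
    peak = magnitudes.max()
    if peak == 0:
        return magnitudes  # empty map: nothing to scale
    return magnitudes / peak
```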
[Image: media_image1.png (greyscale)]

Therefore, with this broad interpretation, Chen in combination with Xie teaches, discloses or suggests the Applicant's invention: creating a radar image based on radar data, performing object recognition based on the radar image, and performing post-processing to eliminate recognition errors, wherein the radar image generation comprises normalizing the radar map, performing target enhancement on the normalized radar map based on a lower bound of signal strength, and performing Cartesian conversion on the enhanced radar map to obtain the radar image. Thus, due to Applicant's broad claim language, Applicant's invention is not far removed from the art of record. Accordingly, these limitations do not render the claims patentably distinct over the prior art of record. As a result, it is respectfully submitted that the present application is not in condition for allowance. The Examiner therefore maintains that the limitations as presented and as rejected were properly and adequately met. The rejection as presented in the non-final rejection is maintained regarding the above limitation. Additional and/or modified citations may be present to more concisely address limitations; however, the grounds of rejection remain the same. Please see the rejection below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claim(s) 1, 3–6, 8–11 and 13–15 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 2022/0196798 A1, hereafter "Chen") in view of Xie et al. (US 2022/0300739 A1, hereafter "Xie").

Regarding claim 1, Chen discloses a radar object recognition system (See Chen, ¶ [0192], In some aspects, the radar detector 1110 may include a neural network machine learning algorithm to perform object detection), comprising: a storage device configured to store at least one instruction; and a processor electrically connected to the storage device, the processor configured to execute the at least one instruction for (See Chen, ¶ [0191], The radar device 1102 may include a radar processor 1104 and a radar detector 1110 that may generate an error value 1120. See also FIG. 11, 1104 Radar Processor): performing a radar image generation on radar data to generate a radar image (See Chen, ¶ [0370], Range and Doppler processing creates a range-doppler map, and the AoA estimation creates an azimuth-elevation map for each range-doppler bin, thus resulting in a 4D voxel. A detector may then create a point cloud, which can then be an input for a perception pipeline. See also ¶ [0363], Conventional techniques for generating high-resolution radar images (i.e. point clouds)); inputting the radar image into an object recognition model, so that the object recognition model outputs a recognition result (See Chen, ¶ [0420], The radar processor 104 may use one or more neural networks to perform radar-based perception tasks (such as object detection, classification, segmentation) based on a 3D point cloud); and [performing a post-process on the recognition result to eliminate a recognition error from the recognition result], wherein the radar data is a two-dimensional radar data map, and the radar image generation executed by the processor comprises (See Chen, ¶ [0370], Range and Doppler processing creates a range-doppler map. Note: the range-doppler map is a 2D radar map): normalizing all two-dimensional bins of the radar data map to obtain a normalized radar data map (See Chen, ¶ [0244], "The graphical representation 1500b was obtained using operations described above in relation to FIGS. 11-42. The graphical representation 1500b includes a 4D map that includes estimations of the object parameters using the neural network machine learning algorithm.
The graphical representation 1500b includes a binned map, the size of the FFT map, and each bin is normalized to a value of one"); performing a target enhancement on the normalized radar data map, by setting a lower bound of signal strength of the normalized radar data map, to obtain an enhanced radar data map (See Chen, ¶ [0430], "In a typical input data voxel, the 4D bins in the field of view in general consist of three categories: I) 4D bins genuinely occupied by objects; II) 4D bins with clutter and ghost responses; III) 4D bins with only thermal noise. Most of the 4D bins in the field of view fall into Category III. Typically, the goal is to preserve 4D bins in Category I and remove noise, clutter and ghost responses. However, it often erroneously eliminates genuine weak targets." ¶ [0436], "The pre-processor then removes the rows of X′ with P_db,k < th from X′, wherein th = C·σ_N² is a predefined threshold value which is for example based on thermal noise variance σ_N². For example the radar system includes a temperature sensor configured to measure the ambient temperature and determines the threshold from the measured temperature. The result of this removal operation is an array X″ ∈ R^(N×2) with N ≪ N_rn·N_dop·N_az·N_el." ¶ [0527], "According to various embodiments, in other words, a radar system proceeds on the basis of radar reception values (e.g. IQ samples) which lie above a thermal noise threshold. Thus, it may for example be ensured that weak targets are considered in the further processing (e.g. are detected) while keeping the amount of data to be processed, stored and communicated, e.g. in the radar baseband processing pipeline, low. The reception data values are for example generated by a radar sensor of the radar system. It should be noted that the further processing may include removal of clutter and ghost responses (e.g. followed by detection, perception etc.)." Note: the Examiner is interpreting removing the thermal noise as enhancing the target); and converting the enhanced radar data map into the radar image through a Cartesian coordinate conversion, wherein the radar image is a two-dimensional image (See Chen, ¶ [0343], The coordinate transformation block 2502 may perform transformation of polar coordinates to cartesian coordinates (i.e. polar coordinate bins to cartesian coordinate bins). See also ¶ [0420], based on a 3D point cloud or projecting 4D/3D reception data to a 2D plane such as range-Doppler or range-azimuth (optionally projected to X-Y plane in Cartesian space) for reduced storage and computational complexity).

However, Chen fails to teach performing a post-process on the recognition result to eliminate a recognition error from the recognition result. Xie, working in the same field of endeavor, teaches performing a post-process on the recognition result to eliminate a recognition error from the recognition result (See Xie, ¶ [0040], To overcome the excess computation and energy problem, a Fast NMS algorithm can be used as follows. The basic idea is to introduce a filter between step 1 and step 2 of the above recited algorithm as step 1a, which vastly reduces the number of computations required by pre-emptively removing unnecessary bounding boxes from further processing). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chen to perform a post-process on the recognition result to eliminate a recognition error from the recognition result, based on the method of Xie. The suggestion/motivation would have been to vastly reduce the computation and energy required for processing (See Xie, ¶¶ [0001–0008, 0040]).
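For illustration only (the function names, the boresight azimuth convention, and the nearest-cell scattering scheme are assumptions for this sketch, not code from Chen or the claims), the two remaining generation steps mapped above, target enhancement via a signal-strength lower bound and polar-to-Cartesian conversion, might be sketched as:

```python
import numpy as np

def enhance_targets(norm_map: np.ndarray, lower_bound: float) -> np.ndarray:
    """Zero out bins below a signal-strength lower bound (e.g. a
    thermal-noise threshold), so the surviving bins stand out as targets."""
    enhanced = np.array(norm_map, dtype=float, copy=True)
    enhanced[enhanced < lower_bound] = 0.0
    return enhanced

def polar_to_cartesian(polar_map: np.ndarray,
                       ranges: np.ndarray,
                       azimuths: np.ndarray,
                       grid_size: int = 64) -> np.ndarray:
    """Scatter a (range x azimuth) polar map onto a 2D Cartesian image.

    Assumes azimuth is measured from boresight in radians within
    [-pi/2, pi/2], so x = r*sin(az) and y = r*cos(az) with y >= 0.
    Each polar bin lands on its nearest Cartesian cell; on collisions
    the stronger value is kept.
    """
    image = np.zeros((grid_size, grid_size))
    max_r = float(ranges.max())
    for i, r in enumerate(ranges):
        for j, az in enumerate(azimuths):
            x = r * np.sin(az)
            y = r * np.cos(az)
            col = int((x + max_r) / (2 * max_r) * (grid_size - 1))
            row = int(y / max_r * (grid_size - 1))
            image[row, col] = max(image[row, col], polar_map[i, j])
    return image
```

The per-cell maximum is one simple way to resolve several polar bins mapping to the same Cartesian cell; interpolation schemes are also common.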
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Xie with Chen to obtain the invention as specified in claim 1.

Regarding claim 3, Chen in view of Xie teaches the radar object recognition system of claim 1, wherein the object recognition model is a deep learning object recognition model, the deep learning object recognition model recognizes the radar image to obtain the recognition result, and the recognition result comprises a plurality of bounding boxes in the radar image (See Chen, ¶ [0324], The first neural network 2001 may also output bounding boxes for the various objects, i.e. bounding boxes of the 4D-bins within the input data voxel belonging to the same object. The first neural network 2001 may also generate a radial velocity estimate and/or an orientation and/or 2D/3D bounding boxes for the object associated with a segment), [the bounding boxes have a plurality of confidence values respectively, and the confidence values represent confidence levels of the deep learning object recognition model of determining whether the bounding boxes comprises an object]. However, Chen fails to teach the bounding boxes having a plurality of confidence values respectively, with the confidence values representing confidence levels of the deep learning object recognition model of determining whether the bounding boxes comprise an object. Xie, working in the same field of endeavor, teaches that the bounding boxes have a plurality of confidence values respectively, and the confidence values represent confidence levels of the deep learning object recognition model of determining whether the bounding boxes comprise an object (See Xie, ¶ [0042], Get confidence_score, class_score, box_pos_info original data output from the deep learning model inference output.).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chen such that the bounding boxes have a plurality of confidence values respectively, with the confidence values representing confidence levels of the deep learning object recognition model of determining whether the bounding boxes comprise an object, based on the method of Xie. The suggestion/motivation would have been to vastly reduce the computation and energy required for processing (See Xie, ¶¶ [0001–0008, 0040]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Xie with Chen to obtain the invention as specified in claim 3.

Regarding claim 4, Chen in view of Xie teaches the radar object recognition system of claim 3, [wherein the post-process comprises overlap elimination, and the processor executes the overlap elimination through a non-maximum suppression of different classes based on the confidence values to eliminate overlapping bounding boxes from the bounding boxes]. However, Chen fails to teach this limitation. Xie, working in the same field of endeavor, teaches wherein the post-process comprises overlap elimination, and the processor executes the overlap elimination through a non-maximum suppression of different classes based on the confidence values to eliminate overlapping bounding boxes from the bounding boxes (See Xie, ¶ [0040], To overcome the excess computation and energy problem, a Fast NMS algorithm can be used as follows. The basic idea is to introduce a filter between step 1 and step 2 of the above recited algorithm as step 1a, which vastly reduces the number of computations required by pre-emptively removing unnecessary bounding boxes from further processing.). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chen with this overlap elimination based on the method of Xie. The suggestion/motivation would have been to vastly reduce the computation and energy required for processing (See Xie, ¶¶ [0001–0008, 0040]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Xie with Chen to obtain the invention as specified in claim 4.

Regarding claim 5, Chen in view of Xie teaches the radar object recognition system of claim 4, [wherein the non-maximum suppression of the different classes performed by the processor comprises operations of: (A) sorting these confidence values; (B) selecting one having a maximum confidence value from the bounding boxes to be a candidate in each round; (C) when a value of an intersection between the candidate and at least one of remaining bounding boxes in the bounding boxes is greater than a predetermined threshold value, setting the confidence value of the at least one of the remaining bounding boxes to zero; and (D) performing and repeating operations (B) to (C) on the remaining bounding boxes in a next round until a last bounding box serves as the candidate].
However, Chen fails to teach this limitation. Xie, working in the same field of endeavor, teaches wherein the non-maximum suppression of the different classes performed by the processor comprises operations of (See Xie, ¶ [0040], To overcome the excess computation and energy problem, a Fast NMS algorithm can be used as follows. The basic idea is to introduce a filter between step 1 and step 2 of the above recited algorithm as step 1a, which vastly reduces the number of computations required by pre-emptively removing unnecessary bounding boxes from further processing): (A) sorting these confidence values (See Xie, ¶ [0037], In each run, the remaining scores are sorted); (B) selecting one having a maximum confidence value from the bounding boxes to be a candidate in each round (See Xie, ¶ [0037], In each run, the remaining scores are sorted and the box having the highest score is selected for further processing); (C) when a value of an intersection between the candidate and at least one of remaining bounding boxes in the bounding boxes is greater than a predetermined threshold value, setting the confidence value of the at least one of the remaining bounding boxes to zero (See Xie, ¶ [0037], With the remaining boxes, the intersection over union (IoU) for the boxes is calculated, with scores of boxes having a calculated result greater than a threshold (in this example, the threshold is set to 0.7) set to 0. If a plurality of boxes remain, the remaining boxes are passed to a second run where they are again sorted, the high score selected, and IoU calculations done); and (D) performing and repeating operations (B) to (C) on the remaining bounding boxes in a next round until a last bounding box serves as the candidate (See Xie, ¶ [0037], The process continues until there does not remain a plurality of boxes having a non-zero score as shown). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chen with this non-maximum suppression based on the method of Xie. The suggestion/motivation would have been to vastly reduce the computation and energy required for processing (See Xie, ¶¶ [0001–0008, 0040]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Xie with Chen to obtain the invention as specified in claim 5.
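A minimal, illustrative sketch of the greedy suppression loop described in steps (A) through (D) might look like the following (this is not code from Xie; Xie's Fast NMS pre-filter is not reproduced, and the overlap test uses IoU as in Xie ¶ [0037]):

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, threshold=0.7):
    """Greedy non-maximum suppression following steps (A)-(D):
    (A)/(B) the highest remaining confidence becomes the round's candidate,
    (C) confidences of boxes overlapping the candidate beyond the
    threshold are set to zero,
    (D) the process repeats until no box with non-zero confidence remains.
    Returns indices of the kept boxes."""
    scores = list(scores)
    keep = []
    while scores and max(scores) > 0:
        candidate = max(range(len(scores)), key=lambda i: scores[i])
        keep.append(candidate)
        for i in range(len(scores)):
            if i != candidate and scores[i] > 0 and \
                    iou(boxes[candidate], boxes[i]) > threshold:
                scores[i] = 0.0  # (C): suppress the overlapping box
        scores[candidate] = 0.0  # candidate consumed for later rounds
    return keep
```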
Regarding claim 6, claim 6 is rejected on the same grounds as claim 1; the arguments presented above for claim 1 are equally applicable to claim 6, and the limitations similar to those of claim 1 are not repeated herein but are incorporated by reference. Furthermore, Chen teaches a radar object recognition method (See Chen, ¶ [0220], FIG. 13 illustrates a flowchart of an example method 1300 to perform object detection by the radar detector 1110, in accordance with at least one aspect described in the present disclosure).

Regarding claim 8, claim 8 is rejected on the same grounds as claim 3; the arguments presented above for claim 3 are equally applicable to claim 8, and the limitations similar to those of claim 3 are not repeated herein but are incorporated by reference.

Regarding claim 9, claim 9 is rejected on the same grounds as claim 4; the arguments presented above for claim 4 are equally applicable to claim 9, and the limitations similar to those of claim 4 are not repeated herein but are incorporated by reference.

Regarding claim 10, claim 10 is rejected on the same grounds as claim 5; the arguments presented above for claim 5 are equally applicable to claim 10, and the limitations similar to those of claim 5 are not repeated herein but are incorporated by reference.

Regarding claim 11, claim 11 is rejected on the same grounds as claim 1; the arguments presented above for claim 1 are equally applicable to claim 11, and the limitations similar to those of claim 1 are not repeated herein but are incorporated by reference. Furthermore, Chen teaches a non-transitory computer readable medium storing a plurality of instructions for commanding a computer to execute a radar object recognition method (See Chen, ¶ [0191], The radar device 1102 may include a radar processor 1104 and a radar detector 1110 that may generate an error value 1120. See also FIG. 3, 309 Radar Processor).
Regarding claim 13, claim 13 is rejected on the same grounds as claim 3; the arguments presented above for claim 3 are equally applicable to claim 13, and the limitations similar to those of claim 3 are not repeated herein but are incorporated by reference.

Regarding claim 14, claim 14 is rejected on the same grounds as claim 4; the arguments presented above for claim 4 are equally applicable to claim 14, and the limitations similar to those of claim 4 are not repeated herein but are incorporated by reference.

Regarding claim 15, claim 15 is rejected on the same grounds as claim 5; the arguments presented above for claim 5 are equally applicable to claim 15, and the limitations similar to those of claim 5 are not repeated herein but are incorporated by reference.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Kaufhold (US 20180260688 A1) teaches a system having an object recognizer module, loaded on a portable processor, that includes a deep learning network trained on multiple SAR image chips. Each synthetic aperture radar (SAR) image chip has been paired with at least one of multiple semantic labels. The portable processor receives from the SAR sensor a stream of SAR image data (126) reflected from a target object and provides the received stream of SAR image data to the object recognizer module. The object recognizer module is programmed to recognize the target object in the received stream of SAR image data by invoking the trained deep learning network to process the received stream of SAR image data into a recognized semantic label (120) corresponding to the target object. The recognized semantic label is one of multiple semantic labels. The portable processor provides the recognized semantic label to an operational control module onboard the flying vehicle. Popov et al.
(US 20210156960 A1) teaches that, in various examples, a deep neural network (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three-dimensional (3D) space, in both highway and urban scenarios. RADAR detections may be accumulated, ego-motion-compensated, orthographically projected, and fed into a neural network. The neural network may include a common trunk with a feature extractor and several heads that predict different outputs, such as a class confidence head that predicts a confidence map and an instance regression head that predicts object instance data for detected objects. The outputs may be decoded, filtered, and/or clustered to form bounding shapes identifying the location, size, and/or orientation of detected object instances. The detected object instances may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DION J SATCHER, whose telephone number is (703) 756-5849. The examiner can normally be reached Monday through Thursday, 5:30 am to 2:30 pm, and Friday, 5:30 am to 9:30 am PST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henok Shiferaw, can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DION J SATCHER/
Patent Examiner, Art Unit 2676

/Henok Shiferaw/
Supervisory Patent Examiner, Art Unit 2676

Prosecution Timeline

Feb 21, 2023: Application Filed
Apr 17, 2025: Non-Final Rejection — §103
Jul 15, 2025: Response Filed
Aug 21, 2025: Final Rejection — §103
Nov 18, 2025: Request for Continued Examination
Dec 01, 2025: Response after Non-Final Action
Mar 06, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586218: MOTION ESTIMATION WITH ANATOMICAL INTEGRITY (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579787: INSTRUMENT RECOGNITION METHOD BASED ON IMPROVED U2 NETWORK (granted Mar 17, 2026; 2y 5m to grant)
Patent 12573066: Depth Estimation Using a Single Near-Infrared Camera and Dot Illuminator (granted Mar 10, 2026; 2y 5m to grant)
Patent 12555263: SYSTEMS AND METHODS FOR TWO-STAGE OBJECTION DETECTION (granted Feb 17, 2026; 2y 5m to grant)
Patent 12548140: DETERMINING PROCESS DEVIATIONS THROUGH VIDEO ANALYSIS (granted Feb 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 85%
With Interview (+14.2%): 99%
Median Time to Grant: 3y 0m
PTA Risk: High
Based on 39 resolved cases by this examiner. Grant probability derived from career allow rate.
