Prosecution Insights
Last updated: April 19, 2026
Application No. 18/519,292

METHOD FOR OPERATING A DRIVING ASSISTANCE SYSTEM FOR A MOTOR VEHICLE, APPARATUS, DRIVING ASSISTANCE SYSTEM, MOTOR VEHICLE, AND COMPUTER PROGRAM

Final Rejection (§103)

Filed: Nov 27, 2023
Examiner: SARWAR, BABAR
Art Unit: 3667
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Kia Corporation
OA Round: 2 (Final)

Grant Probability: 86% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Grants 86% — above average

Career Allow Rate: 86% (893 granted / 1043 resolved; +33.6% vs TC average)
Interview Lift: +20.0% (allow rate across resolved cases with an interview vs. without)
Typical Timeline: 2y 7m average prosecution; 27 applications currently pending
Career History: 1070 total applications across all art units
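The headline figures above can be reproduced from the stated career counts. A quick sanity check (the "+33.6% vs TC avg" delta is taken as given from the dashboard and treated as percentage points):

```python
# Reproduce the examiner's career allow rate from the counts shown above.
granted = 893    # applications granted
resolved = 1043  # total resolved applications

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 85.6%, displayed rounded as 86%

# The Tech Center average implied by the "+33.6% vs TC avg" delta.
tc_avg = allow_rate - 0.336
print(f"Implied TC 3600 average: {tc_avg:.1%}")  # 52.0%
```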

Statute-Specific Performance

§101: 10.8% (-29.2% vs TC avg)
§103: 40.3% (+0.3% vs TC avg)
§102: 27.1% (-12.9% vs TC avg)
§112: 12.1% (-27.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 1043 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-20 are presented for examination. Claims 1-20 are rejected.

Response to Arguments

Applicant’s arguments with respect to claim(s) 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference (Giovanardi et al. (US Pub. No.: 2025/0065902 A1: hereinafter “Giovanardi”)) applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Speigle in view of Giovanardi et al. (US Pub. No.: 2025/0065902 A1: hereinafter “Giovanardi”).

Consider claims 1, 12, and 15: Speigle teaches an apparatus (Fig. 1 elements 100-190, “The processing system 110…”), a driving assistance system (Fig. 1 elements 100-190, “The processing system 110 is a set of one or more electronic components in the vehicle 100 that take first data as input and provide second data as output based on processing the first data…”), and a method for operating a driving assistance system for a motor vehicle (See Speigle, e.g., “…A processing system comprises a processor and a memory…receive a polarimetric image from a polarimetric camera sensor, and to identify a road surface in the received image based on a vehicle location, an orientation of the camera sensor, and a vehicle pose…upon identifying, in the polarimetric image, polarized light reflections from the identified road surface…remove the identified polarized light reflections from the polarimetric image…generating an updated polarimetric image…the identified polarized light reflections are ignored at de-mosaicking, and to identify a road feature including a lane marking based on the updated polarimetric image…” of Abstract, ¶ [0005]-¶ [0007], ¶ [0014], ¶ [0018], ¶ [0027], and Fig. 1 elements 100-190, and Figs. 3A-B steps 300-365), the method comprising:

receiving, by a processor (e.g., “…The processing system 110 receives data from vehicle 100 sensors 130…” of Figs. 3A-B steps 300-365), first data (e.g., “…a road surface 170…the lane marking 180…a water puddle 190…”, of Fig. 1 elements 100-190) configured to indicate an announcement of at least one of a driving situation and a road situation being expected to be upcoming for or ahead of the motor vehicle from a data source (the instant application clearly states, as taught in ¶ [0052], the data source may be implemented on-board, i.e., in the motor vehicle 10 itself, or may be received from a remote data source, such as a data base, another vehicle, or the like, using a communication interface, V2V, V2X, or the like) (See Speigle, e.g., “…receive polarimetric image data from one or more polarimetric camera sensors 130 directed toward an exterior of the vehicle 100…the processing system 110 identifies the road surface 170 in the received polarimetric image…identify pixels in the received image which correspond to the point on the road surface 170…determines that the received polarimetric image is either oversaturated or undersaturated…update the polarimetric image by removing the polarized light reflections in the received polarimetric image…determine that an area on the road surface 170 is wet upon determining that, e.g., a ratio of reflected light intensity I.sub.R to total light intensity I.sub.total of a plurality of image pixels corresponding to the area exceeds a threshold…” of ¶ [0027], ¶ [0039]-¶ [0040], ¶ [0052]-¶ [0055], ¶ [0069]-¶ [0072], ¶ [0075], ¶ [0078]-¶ [0089], and Fig. 1 elements 100-190, and Figs. 3A-B steps 300-365);

determining, by the processor, at least one parameter of at least one detection device of the driving assistance system based on the first data (See Speigle, e.g., “…each of the optoelectronic components 210 of the image sensing device 200 may detect light that has a specific polarization direction…an optoelectronic component 210 may detect light with a polarization direction of 90 degrees…generate an image based on outputs of the optoelectronic components 210…determine that an area on the road surface 170 is wet upon determining that, e.g., a ratio of reflected light intensity I.sub.R to total light intensity I.sub.total of a plurality of image pixels corresponding to the area exceeds a threshold…” of ¶ [0027], ¶ [0039]-¶ [0040], ¶ [0052]-¶ [0055], ¶ [0069]-¶ [0072], ¶ [0075], ¶ [0078]-¶ [0089], and Fig. 1 elements 100-190, and Figs. 3A-B steps 300-365).

Speigle further teaches and providing, by the processor, the at least one parameter for preconditioning the at least one detection device according to the first data (See Speigle, e.g., “…each of the optoelectronic components 210 of the image sensing device 200 may detect light that has a specific polarization direction…an optoelectronic component 210 may detect light with a polarization direction of 90 degrees…generate an image based on outputs of the optoelectronic components 210…determines whether a road feature, e.g., a lane marking 180, is detected…determines that a road feature such as a lane marking 180 is detected…causes an action based on the detected road feature…actuate a vehicle 100 steering actuator 120 to keep a vehicle 100 within a lane based on the detected lane marking 180, e.g., at a left and/or right side of the lane…” of ¶ [0027], ¶ [0039]-¶ [0040], ¶ [0052]-¶ [0055], ¶ [0069]-¶ [0072], ¶ [0075], ¶ [0078]-¶ [0089], and Fig. 1 elements 100-190, and Figs. 3A-B steps 300-365).
However, SAITO does not explicitly teach a data source, the at least one detection device being different from the data source. In an analogous field of endeavor, Giovanardi teaches a data source (Fig. 1 elements 106, 112-122), the at least one detection device (e.g., “…A vehicle 102, is configured to gather (104) road data (e.g., using one or more sensors (e.g., wheel accelerometers, body accelerometers, IMUs, etc.)) and determine a road profile 108 based on that road data using one or more microprocessors…”, of Fig. 1 elements 104-126) being different from the data source (See Giovanardi, e.g., “…the one or more processors may communicate with one or more servers from which the one or more processors may access road segment information…one or more servers may include one more server processors configured to communicate in two-way communication with one or more vehicles…receive road profile information from the one or more vehicles, and store and/or utilize that road profile information to form road segment information…send reference road profile information to one or more vehicles, such that a vehicle may employ terrain-based localization…one or more vehicle systems may be controlled or one or more parameters of the one and/or more vehicle systems may be adjusted based on forward looking road profile information…”, of ¶ [0188], ¶ [0191], ¶ [0198], and Fig. 1 elements 100-126). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine “…A processing system comprises a processor and a memory…receive a polarimetric image from a polarimetric camera sensor, and to identify a road surface in the received image based on a vehicle location, an orientation of the camera sensor, and a vehicle pose…upon identifying, in the polarimetric image, polarized light reflections from the identified road surface…remove the identified polarized light reflections from the polarimetric image…generating an updated polarimetric image…the identified polarized light reflections are ignored at de-mosaicking, and to identify a road feature including a lane marking based on the updated polarimetric image…”, as disclosed in Speigle with “a data source, the at least one detection device being different from the data source”, as taught in Giovanardi with a reasonable expectation of success to yield a system, method for robustly, seamlessly, and expeditiously recognizing, and providing sufficient accuracy or resolution for road features, thereby, mitigating, avoiding collisions.

Consider claim 2: The combination of Speigle, Giovanardi teaches everything claimed as implemented above in the rejection of claim 1. In addition, Speigle teaches wherein the preconditioning of the at least one detection device is performed by the processor (See Speigle, e.g., “…determine that an area on the road surface 170 is wet upon determining that, e.g., a ratio of reflected light intensity I.sub.R to total light intensity I.sub.total of a plurality of image pixels corresponding to the area exceeds a threshold…” of ¶ [0027], ¶ [0039]-¶ [0040], ¶ [0052]-¶ [0055], ¶ [0069]-¶ [0072], ¶ [0075], ¶ [0078]-¶ [0089], and Fig. 1 elements 100-190, and Figs. 3A-B steps 300-365) before the motor vehicle reaches the at least one of the announced driving situation and the announced road situation (See Speigle, e.g., “…determines whether a road feature, e.g., a lane marking 180, is detected…determines that a road feature such as a lane marking 180 is detected…causes an action based on the detected road feature…actuate a vehicle 100 steering actuator 120 to keep a vehicle 100 within a lane based on the detected lane marking 180, e.g., at a left and/or right side of the lane…” of ¶ [0027], ¶ [0039]-¶ [0040], ¶ [0052]-¶ [0055], ¶ [0069]-¶ [0072], ¶ [0075], ¶ [0078]-¶ [0089], and Fig. 1 elements 100-190, and Figs. 3A-B steps 300-365).

Consider claims 3, 13, and 16: The combination of Speigle, Giovanardi teaches everything claimed as implemented in the rejection of claims 1, 12, and 15 above. In addition, Speigle teaches wherein the preconditioning of the at least one detection device includes proactively adjusting, by the processor (e.g., “…modify the camera parameter may include instructions to increase a camera exposure time upon determining that the received image is undersaturated…update the polarimetric image by removing the polarized light reflections in the received polarimetric image…” of Figs. 3A-B steps 300-365), the at least one detection device using the at least one parameter prior to driving situation-related or road situation-related data processing based on the at least one detection device (See Speigle, e.g., “…each of the optoelectronic components 210 of the image sensing device 200 may detect light that has a specific polarization direction…an optoelectronic component 210 may detect light with a polarization direction of 90 degrees…generate an image based on outputs of the optoelectronic components 210…determine that an area on the road surface 170 is wet upon determining that, e.g., a ratio of reflected light intensity I.sub.R to total light intensity I.sub.total of a plurality of image pixels corresponding to the area exceeds a threshold…” of ¶ [0027], ¶ [0039]-¶ [0040], ¶ [0052]-¶ [0055], ¶ [0069]-¶ [0072], ¶ [0075], ¶ [0078]-¶ [0089], and Fig. 1 elements 100-190, and Figs. 3A-B steps 300-365).

Consider claim 4: The combination of Speigle, Giovanardi teaches everything claimed as implemented in the rejection of claim 1 above. In addition, Speigle teaches wherein the at least one parameter is configured to proactively adjust detection or perception quality (e.g., “…modify the camera parameter may include instructions to increase a camera exposure time upon determining that the received image is undersaturated…update the polarimetric image by removing the polarized light reflections in the received polarimetric image…” of Figs. 3A-B steps 300-365) of the at least one detection device to the at least one of the announced driving situation and the announced traffic situation (See Speigle, e.g., “…determines that the received polarimetric image is either oversaturated or undersaturated…update the polarimetric image by removing the polarized light reflections in the received polarimetric image…” of ¶ [0027], ¶ [0039]-¶ [0040], ¶ [0052]-¶ [0055], ¶ [0069]-¶ [0072], ¶ [0075], ¶ [0078]-¶ [0089], and Fig. 1 elements 100-190, and Figs. 3A-B steps 300-365).

Consider claim 5: The combination of Speigle, Giovanardi teaches everything claimed as implemented in the rejection of claim 1 above. In addition, Speigle teaches wherein the at least one parameter is configured to overrule a corresponding at least one default parameter of the at least one detection device (e.g., “…decrease a camera exposure time upon determining that the received image is oversaturated…modify the camera parameter may include instructions to increase a camera exposure time upon determining that the received image is undersaturated…update the polarimetric image by removing the polarized light reflections in the received polarimetric image…” of Figs. 3A-B steps 300-365).

Consider claim 6: The combination of Speigle, Giovanardi teaches everything claimed as implemented in the rejection of claim 1 above. In addition, Speigle teaches wherein the first data is obtained based on at least one of object recognition, traffic sign recognition, road map data, Vehicle-To-Everything (V2X) information, Vehicle-To-Vehicle (V2V) information, a traffic information service, a weather information service, satellite navigation position data, detection device data from another motor vehicle, and a database (See Speigle, e.g., “…The processing system 110 is generally arranged for communications on a vehicle communication network that can include a bus in the vehicle such as a controller area network (CAN) or the like, and/or other wired and/or wireless mechanisms…the processing system 110 may transmit messages to various devices in the vehicle and/or receive messages from the various devices, e.g., an actuator 120, an HMI 140, etc.…” of ¶ [0027], ¶ [0036]-¶ [0040], ¶ [0052]-¶ [0055], ¶ [0069]-¶ [0072], ¶ [0075], ¶ [0078]-¶ [0089], and Fig. 1 elements 100-190, and Figs. 3A-B steps 300-365).

Consider claims 7, 14, and 17: The combination of Speigle, Giovanardi teaches everything claimed as implemented in the rejection of claims 1, 12, and 15 above.
In addition, Speigle teaches wherein, prior to the determining of the at least one parameter, the method further includes: receiving, by the processor, second data obtained by use of at least one on-board resource of the motor vehicle itself (e.g., “…upon determining that a noise ratio of the updated image exceeds a threshold, (i) ignore the received polarimetric image, (ii) receive a second polarimetric image, and (iii) identify the road feature based on an updated second polarimetric image…” of Figs. 3A-B steps 300-365) and configured to indicate confirmation of the at least one of the announced driving situation and the announced road situation (See Speigle, e.g., “…determines whether the received image is (i) oversaturated or undersaturated or (ii) neither oversaturated nor undersaturated…calculate a histogram of the received image and determine whether the image is oversaturated or undersaturated based on the determined histogram…determines that the received polarimetric image is either oversaturated or undersaturated, then the process 300 returns to the block 315; otherwise the process 300 proceeds to a block 330…” of ¶ [0027], ¶ [0036]-¶ [0040], ¶ [0052]-¶ [0055], ¶ [0069]-¶ [0072], ¶ [0075], ¶ [0078]-¶ [0089], and Fig. 1 elements 100-190, and Figs. 3A-B steps 300-365).

Consider claim 8: The combination of Speigle, Giovanardi teaches everything claimed as implemented in the rejection of claim 7 above. In addition, Speigle teaches wherein the second data is obtained based on at least one of image recognition data, object recognition data (See Speigle, e.g., “…each of the optoelectronic components 210 of the image sensing device 200 may detect light that has a specific polarization direction…an optoelectronic component 210 may detect light with a polarization direction of 90 degrees…generate an image based on outputs of the optoelectronic components 210…determine that an area on the road surface 170 is wet upon determining that, e.g., a ratio of reflected light intensity I.sub.R to total light intensity I.sub.total of a plurality of image pixels corresponding to the area exceeds a threshold…” of ¶ [0027], ¶ [0039]-¶ [0040], ¶ [0052]-¶ [0055], ¶ [0069]-¶ [0072], ¶ [0075], ¶ [0078]-¶ [0089], and Fig. 1 elements 100-190, and Figs. 3A-B steps 300-365), a sensor operating state data, rain sensor data (e.g., “…The processing system 110 may estimate a maneuver of the second vehicle 100 upon predicting that the second vehicle does not detect the lane marking 180 in the wet area, e.g., an unexpected lane departure due to not detecting the lane marking 180. Thus, the vehicle 100 processing system 110 may be programmed to perform an action, e.g., increasing distance to the second vehicle by reducing speed, changing lane, etc., upon predicting an unexpected maneuver of the second vehicle…” of Figs. 3A-B steps 300-365), ambient light sensor data, changed motor vehicle driving data, a lowered motor vehicle speed in approaching the at least one of the announced driving situation and the announced road situation, and current satellite navigation position data of the motor vehicle (See Speigle, e.g., “…The vehicle 100 processing system 110 may receive vehicle 100 location data from a vehicle 100 location sensor 130, e.g., a GPS sensor 130. Vehicle 100 location coordinates include longitudinal, lateral, and elevation of a vehicle 100 reference point 150 relative to a coordinate system 160, e.g., GPS coordinate system…” of ¶ [0027], ¶ [0039]-¶ [0040], ¶ [0052]-¶ [0055], ¶ [0069]-¶ [0072], ¶ [0075], ¶ [0078]-¶ [0089], and Fig. 1 elements 100-190, and Figs. 3A-B steps 300-365).

Consider claim 9: The combination of Speigle, Giovanardi teaches everything claimed as implemented in the rejection of claim 1 above. In addition, Speigle teaches further including: updating, by the processor, at least one of the first data or the data source underlying the first data or the at least one parameter based on second data obtained by use of at least one on-board resource of the motor vehicle itself (See Speigle, e.g., “…upon determining that a noise ratio of the updated image exceeds a threshold, (i) ignore the received polarimetric image, (ii) receive a second polarimetric image, and (iii) identify the road feature based on an updated second polarimetric image…” of ¶ [0027], ¶ [0039]-¶ [0040], ¶ [0052]-¶ [0055], ¶ [0069]-¶ [0072], ¶ [0075], ¶ [0078]-¶ [0089], and Fig. 1 elements 100-190, and Figs. 3A-B steps 300-365).

Consider claim 10: The combination of Speigle, Giovanardi teaches everything claimed as implemented in the rejection of claim 1 above. In addition, Speigle teaches further including: before the motor vehicle leaves the at least one of the driving situation and the road situation again or upon leaving the at least one of the driving situation and the road situation, causing, by the processor, the provided at least one parameter to be replaced by a corresponding at least one default parameter (e.g., “…The processing system 110 may estimate a maneuver of the second vehicle 100 upon predicting that the second vehicle does not detect the lane marking 180 in the wet area, e.g., an unexpected lane departure due to not detecting the lane marking 180. Thus, the vehicle 100 processing system 110 may be programmed to perform an action, e.g., increasing distance to the second vehicle by reducing speed, changing lane, etc., upon predicting an unexpected maneuver of the second vehicle…” of Figs. 3A-B steps 300-365).

Consider claim 11: The combination of Speigle, Giovanardi teaches everything claimed as implemented in the rejection of claim 1 above. In addition, Speigle teaches wherein the at least one of the driving situation and the road situation is selected from a road construction site, a road tunnel, a weather condition, a parking garage, an intersection, a railroad crossing, a speed bump, a changing road condition, and a border crossing (See Speigle, e.g., “…each of the optoelectronic components 210 of the image sensing device 200 may detect light that has a specific polarization direction…an optoelectronic component 210 may detect light with a polarization direction of 90 degrees…generate an image based on outputs of the optoelectronic components 210…determine that an area on the road surface 170 is wet upon determining that, e.g., a ratio of reflected light intensity I.sub.R to total light intensity I.sub.total of a plurality of image pixels corresponding to the area exceeds a threshold…” of ¶ [0027], ¶ [0039]-¶ [0040], ¶ [0052]-¶ [0055], ¶ [0069]-¶ [0072], ¶ [0075], ¶ [0078]-¶ [0089], and Fig. 1 elements 100-190, and Figs. 3A-B steps 300-365).

Consider claims 18, 19, and 20: The claims 18-20 are analyzed, and thus rejected with respect to the similar reasonings, and analysis as implemented in the rejection of claims 1, 12, and 15.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Bharwani (US Pub. No.: 2023/0209206 A1) teaches “A system for a vehicle may include a camera configured to collect image data. The camera may have an adjustable exposure. The system may also include a controller in communication with the camera. The controller may be configured to adjust the camera exposure in a constant ambient light environment based upon an anticipated change in ambient light determined from a location of the camera.” Hsu et al. (US Pub. No.: 2023/0099029 A1) teaches “A system for implementing adaptive light distributions for an autonomous vehicle (comprises the autonomous vehicle, a control device, and a headlight associated with the autonomous vehicle. The control device receives sensor data from sensors of the autonomous vehicle, where the sensor data comprises an image of one or more objects on a road traveled by the autonomous vehicle. The control device determines that a light condition level on a particular portion of the image is less than a threshold light level. The control device adjusts the headlight to increase illumination on a particular part of the road that is shown in the particular portion of the image.”

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BABAR SARWAR whose telephone number is (571)270-5584. The examiner can normally be reached on Mon-Fri 9:00 AM-5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Faris S. Almatrahi, can be reached at (313)446-4821. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BABAR SARWAR/
Primary Examiner, Art Unit 3667
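For orientation on the technical ground of the rejection: the examiner repeatedly maps two mechanisms from Speigle onto the claims, (1) flagging a wet road area when the ratio of reflected light intensity I_R to total intensity I_total exceeds a threshold, and (2) adjusting camera exposure when a histogram shows the image is over- or undersaturated. The sketch below illustrates those two tests only as described in the quotations above; it is not Speigle's implementation, and the array shapes, threshold value, histogram fractions, and function names are all assumptions made for illustration.

```python
import numpy as np

WET_RATIO_THRESHOLD = 0.6  # assumed value; the reference only says "exceeds a threshold"

def is_area_wet(reflected: np.ndarray, total: np.ndarray) -> bool:
    """Flag a road-surface area as wet when the mean ratio of reflected
    light intensity I_R to total intensity I_total exceeds a threshold,
    per the test quoted from Speigle in the rejection above."""
    ratio = reflected / np.maximum(total, 1e-9)  # guard against divide-by-zero
    return bool(np.mean(ratio) > WET_RATIO_THRESHOLD)

def adjust_exposure(exposure_ms: float, histogram: np.ndarray) -> float:
    """Increase exposure when the image histogram indicates undersaturation,
    decrease it when oversaturated (the camera-parameter modification the
    examiner reads onto the claimed 'preconditioning' step)."""
    dark_frac = histogram[:32].sum() / histogram.sum()    # pixels near black
    bright_frac = histogram[-32:].sum() / histogram.sum() # pixels near white
    if dark_frac > 0.5:    # undersaturated
        return exposure_ms * 2.0
    if bright_frac > 0.5:  # oversaturated
        return exposure_ms * 0.5
    return exposure_ms
```

Whether these prior-art mechanisms actually reach the claimed preconditioning based on an announced upcoming situation, rather than reacting to the current image, is the substantive dispute the response would need to address.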

Prosecution Timeline

Nov 27, 2023
Application Filed
May 30, 2025
Non-Final Rejection — §103
Sep 03, 2025
Response Filed
Oct 22, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600370 - VEHICULAR CONTROL SYSTEM (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602800 - TIRE STATE ESTIMATION METHOD (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602933 - VEHICULAR SENSING SYSTEM WITH OCCLUSION ESTIMATION FOR USE IN CONTROL OF VEHICLE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12594947 - DISPLAY CONTROL DEVICE, DISPLAY CONTROL METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586465 - METHOD AND APPARATUS FOR ASSISTING RIGHT TURN OF AUTONOMOUS VEHICLE BASED ON UWB COMMUNICATION AND V2X COMMUNICATION AT INTERSECTION (granted Mar 24, 2026; 2y 5m to grant)
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 86% (99% with interview, a +20.0% lift)
Median Time to Grant: 2y 7m
PTA Risk: Moderate

Based on 1043 resolved cases by this examiner. Grant probability derived from career allow rate.
