Prosecution Insights
Last updated: April 19, 2026
Application No. 18/965,590

Vehicle Object Detection Data Processing Device and Method

Non-Final OA: §103, §112
Filed
Dec 02, 2024
Examiner
ROBERSON, JASON R
Art Unit
3669
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Kia Corporation
OA Round
1 (Non-Final)
74%
Grant Probability
Favorable
1-2
OA Rounds
2y 10m
To Grant
97%
With Interview

Examiner Intelligence

Grants 74% — above average
74%
Career Allow Rate
275 granted / 369 resolved
+22.5% vs TC avg
Strong +23% interview lift
+22.8%
Interview Lift
based on resolved cases with an interview vs. without
Typical timeline
2y 10m
Avg Prosecution
25 currently pending
Career history
394
Total Applications
across all art units

Statute-Specific Performance

§101
11.7%
-28.3% vs TC avg
§103
45.6%
+5.6% vs TC avg
§102
9.4%
-30.6% vs TC avg
§112
30.0%
-10.0% vs TC avg
Black line = Tech Center average estimate • Based on career data from 369 resolved cases
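
The per-statute deltas above can be reconciled with a line or two of arithmetic; a minimal sketch follows (the rates and deltas are copied from the chart above, and treating each delta as "examiner rate minus TC average" is an assumption about how the tool presents them):

    # Recover the implied Tech Center (TC) average behind each "vs TC avg" delta,
    # assuming delta = examiner rate - TC average (an assumption about the display).
    examiner_rate = {"101": 11.7, "103": 45.6, "102": 9.4, "112": 30.0}   # percent
    delta_vs_tc   = {"101": -28.3, "103": 5.6, "102": -30.6, "112": -10.0}

    for statute in examiner_rate:
        tc_avg = examiner_rate[statute] - delta_vs_tc[statute]
        print(f"§{statute}: examiner {examiner_rate[statute]:.1f}% vs TC avg {tc_avg:.1f}%")
    # Under this reading, every statute implies the same TC average estimate of 40.0%,
    # consistent with a single baseline (the "black line") rather than per-statute averages.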

Office Action

§103 §112
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Status of the Application Claims 1-20 have been examined in this application filed on or after March 16, 2013, and are being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. This communication is the First Office Action on the Merits. Key to Interpreting this Office Action For readability, all claim language has been bolded. Citations from prior art are provided at the end of each limitation in parentheses. Any further explanations that were deemed necessary by the Examiner are provided at the end of each claim limitation. The Applicant is encouraged to contact the Examiner directly if there are any questions or concerns regarding the current Office Action. Claim Rejections - 35 USC § 112 The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention. Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the applicant regards as the invention. In regards to claims 1 and 11: Applicant claims generate, based on a distance value and a virtual reference point on a ground, a second range image, wherein the virtual reference point is transformed from the reference point, wherein the distance value represents a distance between a pixel of the first range image and the ground, and wherein the virtual reference point is a center of the ground; Applicant's disclosure describes generating a second image by determining a distance value for each pixel of the first range image relative to the ground. However, Applicant effectively claims generating an image from a singular pixel, which is consistent with neither the plain meaning of the term image nor the image described in Applicant's disclosure. Therefore, one of ordinary skill would not understand the metes and bounds of the claimed second range image because the metes and bounds of the pixel requirements are unclear and indefinite. Further, one of ordinary skill would be unable to clearly identify whether the claimed “distance value” is for one pixel, any pixel, each pixel, or a set/plurality of pixels of the first image (that inherently includes more than one pixel) used to form the second range image. Corrective action or clarification is required.
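
To illustrate the two readings contrasted above, here is a minimal NumPy sketch of the "each pixel" interpretation; the array layout, flat-ground simplification, and function name are illustrative assumptions of this summary, not the Applicant's disclosed method:

    import numpy as np

    # Hypothetical "each pixel" reading: a second range image built from a
    # per-pixel distance to an assumed flat ground plane (z = ground_z).
    def second_range_image(first_range_points: np.ndarray, ground_z: float = 0.0) -> np.ndarray:
        """first_range_points: H x W x 3 array holding the (x, y, z) point behind each
        pixel of the first range image, in coordinates centered on an assumed virtual
        reference point on the ground."""
        # One distance value per pixel: distance from each pixel's point to the ground plane.
        return np.abs(first_range_points[..., 2] - ground_z)

    # e.g., pts = np.zeros((64, 1024, 3)); img2 = second_range_image(pts)  # shape (64, 1024)
    # Under the narrower "a pixel" reading flagged in the rejection, only a single
    # distance value would exist, which is the ambiguity the examiner identifies.
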
Further in regards to claims 1 and 11: Applicant claims generate a third range image, wherein the third range image comprises the extracted first area and second area; determine a third area in the third range image, wherein an object is present in the third area; In view of Applicant's disclosure, the claimed object is the claimed vehicle (see preamble of claims 1 and 11), or at the very least, portions of the claimed vehicle. There does not appear to be any Applicant support for detecting any other objects, types of objects, or any other vehicles. This therefore generates indefiniteness because Applicant appears to be using two different terms to describe the same structure within the claim. Corrective action or clarification is required. Further in regards to claims 1 and 11: Applicant claims determine a third area in the third range image, wherein an object is present in the third area; generate, based on the third area, vehicle masking data; However, Applicant appears to be missing a step connecting the object to the vehicle masking such that one of ordinary skill would understand whether the claimed vehicle masking includes or excludes the claimed object; this compounds the metes and bounds indefiniteness of the claimed object outlined above. Further, it is unclear to what vehicle Applicant refers. Corrective action or clarification is required. All other dependent claims of the indefinite claims detailed above are also indefinite at least by virtue of depending on the indefinite claims detailed above. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under pre-AIA 35 U.S.C. 103(a) are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 1-3, 9, 11-13 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Ryu et al. (US 20220291390 A1), herein Ryu, in view of Gangundi et al. (US 20210333373 A1), herein Gangundi. In regards to Claim 1, as best understood, Ryu discloses the following: 1.
An apparatus for controlling autonomous driving of a vehicle, (see at least [0058], [0063] “host vehicle”) the apparatus comprising: a sensor mounted at a reference point on the vehicle, the sensor configured to acquire sensing data; (see at least [0058]-[0059] “LiDAR sensor 310”) one or more processors; (see at least [0063]-[0066], [0141] “preprocessing unit 320”) and a memory storing one or more programs, (see at least [0142] “computer-readable recording medium”) that when executed by the one or more processors, are configured to cause the apparatus to: generate, based on the reference point and the sensing data, a first cluster of points; (see at least [0058] “point cloud”) generate, based on the first cluster of points, a first range image; (see at least [0066] “clustering unit 340 may cluster the LiDAR points preprocessed by the preprocessing unit 320 into meaningful units according to predetermined criteria… examples of the clustering unit 340, there are a 2D clustering unit… 2D clustering unit is a unit that performs clustering in units of points or a specific structure by projecting data onto the x-y plane without considering height information”, see also FIGs 19A, 19B, 19C) generate, based on a distance value and a virtual reference point on a ground, a second range image, wherein the virtual reference point is transformed from the reference point, wherein the distance value represents a distance between a pixel of the first range image and the ground, and wherein the virtual reference point is a center of the ground; (see at least [0071] “An object and the ground (or the road surface) may be present within the vehicle driving region, and it may be required to determine whether the LiDAR points are points related to the object (hereinafter referred to as “object points”) or points related to the ground (hereinafter referred to as “ground points”)”, see also FIGs 20A, 20B, 20C, [0021] “distance inspection unit” and [0059] “the distance from the LiDAR sensor 310 to the object or the ground” and Fig. 9, step 232 “lidar points present within predetermined distance centered on vehicle”) extract, based on the first range image and the second range image, a first area and a second area, wherein a contact point with the ground is absent in the first area, and wherein a contact point with the ground is present in the second area; (see at least [0103] “After step 128, a third inclination S3 is obtained using a target point and a neighboring point, among the LiDAR points present in the vehicle driving region (step 130). Here, the target point is a point which is to be inspected, among the LiDAR points acquired by the LiDAR sensor 310, to determine whether the point corresponds to a ground point or corresponds to an object point. Furthermore, the neighboring point is a point that belongs to a layer (hereinafter referred to as a “previous layer”) adjacent to a certain layer to which the target point belongs (hereinafter referred to as a “current layer”).”, see also [0140] “point attribute determination unit 336 may determine whether the target point is a ground point or an object point in response to the result of the comparison by the comparison unit 334”) generate a third range image, wherein the third range image comprises the extracted first area and second area; (see at least Fig. 19A, FIG. 19B and FIG. 
19C and [0146] “difference value between the third inclination S3 and the fourth inclination S4 is compared with the threshold inclination Sth, and whether the LiDAR point corresponds to an object point or corresponds to a ground point is checked using the result of the comparison.”, [0147] “FIG. 19A, FIG. 19B and FIG. 19C show results acquired using the object-tracking methods… points having relatively small gray scale represent object points, and points having relatively great gray scale represent ground points” and FIG. 20B and [0156] “accurately determine whether the LiDAR points are ground points or object points, and thus the number of LiDAR points expressing an object”) As best understood, Ryu discloses the following: determine a third area in the third range image, wherein an object is present in the third area; (see at least [0068] “target object”) generate, based on the third area, vehicle masking data; generate a signal indicating the vehicle masking data; (see at least [0065] “preprocessing unit 320 may remove data pertaining to reflections from the host vehicle. That is, since there is a region that is shielded by the body of the host vehicle according to the mounting position and the field of view of the LiDAR sensor 310, the preprocessing unit 320 may remove data pertaining to reflections from the body of the host vehicle using the reference coordinate system.”) For the sake of compact prosecution, alternative interpretations of the above limitations are also taught by Gangundi. (see at least Fig. 2, steps 202, 204, 208 and [0015] “distinguish between valid environmental sensor measurements (e.g., point clouds 106B, 106C), and those measurements located on, or reflected from, a surface of AV 102 (e.g., self-hits represented by point cloud 106A), a geometric model can be used.”, [0018] “Geometric model 104 may be created (or updated) using a calibration process in which LiDAR data is collected by sensor 103 and used to determine the boundaries of AV 10” and [0019] “Mask 204 can be generated based on geometric model 202. In some implementations, mask 204 may be a matrix used to perform transformations on arrays of collected sensor data, for example, to separate/identify self-hit values.”) Ryu does not explicitly disclose the following, which is taught by Gangundi: and control, based on the signal, autonomous driving of the vehicle. (see at least abstract “identify self-hit data collected by autonomous vehicle (AV) sensors”, [0027] “Autonomous vehicle 402 further includes several mechanical systems that are used to effectuate appropriate motion of the autonomous vehicle 402. For instance, the mechanical systems can include but are not limited to, vehicle propulsion system 430, braking system 432, and steering system 434.”) It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the features of Gangundi with the invention of Ryu, with a reasonable expectation of success, with the motivation of providing solutions for eliminating extraneous sensor signals and the associated data load for autonomous vehicles that require the collection and processing of large quantities of data using various sensor types to perform the functions that are conventionally performed by human drivers. (Gangundi, [0001]-[0002]) In regards to Claim 2, Ryu does not explicitly disclose the following, which is taught by Gangundi: 2.
The apparatus of claim 1, wherein the one or more programs, when executed by the one or more processors, are configured to cause the apparatus to remove noise by applying the vehicle masking data to second sensing data acquired from the sensor. (see at least [0016] “Using the bit mask, self-hit data can be filtered from the bulk of collected sensor data 106.”) It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the features of Gangundi with the invention of Ryu, with a reasonable expectation of success, with the motivation of providing solutions for eliminating extraneous sensor signals and the associated data load for autonomous vehicles that require the collection and processing of large quantities of data using various sensor types to perform the functions that are conventionally performed by human drivers. (Gangundi, [0001]-[0002]) In regards to Claim 3, Ryu discloses the following: 3. The apparatus of claim 2, wherein the one or more programs, when executed by the one or more processors, are configured to cause the apparatus to: generate, based on the second sensing data, a second cluster of points; generate, based on the second cluster of points, a fourth range image; (see at least [0065] “preprocessing unit 320 may remove data pertaining to reflections from the host vehicle” and [0066] “clustering unit 340 may cluster the LiDAR points preprocessed by the preprocessing unit 320 into meaningful units according to predetermined criteria, and may output the clustered LiDAR points to the shape analysis unit 350 (step 140)”) overlap the vehicle masking data on the fourth range image and remove a vehicle area from the fourth range image; (see at least [0066] “2D clustering unit”, see also Figs, 20A-20C) and transform the fourth range image from which the vehicle area has been removed into a third cluster of points. (see previous citations, see also Figs, 20A-20C) In regards to Claim 9, Ryu does not explicitly disclose the following, which is taught by Gangundi: 9. The apparatus of claim 1, wherein the one or more programs, when executed by the one or more processors, are configured to cause the apparatus to generate the first range image by applying spherical projection to the first cluster of points. (see at least [0017] “models may be converted between coordinate systems to facilitate the removal of LiDAR self-hits recorded in the spherical coordinate system. That is, CAD models of the AV may be converted from a Cartesian coordinate system into a spherical coordinate system. By way of example, collected sensor (LiDAR) recorded in spherical coordinates—e.g., having a radial distance (r), azimuth angle (ϕ), and elevation angle (Θ)—can be compared to the spherical model. Those points falling inside the spherical model can be identified as self-hits.”, see also [0022] “spherical coordinate system”) It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the features of Gangundi with the invention of Ryu, with a reasonable expectation of success, with the motivation of providing solutions for eliminating extraneous sensor signals and the associated data load for autonomous vehicles that require the collection and processing of large quantities of data using various sensor types to perform the functions that are conventionally performed by human drivers. 
(Gangundi, [0001]-[0002]) In regards to Claims 11-13 and 19: Claims 11-13 and 19 are the methods performed by the apparatus of claims 1-3 and 9, and are therefore rejected the same or similar to claims 1-3 and 9, above. Claims 4-7 and 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over Ryu in view of Gangundi as applied, in further view of May et al. (US 20140232869 A1) herein May. In regards to Claim 4, Ryu does not explicitly disclose the following, which is taught by May: 4. The apparatus of claim 1, wherein the one or more programs, when executed by the one or more processors, are configured to cause the apparatus to generate, based on a plurality of pieces of consecutive sensing data acquired from the sensor, the vehicle masking data. (see at least [0025] “processes image data of consecutive frames of captured images while the vehicle is moving and, if there is something in the image that looks different than the image neighborhood, but is constant in position and size over the time (in other words, constant between frames of captured image data), then the system may determine that the detected item or "object" or "blob" is indicative of dirt or the like at the lens of the camera.” And Fig. 6 and [0013] “FIG. 6 is an example of the dirt detection system detecting dirt in captured images and masking the dirt”) It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the features of May with the invention of Ryu, with a reasonable expectation of success, with the motivation of lowering or compensating the impairment of dirt on the vision system's camera lenses image processing algorithm that may cause wrong results and potential problems if the system does not detect and recognize the presence of the dirt at the lens. (May, [0019],[0024]) In regards to Claim 5, Ryu does not explicitly disclose the following, which is taught by May: 5. The apparatus of claim 4, wherein the one or more programs, when executed by the one or more processors, are configured to cause the apparatus to: generate a plurality of pieces of vehicle masking candidate data for each of the plurality of pieces of consecutive sensing data; and generate the vehicle masking data by comparing areas corresponding to a portion of the object, wherein the areas overlap in the plurality of pieces of vehicle masking candidate data. (see at least [0025] “processes image data of consecutive frames of captured images while the vehicle is moving and, if there is something in the image that looks different than the image neighborhood, but is constant in position and size over the time (in other words, constant between frames of captured image data), then the system may determine that the detected item or "object" or "blob" is indicative of dirt or the like at the lens of the camera.” And Fig. 6 and [0013] “FIG. 6 is an example of the dirt detection system detecting dirt in captured images and masking the dirt”) It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the features of May with the invention of Ryu, with a reasonable expectation of success, with the motivation of lowering or compensating the impairment of dirt on the vision system's camera lenses image processing algorithm that may cause wrong results and potential problems if the system does not detect and recognize the presence of the dirt at the lens. 
(May, [0019],[0024]) In regards to Claim 6, Ryu discloses the following: 6. The apparatus of claim 5, wherein the one or more programs, when executed by the one or more processors, are configured to cause the apparatus to generate the vehicle masking data by extracting pixels of the areas, (see at least [0065] “preprocessing unit 320 may remove data pertaining to reflections from the host vehicle. That is, since there is a region that is shielded by the body of the host vehicle according to the mounting position and the field of view of the LiDAR sensor 310, the preprocessing unit 320 may remove data pertaining to reflections from the body of the host vehicle using the reference coordinate system.” wherein the areas appear at a rate higher than or equal to a preset rate across the plurality of pieces of masking candidate data. (see Fig. 19B, 19C, 20B, 20C, 21B and [0058] “The LiDAR sensor 310 may acquire a point cloud including a plurality of points related to a region”, inherent to the LIDAR signals collected by LIDAR sensor 310 due to proximity of the body of the host vehicle as compared to all other sensed objects and structures leading to inherently shorter travel times.) For the sake of compact prosecution, Gangundi also teaches this limitation. (see [0017] “models may be converted between coordinate systems to facilitate the removal of LiDAR self-hits recorded in the spherical coordinate system. That is, CAD models of the AV may be converted from a Cartesian coordinate system into a spherical coordinate system. By way of example, collected sensor (LiDAR) recorded in spherical coordinates—e.g., having a radial distance (r), azimuth angle (ϕ), and elevation angle (Θ)—can be compared to the spherical model. Those points falling inside the spherical model can be identified as self-hits.”) It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the features of Gangundi with the invention of Ryu, with a reasonable expectation of success, with the motivation of providing solutions for eliminating extraneous sensor signals and the associated data load for autonomous vehicles that require the collection and processing of large quantities of data using various sensor types to perform the functions that are conventionally performed by human drivers. (Gangundi, [0001]-[0002]) In regards to Claim 7, Ryu, as modified, discloses the following: 7. The apparatus of claim 4, wherein the plurality of pieces of consecutive sensing data are acquired while the vehicle is traveling. (see at least [0058] “The LiDAR sensor 310 may acquire a point cloud including a plurality of points related to a region in which a vehicle provided with the LiDAR sensor 310 (hereinafter referred to as a “host vehicle”) travels (or a region encompassing a traveling region and a surrounding region) (hereinafter referred to as a “vehicle driving region”)”) In regards to Claims 14-17: Claims 14-17 are the methods performed by the apparatus of claims 4-7, and are therefore rejected the same or similar to claims 4-7, above. Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Ryu in view of Gangundi as applied, in further view of Bigio et al. (US 20190383631 A1) herein Bigio. In regards to Claim 8, Ryu suggests the following: 8. 
The apparatus of claim 1, wherein the one or more programs, when executed by the one or more processors, are configured to cause the apparatus to generate, based on multiple pieces of sensing data, the vehicle masking data for each sensor of a plurality of sensors, wherein the multiple pieces of sensing data are acquired from a first sensor, a second sensor, a third sensor, and a fourth sensor of the plurality of sensors, (see at least [0063] “LiDAR sensor 310 is mounted to the host vehicle.” Ryu discloses a LIDAR sensor mounted to the host vehicle. However, Ryu does not disclose a plurality of sensors and/or multiple pieces of sensing data are acquired from a first sensor, a second sensor, a third sensor, and a fourth sensor of the plurality of sensors. However, a mere duplication of parts has no patentable significance unless a new and unexpected result is produced. In re Harza, 274 F.2d 669, 124 USPQ 378 (CCPA 1960). See MPEP 2144.04, VI, B. Duplication of Parts for details. Before the effective filing date of the claimed invention, it would have been obvious for a person having ordinary skill in the art to have duplicated the parts of Ryu, with the motivation of providing redundant sensors and sensor data for the system in case of a sensor failure, and/or with the motivation of generating a larger sensing area. Further, the results of this duplication would have been predictable. Ryu is silent, but Bigio teaches the following: and wherein: the first sensor is mounted on a roof of the vehicle, the second sensor is mounted on a rear surface of the vehicle, the third sensor is mounted on a left surface of the vehicle, and the fourth sensor is mounted on a right surface of the vehicle. (see at least Fig. 2, items 112, 116A-116K and [0039]-[0040]) It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the features of Bigio with the invention of Ryu, with a reasonable expectation of success, with the motivation of providing multiple-plane surfaces that can resize and reshape to automatically and selectively enhance a level of detail in the presentation, (Bigio, Abstract) and/or with the motivation of providing overlapping detection zones may provide redundant sensing, enhanced sensing, and/or provide greater detail in sensing within a particular portion (e.g., zone 216A) of a larger zone. (Bigio, [0043]) In regards to Claim 18: Claim 18 is the method performed by the apparatus of claim 8 and is therefore rejected the same or similar to claim 8, above. Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ryu in view of Gangundi as applied, in further view of Zeng et al. (US 20210181350 A1) herein Zeng. In regards to Claim 10, Ryu is silent, but Zeng teaches the following: 10. The apparatus of claim 1, wherein the one or more programs, when executed by the one or more processors, are configured to cause the apparatus to adjust a size of the first range image based on at least one of an angle of view or a resolution of the sensor. (see at least Equation 5 and [0045] “third angle (θ) 334 (also referred to as the vertical angle of view (AOV)) covers near the view point 380 (e.g., distance “L” 370 of FIG. 
3 being less than five meters, in certain embodiments) through determining image size according to the angle of view equations (above).”) It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the features of Zeng with the invention of Ryu, with a reasonable expectation of success, with the motivation of enhancing LiDAR visibility to increase LiDAR light receiving efficiency of backscattering echo of illumination for vehicles (or other mobile platforms), including with an improved LiDAR receiver optical layout (sensors and lenses layout) to achieve a theoretically infinite depth of field (DoF). It is also important for the LiDAR receiver optical layout to maintain sufficiently large field of view to handle road topography changes while achieving the theoretically infinite DoF. (Zeng, [0003]) In regards to Claim 20: Claim 20 is the method performed by the apparatus of claim 10 and is therefore rejected the same or similar to claim 10, above. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jason Roberson, whose telephone number is (571) 272-7793. The examiner can normally be reached from Monday through Friday between 8:00 AM and 4:30 PM. The examiner may also be reached through e-mail at Jason.Roberson@USPTO.GOV, or via FAX at (571) 273-7793. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Navid Z. Mehdizadeh, can be reached at (571) 272-7691. Another resource that is available to applicants is the Patent Application Information Retrieval (PAIR) system. Information regarding the status of an application can be obtained from the PAIR system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have any questions on access to the Private PAIR system, please feel free to contact the Electronic Business Center (EBC) at 866-217-9197 (toll free). Applicants are invited to contact the Office to schedule either an in-person or a telephone interview to discuss and resolve the issues set forth in this Office Action. Although an interview is not required, the Office believes that an interview can be of use to resolve any issues related to a patent application in an efficient and prompt manner. Sincerely, /JASON R ROBERSON/ Patent Examiner, Art Unit 3669 March 27, 2026 /NAVID Z. MEHDIZADEH/ Supervisory Patent Examiner, Art Unit 3669
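
As a rough aid for following the claim 1, claim 2, and claim 9 mappings above (spherical projection of a point cluster into a range image, then removal of host-vehicle "self-hit" pixels via masking data), here is a minimal sketch; the resolution, field-of-view values, and mask semantics are assumptions of this summary, not the implementation of the Applicant, Ryu, or Gangundi:

    import numpy as np

    def spherical_range_image(points: np.ndarray, h: int = 64, w: int = 1024,
                              fov_up_deg: float = 15.0, fov_down_deg: float = -25.0) -> np.ndarray:
        """Project an N x 3 LiDAR point cluster into an h x w range image
        (claim-9-style spherical projection; resolution and FOV are illustrative)."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        r = np.linalg.norm(points, axis=1)                              # range per point
        yaw = np.arctan2(y, x)                                          # azimuth angle
        pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))  # elevation angle
        fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
        u = ((1.0 - (yaw + np.pi) / (2.0 * np.pi)) * w).astype(int) % w              # column index
        v = np.clip(((fov_up - pitch) / (fov_up - fov_down) * h).astype(int), 0, h - 1)  # row index
        img = np.full((h, w), np.inf)
        np.minimum.at(img, (v, u), r)                                   # keep the nearest return per pixel
        return img

    def remove_vehicle_pixels(range_img: np.ndarray, vehicle_mask: np.ndarray) -> np.ndarray:
        """Claim-2-style noise removal: blank out pixels flagged by the vehicle masking data."""
        cleaned = range_img.copy()
        cleaned[vehicle_mask] = np.inf                                  # masked (self-hit) pixels dropped
        return cleaned
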

Prosecution Timeline

Dec 02, 2024
Application Filed
Mar 29, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12570161
CYCLE LIFE MANAGEMENT FOR MIXED CHEMISTRY VEHICLE BATTERY PACK
2y 5m to grant • Granted Mar 10, 2026
Patent 12553732
ROUTING GRAPH MANAGEMENT IN AUTONOMOUS VEHICLE ROUTING
2y 5m to grant • Granted Feb 17, 2026
Patent 12548186
Autonomous Driving System In The Agricultural Field By Means Of An Infrared Camera
2y 5m to grant • Granted Feb 10, 2026
Patent 12528367
CHARGING CONTROL SYSTEM, CHARGING CONTROL METHOD AND AIRCRAFT
2y 5m to grant • Granted Jan 20, 2026
Patent 12522253
Vehicle Control Apparatus and Vehicle Control Method
2y 5m to grant • Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
74%
Grant Probability
97%
With Interview (+22.8%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 369 resolved cases by this examiner. Grant probability derived from career allow rate.
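
The headline projections are simple ratios over the examiner's resolved cases; a minimal sketch of that arithmetic follows (the additive treatment of the interview lift is an assumption about how the tool combines the figures, not a documented formula):

    # Reconstruction of the headline numbers shown above from the displayed counts.
    granted, resolved = 275, 369
    career_allow_rate = granted / resolved                 # ~0.745, displayed as 74%

    interview_lift = 0.228                                 # the "+22.8%" lift reported above
    with_interview = career_allow_rate + interview_lift    # ~0.973, displayed as 97%

    print(f"Career allow rate: {career_allow_rate:.1%}")   # Career allow rate: 74.5%
    print(f"With interview:    {with_interview:.1%}")      # With interview:    97.3%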
