Prosecution Insights
Last updated: April 19, 2026
Application No. 18/510,083

METHOD FOR DETECTING COLLISION DATA, DRIVING DEVICE AND MEDIUM

Final Rejection §101
Filed: Nov 15, 2023
Examiner: AZIZ, ABDULMAJEED
Art Unit: 2875
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Anhui NIO Autonomous Driving Technology Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 41% (Moderate)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 85%

Examiner Intelligence

Career Allow Rate: 41% (78 granted / 190 resolved; -26.9% vs TC avg)
Interview Lift: +44.3% (strong; resolved cases with interview)
Typical Timeline: 3y 1m avg prosecution; 12 currently pending
Career History: 202 total applications across all art units
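The headline percentages can be reproduced from the raw counts above; a minimal sketch, assuming the allow rate is simply grants divided by resolved cases and the interview lift is additive in percentage points (the vendor's actual formulas are not disclosed):

```python
# Reproduce the examiner stats shown above from the raw counts.
# Assumed formulas: allow rate = granted / resolved; interview lift
# is added in percentage points on top of the career rate.

granted, resolved = 78, 190
interview_lift_pts = 44.3  # reported lift, in percentage points

career_allow_rate = granted / resolved                 # 0.4105...
with_interview = career_allow_rate + interview_lift_pts / 100

print(f"Career allow rate: {career_allow_rate:.0%}")   # -> 41%
print(f"With interview:    {with_interview:.0%}")      # -> 85%
```

Both printed figures match the dashboard's 41% and 85% under these assumptions.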

Statute-Specific Performance

§101: 32.9% (-7.1% vs TC avg)
§103: 34.9% (-5.1% vs TC avg)
§102: 5.7% (-34.3% vs TC avg)
§112: 18.7% (-21.3% vs TC avg)
Deltas are vs. the Tech Center average estimate • Based on career data from 190 resolved cases

Office Action

§101
DETAILED ACTION

Claims 1, 5-8 & 12-15 are currently pending and have been examined in this application. This FINAL communication is in response to the amendment submitted on 8/28/25. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 8/28/25 have been fully considered but they are not persuasive.

RE: 101 Issue #1

Applicant: Applicant submits that any alleged abstract idea recited in claim 1 is incorporated into a practical application that improves the technical field of autonomously driven vehicles. Paragraph 5 of the US publication describes that embodiments of the application are directed to solving a technical problem where "the reliability of collected collision scenario data is relatively low, which reduces the performance of autonomous driving perception and decision-making algorithms and the reliability of vehicle abnormality diagnosis, accident liability determination, etc." As noted in paragraph 4 of the US publication, the low reliability of such collision scenario data (e.g., data collected about a vehicle collision) is due to "manual collection of witness statements" which may include "personal subjective opinions, resulting in relatively low reliability of the collected collision scenario data."
Meanwhile paragraph 59 of the US publication describes how the claimed collision confidence level, which is arrived at through the series of steps recited in claim 1, solves these issues:

[0059] In a specific implementation process, if the collision confidence level is greater than the preset confidence threshold, the scenario data may be determined as the collision scenario data, and/or the scenario data may be recalled, so that reliable data is provided for performance of autonomous driving perception and decision-making algorithms, to control the object to be detected, and to prevent collisions between the object to be detected and the target object. In addition, considerable data is also provided for vehicle abnormality diagnosis, accident liability determination, etc., to improve the reliability of vehicle abnormality diagnosis, accident liability determination, etc.

Stated another way, upon performing the series of steps in claim 1 on the initial claimed scenario data and determining that a collision confidence level is greater than a threshold level, the scenario may be used, among other things, in processes that improve autonomous driving decisions to prevent collisions, which is a practical application. Claim 1 captures this practical application by reciting "determining that the collision confidence level is greater than a preset confidence threshold, and, in response, using the scenario data to in an autonomous driving decision-making algorithm for autonomously driving the vehicle."

Examiner: Improving the reliability of scenario data collection in the manner described in the claims is not a technical solution to a technical problem. Is the data being collected in a way that improves the functioning of a computer, or is there some machine learning that improves the autonomous driving capabilities (for which positive recitations of such control can be implemented in the claim language)?
Gathering and determining more reliable data doesn’t necessarily improve autonomous vehicle technology. Is there some machine learning, in conjunction with additional recitations, that can be added to positively recite autonomous control of the vehicle?

Claim Objections

Claim 1 is objected to because of the following informalities: Claim 1: Amend to: “using the scenario data [[to]] in an autonomous” (the “to” appears to be a typographical error). Appropriate correction is required.

Allowable Subject Matter

Claims 1, 5-8 & 12-15 (the claimed invention) would be potentially allowable if rewritten or amended to overcome the outstanding rejection(s) under 35 U.S.C. 101 or 35 U.S.C. 101 (pre-AIA), set forth in this Office action. The 103 rejection is withdrawn.

Claim 1. Houston teaches the following limitations:

A method for detecting collision data, comprising: obtaining scenario data for a vehicle, wherein the scenario data includes at least perception data about a target object near the vehicle and driving data about how the vehicle is being driven; (Houston – [claim 1] A method comprising, by a computing system: accessing contextual data captured using one or more sensors associated with an autonomous vehicle while the autonomous vehicle traverses a route, wherein the contextual data includes perception data for an environment external to the autonomous vehicle and associated with the route; generating, based on at least a portion of the perception data, one or more representations of the environment external to the autonomous vehicle; [0051] contextual data about the state of the vehicle, the vehicle's environment, and/or the human driver of the vehicle.
The contextual data may include parameters associated with the vehicle, such as a speed, moving direction (e.g., heading), trajectory, GPS coordinates, acceleration, pressure on a braking pedal, a pressure on an acceleration pedal, steering force on a steering wheel, wheel turning direction, turn-signal state, navigation map, target place, route, estimated time, detour, or the like. [0142] may collect contextual data of the surrounding environment based on one or more sensors associated with the vehicle system 1510. In particular embodiments, the vehicle system 1510 may collect data related to road conditions or one or more objects of the surrounding environment, for example, but not limited to, road layout, pedestrians, other vehicles (e.g., 1520), traffic status…the vehicle system 1510 may have a perception of the surrounding environment based on the contextual data collected through one or more sensors in real-time and/or based on historical contextual data stored in a vehicle model database. [0160] Perception module 1710 may process the available data (e.g., sensor data, data from a high-definition map, etc.) to derive information about the contextual environment… perception module 1710 may include one or more agent modelers (e.g., object detectors, object classifiers, or machine-learning models trained to derive information from the sensor data) to detect and/or classify agents present in the environment of the vehicle (e.g., other vehicles, pedestrians, moving objects). Perception module 1710 may also determine various characteristics of the agents. For example, perception module 1710 may track the velocities, moving directions, accelerations, trajectories, relative distances, or relative positions of these agents.) Examiner Note: Instant Spec [0024] “on the object to be detected such as a vehicle”. Object to be detected corresponds to another vehicle. 
[0024] “driving data of the object to be detected may include a lateral acceleration of the object to be detected, a lateral acceleration change rate of the object to be detected, a longitudinal acceleration of the object to be detected, a longitudinal acceleration change rate of the object to be detected, etc.” calculating a first collision risk score based on the perception data, and calculating a second collision risk score based on the driving data, (Houston – [0038] A risk score may be, for example, a collision probability determined for a particular environment based on sensor data that represents the environment. [0039] when a human driver activates the vehicle's brakes suddenly to reduce speed to 25 mph in an area where the appropriate speed or speed limit is substantially higher, the vehicle system may determine that an elevated level of risk is present, generate a corresponding risk score, and include the vehicle environment and risk score in the training a machine-learning model's training data. [0164] planning module 1720 may generate, based on a given predicted contextual representation, several different plans (e.g., goals or navigation instructions) for the vehicle. For each plan, the planning module 1720 may compute a score that represents the desirability of that plan. For example, if the plan would likely result in the vehicle colliding with an agent at a predicted location for that agent, as determined based on the predicted contextual representation [0142, 0160]) Examiner Note: based on the predicted contextual representation (whether through the perception data or driving data), a plan is established with a corresponding risk score computation. wherein calculating the second collision risk score based on the driving data includes: (Houston – [0038] A risk score may be, for example, a collision probability determined for a particular environment based on sensor data that represents the environment. 
[0039] when a human driver activates the vehicle's brakes suddenly to reduce speed to 25 mph in an area where the appropriate speed or speed limit is substantially higher, the vehicle system may determine that an elevated level of risk is present, generate a corresponding risk score, and include the vehicle environment and risk score in the training a machine-learning model's training data. [0164] planning module 1720 may generate, based on a given predicted contextual representation, several different plans (e.g., goals or navigation instructions) for the vehicle. For each plan, the planning module 1720 may compute a score that represents the desirability of that plan. For example, if the plan would likely result in the vehicle colliding with an agent at a predicted location for that agent, as determined based on the predicted contextual representation [0142, 0160]) obtaining a plurality of sets of historical driving data describing a collision involving another vehicle; (Houston – [0064] predict collision probabilities based on historical perception data. The collision probabilities may be examples of risk scores. 
The method may begin at step 141, where a vehicle system may retrieve historical vehicle parameters and perception data for vehicle environment associated with a time T 1 in the past; [0141] the vehicle system 1510 may collect the vehicle data and driving behavior data related to, for example, but not limited to, vehicle images, vehicle speeds, acceleration, vehicle moving paths, vehicle driving trajectories, locations, turning signal status (e.g., on-off state of turning signals), braking signal status, a distance to another vehicle, a relative speed to another vehicle, a distance to a pedestrian, a relative speed to a pedestrian, a distance to a traffic signal, a distance to an intersection, a distance to a road sign, a distance to curb, a relative position to a road line, an object in a field of view of the vehicle, positions of other traffic agents, aggressiveness metrics of other vehicles, etc. [0142] the vehicle system 1510 may have a perception of the surrounding environment based on the contextual data collected through one or more sensors in real-time and/or based on historical contextual data stored in a vehicle model database. [0160] may track the velocities, moving directions, accelerations, trajectories, relative distances, or relative positions of these agents. [claim 1, 00141-0142, 0160]). extracting feature vectors from each set of historical driving data; (Houston – [0161] The contextual environment may be represented in any suitable manner. As an example and not by way of limitation, the contextual representation may be encoded as a vector or matrix of numerical values, with each value in the vector/matrix corresponding to a predetermined category of information. For example, each agent in the environment may be represented by a sequence of values, starting with the agent's coordinate, classification (e.g., vehicle, pedestrian, etc.), orientation, velocity, trajectory, and so on. 
Alternatively, information about the contextual environment may be represented by a raster image that visually depicts the agent, semantic information, etc. For example, the raster image may be a birds-eye view of the vehicle and its surrounding, up to a predetermined distance. The raster image may include visual information (e.g., bounding boxes, color-coded shapes, etc.) that represent various data of interest (e.g., vehicles, pedestrians, lanes, buildings, etc.).) extracting a feature vector of the driving data; (Houston – [0161] The contextual environment may be represented in any suitable manner. As an example and not by way of limitation, the contextual representation may be encoded as a vector or matrix of numerical values, with each value in the vector/matrix corresponding to a predetermined category of information. For example, each agent in the environment may be represented by a sequence of values, starting with the agent's coordinate, classification (e.g., vehicle, pedestrian, etc.), orientation, velocity, trajectory, and so on. Alternatively, information about the contextual environment may be represented by a raster image that visually depicts the agent, semantic information, etc. For example, the raster image may be a birds-eye view of the vehicle and its surrounding, up to a predetermined distance. The raster image may include visual information (e.g., bounding boxes, color-coded shapes, etc.) that represent various data of interest (e.g., vehicles, pedestrians, lanes, buildings, etc.).) determining that the collision confidence level is greater than a preset confidence threshold, and, in response, using the scenario data to in an autonomous driving decision-making algorithm for autonomously driving the vehicle. (Houston – [0038] An anomalous risk score may be, for example, a risk score that is greater than a threshold value. 
An anomalous risk score may indicate, for example, that a driver of a vehicle caused a sudden or substantial change in the vehicle's operation, such as suddenly braking or turning. The vehicle system may identify anomalous risk scores by predicting risk scores based on characteristics of the environment, and determining whether the predicted risk scores satisfy anomaly criteria. The anomaly criteria may include exceeding an associated threshold or differing from ordinary (e.g., average) values by more than a threshold amount. Upon identifying a risk score that satisfies the anomaly criteria, the vehicle system may perform corresponding actions…which may include storing contextual data for subsequent use, [0045] A vehicle system 260 such as that shown in FIG. 1E may identify risk scores, such as anomalously-high predicted collision probabilities…a neural network 264 may learn the probability of collision in the top view 112 from a training process in which images from subsequent times, such as the top view 110 of FIG. 1B, are used to determine whether a collision actually occurred [0060] the vehicle system may determine that one or more collision-related operations are to be performed based on a comparison of the first predicted collision probability to a threshold collision probability. At step 125, the vehicle system may cause the first vehicle to perform the one or more collision-related operations based on the predicted and threshold collision probabilities. [0151]) Examiner Note: when a probability of collision exceeds a collision probability threshold, the scenario data anomaly may be learned by the neural network in terms of collision probability and this information is used to control operations of an autonomous vehicle. 
Houston does not explicitly teach the following limitations, however Olson teaches: obtaining a collision confidence level for the scenario data based on the first collision risk score and the second collision risk score; and Olson – [col 17 ln 9-12] after running any number of simulations one or more times (which may comprise various perturbations in the one or more times) a probability of different types of collisions may be determined. [col 20 1-3 & 24-33] each simulation generating simulation data as described herein as well as severity scores for vehicle occupants and objects… At 612, the process includes determining whether the severity score, a collection of severity scores, or other metric describing the collisions exceeds a threshold. The threshold may be based on one or more benchmarks or safety standards. The severity scores for a plurality of simulations may be aggregated to produce a set of severity scores that may be evaluated to determine a probability of collisions having a threshold level of severity [claim 1]) Examiner Note: collision risk scores correspond to severity scores. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to be motivated to modify Houston with Olson in order to determine a probability of collisions (confidence level) based on a plurality of severity or risk scores [Olson – col 20 ln 1-3 & 25-33]. Houston does not explicitly teach the following limitations, however Kedarisetti further teaches: applying a clustering algorithm to the [extracted feature vectors] to obtain at least one clustering center; (Kedarisetti – [0077] the pose vectors 470 are clustered, using for example k-means clustering. This results in n clusters 530, or sets, of pose vectors 470, each having a cluster center 535 and a radius r.) 
calculating a Euclidean distance between the [extracted feature vector] and each of the at least one clustering center; (Kedarisetti – [claim 3] the Euclidean distance is determined using the pose vector … and a cluster center of each of the clusters of pose vectors.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to be motivated to modify Houston with Kedarisetti in order to locate clustering data centers more efficiently using k-means clustering [Kedarisetti – 0080].

Based on prior art search results, the prior art of record neither anticipates nor renders obvious the claimed subject matter, as a whole or taken in combination, and does not teach: in response to determining that there is one calculated Euclidean distance, determining a first distance threshold corresponding to the Euclidean distance, and determining the second collision risk score based on the first distance threshold; and in response to determining that there are a plurality of calculated Euclidean distances, determining a minimum Euclidean distance in the plurality of calculated Euclidean distances and a second distance threshold corresponding to the minimum Euclidean distance, and determining the second collision risk score based on the second distance threshold.

Most Relevant Prior Art:
Houston (US 20210197720) teaches systems and methods for incident detection using inference models.
Olson (US 12187322) teaches a severity simulation for autonomous vehicles.
Kedarisetti (US 20210350117) teaches anomalous pose detection method and system.
Raghavan (US 20230196741) teaches systems and methods for automated product classification.

Therefore, the claimed invention has overcome the prior art of record.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 5-8 & 12-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims are either directed to a system or method, which is one of the statutory categories of invention. (Step 1: YES.) The Examiner has identified method Claim 1 as the claim that represents the claimed invention for analysis; it is similar to system Claims 8 & 15. Claim 1 recites the limitations of (additional elements emphasized in bold and considered to be parsed from the remaining abstract idea):

A method for detecting collision data, comprising: obtaining scenario data for a vehicle, wherein the scenario data includes at least perception data about a target object near the vehicle and driving data about how the vehicle is being driven; calculating a first collision risk score based on the perception data, and calculating a second collision risk score based on the driving data, wherein calculating the second collision risk score based on the driving data includes: obtaining a plurality of sets of historical driving data describing a collision involving another vehicle; extracting feature vectors from each set of historical driving data; applying a clustering algorithm to the extracted feature vectors to obtain at least one clustering center; extracting a feature vector of the driving data; calculating a Euclidean distance between the extracted feature vector and each of the at least one clustering center; in response to determining that there is one calculated Euclidean distance, determining a first distance threshold corresponding to the Euclidean distance, and determining the second collision risk score based on the first distance threshold; and in
response to determining that there are a plurality of calculated Euclidean distances, determining a minimum Euclidean distance in the plurality of calculated Euclidean distances and a second distance threshold corresponding to the minimum Euclidean distance, and determining the second collision risk score based on the second distance threshold; obtaining a collision confidence level for the scenario data based on the first collision risk score and the second collision risk score; and determining that the collision confidence level is greater than a preset confidence threshold, and, in response, using the scenario data to in an autonomous driving decision-making algorithm for autonomously driving the vehicle.

This is a process that, under its broadest reasonable interpretation, covers performance of the limitation(s) as a Certain Method of Organizing Human Activity (fundamental economic practice), or a Mental Process (a concept performed in the human mind) of calculating risk scores and confidence levels to categorize or recall data. If a claim limitation, under its broadest reasonable interpretation (BRI), covers performance of the limitation as a fundamental economic practice, then it falls within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Similarly, if a claim limitation, under its BRI, covers performance of the limitation in the human mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. (Claims can recite a mental process even if they are claimed as being performed on a computer. Gottschalk v. Benson, 409 U.S. 63; "Courts have examined claims that required the use of a computer and still found that the underlying, patent-ineligible invention could be performed via pen and paper or in a person’s mind." Versata Dev. Group v. SAP Am., Inc., 793 F.3d 1306, 1335, 115 USPQ2d 1681, 1702 (Fed. Cir. 2015).)
Accordingly, the claim recites an abstract idea. (Step 2A, Prong 1: YES; the claims are abstract.)

This judicial exception is not integrated into a practical application. Limitations that are not indicative of integration into a practical application include: (1) adding the words “apply it” (or an equivalent) with the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)); (2) adding insignificant extra-solution activity to the judicial exception (MPEP 2106.05(g)); (3) generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)).

Claim 1 is directed to a clustering algorithm and an autonomous driving decision-making algorithm for autonomously driving the vehicle; however, there needs to be structure for executing the method steps, utilizing at least a processor. Under the BRI of the claim, the Examiner will interpret that the software is being executed by a computer processor (in the interest of advancing compact prosecution), which is merely a generic computer component (see instant spec [0066]: “processor 61 performs different steps of the method for detecting collision data or the method for controlling a driving device of the above method embodiments”). The next response should address this by positively reciting the processor and its execution of the method steps (see the preamble of Claim 8 as an example).
Similarly, Claim 8 is directed to a driving device, a processor, a memory that stores a plurality of program codes, a clustering algorithm, and an autonomous driving decision-making algorithm for autonomously driving the vehicle, and Claim 15 is directed to a non-transitory CRM storing a plurality of program codes, a processor, vehicles, a clustering algorithm, and an autonomous driving decision-making algorithm for autonomously driving the vehicle, which are generic computer components (and, for Claims 1, 8 & 15, vehicles, which merely link the judicial exception to a particular technological environment). The computer hardware is recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that it amounts to no more than mere instructions to implement an abstract idea by adding the words “apply it” (or an equivalent) with the judicial exception and generally linking the use of the judicial exception to a particular technological environment. Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, claim 1 is directed to an abstract idea without a practical application. (Step 2A, Prong 2: NO; the additional claimed elements are not integrated into a practical application.)

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and as an ordered combination, they do not add significantly more (also known as an “inventive concept”) to the exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using computer hardware amounts to no more than mere instructions to implement an abstract idea by adding the words “apply it” (or an equivalent) with the judicial exception and generally linking the use of the judicial exception to a particular technological environment. Mere instructions to implement an abstract idea on or with the use of generic computer components cannot provide an inventive concept, rendering the claim patent ineligible. Thus, claim 1 is not patent eligible. (Step 2B: NO; the claims do not provide significantly more.)

The dependent claims further define the abstract idea that is present in their respective independent claims and hence are abstract for at least the reasons presented above. The dependent claims do not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception when considered both individually and as an ordered combination. Therefore, the dependent claims are directed to an abstract idea. Thus, the aforementioned claims are not patent-eligible.

Conclusion

The prior art made of record, and not relied upon, considered pertinent to applicant's disclosure or directed to the state of the art, is listed on the enclosed PTO-892. The following is a brief description of relevant prior art that was cited but not applied:

Gyllenhammar (US 20220089153) provides scenario identification in autonomous driving environments.
Whiteside (US 20240419572) provides performance testing for mobile robot trajectory planners.
Heck (US 20230166743) provides devices and methods for assisting operation of vehicles based on situational assessment fusing exponential risks.
Stein (US 20190325595) provides systems and methods for vehicle environment modeling with a camera.
O’Brien (US 20190235500) provides a method and system for autonomous decision making, corrective action, and navigation in a dynamically changing world.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDULMAJEED AZIZ, whose telephone number is (571) 270-5046. The examiner can normally be reached M-F, 7:00 AM-3:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ALLANA LEWIN BIDDER, can be reached at (571) 272-5560. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ABDULMAJEED AZIZ/Supervisory Patent Examiner, Art Unit 2875
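For orientation, the clustering and distance-threshold logic that the Office Action indicates distinguishes claim 1 over the prior art can be sketched as follows. This is a hypothetical illustration only: the function names, the minimal k-means implementation, the mapping from distance to a [0, 1] risk score, and the weighted fusion of the two risk scores are all assumptions for readability, not the applicant's disclosed implementation.

```python
import numpy as np

def kmeans_centers(X, k, iters=20, seed=0):
    """Minimal Lloyd's k-means: obtain clustering centers from feature
    vectors extracted from historical collision driving data."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each vector to its nearest center, then recompute centers.
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def second_collision_risk_score(historical_vectors, current_vector, thresholds, k=2):
    """Claim 1's branch logic: a single Euclidean distance uses its own
    threshold; several distances use the minimum distance and the
    threshold corresponding to that minimum."""
    centers = kmeans_centers(np.asarray(historical_vectors, dtype=float), k)
    distances = np.linalg.norm(centers - np.asarray(current_vector, dtype=float), axis=1)
    if len(distances) == 1:
        d, t = distances[0], thresholds[0]
    else:
        i = int(np.argmin(distances))
        d, t = distances[i], thresholds[i]
    # Assumed mapping: being closer to a historical-collision cluster than
    # its threshold yields a higher risk score (the claim leaves this open).
    return float(np.clip(1.0 - d / t, 0.0, 1.0))

def collision_confidence(perception_score, driving_score, w=0.5):
    """Assumed fusion of the first and second collision risk scores; the
    claim only requires the confidence level to be 'based on' both."""
    return w * perception_score + (1 - w) * driving_score
```

A scenario would then be retained for the autonomous driving decision-making algorithm only when `collision_confidence(...)` exceeds the preset confidence threshold.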

Prosecution Timeline

Nov 15, 2023 — Application Filed
May 24, 2025 — Non-Final Rejection — §101
Aug 28, 2025 — Response Filed
Nov 08, 2025 — Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566215
BATTERY APPARATUS AND METHOD FOR ESTIMATING BATTERY STATE
2y 5m to grant • Granted Mar 03, 2026
Patent 12333967
FLAME PROTECTED OPTIC
2y 5m to grant • Granted Jun 17, 2025
Patent 12087743
LIGHT-EMITTING WINDOW ELEMENT AND MOTOR VEHICLE COMPRISING A LIGHT-EMITTING WINDOW ELEMENT
2y 5m to grant • Granted Sep 10, 2024
Patent 12085268
HEAT SINK, SEPARATOR, AND LIGHTING DEVICE APPLYING SAME
2y 5m to grant • Granted Sep 10, 2024
Patent 12078888
BACKLIGHT UNIT AND DISPLAY DEVICE INCLUDING THE SAME
2y 5m to grant • Granted Sep 03, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 41%
With Interview: 85% (+44.3%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 190 resolved cases by this examiner. Grant probability derived from career allow rate.
