Prosecution Insights
Last updated: April 19, 2026
Application No. 18/900,204

PARKING-SLOT LEARNING METHOD AND APPARATUS

Non-Final OA: §102, §103
Filed
Sep 27, 2024
Examiner
TRIEU, VAN THANH
Art Unit
2685
Tech Center
2600 — Communications
Assignee
J-QuAD DYNAMICS Inc.
OA Round
1 (Non-Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 2m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 84% (above average; 909 granted / 1076 resolved; +22.5% vs TC avg)
Interview Lift: +13.0% (moderate lift, for resolved cases with interview)
Avg Prosecution: 2y 2m typical timeline
Career History: 1109 total applications across all art units; 33 currently pending
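The headline figures above follow from the raw career counts; a minimal sketch of the arithmetic (assuming, as the page implies, that the "with interview" figure is the career allow rate plus the reported +13% lift):

```python
# Reproduce the dashboard's headline examiner statistics from the raw
# counts shown above. Variable names are illustrative.
granted = 909          # applications granted by this examiner
resolved = 1076        # total resolved cases

allow_rate = granted / resolved        # career allow rate
interview_lift = 0.13                  # reported lift when an interview is held
with_interview = min(allow_rate + interview_lift, 1.0)

print(f"Career allow rate: {allow_rate:.0%}")      # 84%
print(f"With interview:    {with_interview:.0%}")
```

The simple addition reproduces roughly 97-98%; the page's 98% likely reflects rounding or a slightly different adjustment.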

Statute-Specific Performance

§101: 3.5% (-36.5% vs TC avg)
§103: 44.6% (+4.6% vs TC avg)
§102: 36.7% (-3.3% vs TC avg)
§112: 6.0% (-34.0% vs TC avg)
Tech Center average estimate based on career data from 1076 resolved cases.
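If each delta above is simply the examiner's rate minus the Tech Center average, the baseline can be recovered from the listed figures; a quick sketch (this is an assumption about how the dashboard computes its deltas):

```python
# Rate and reported delta vs. the Tech Center average for each statute,
# as listed above. Subtracting the delta recovers the implied baseline.
statute_stats = {
    "101": (0.035, -0.365),
    "103": (0.446, +0.046),
    "102": (0.367, -0.033),
    "112": (0.060, -0.340),
}

for statute, (rate, delta) in statute_stats.items():
    tc_avg = rate - delta  # implied Tech Center average for this statute
    print(f"§{statute}: examiner {rate:.1%}, implied TC average {tc_avg:.1%}")
```

All four statutes imply the same 40.0% baseline, which suggests the comparison line marks a single overall Tech Center estimate rather than per-statute averages.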

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless –

Claims 1, 2, 4, 8-12, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Wang et al. [US 2024/0227785].

Claim 1. A method of learning a recognition rule (the machine-learning model 608, see abstract, Figs. 6, 8, para [0005-0007, 0061-0063]) of at least one parking slot around a vehicle (see Figs. 2A-C, 7A-C), the method comprising: objectifying at least one parking slot at least partly included in a learning BEV image 606 as an at least one objectified parking slot (the machine-learning model 608 identifying and learning images of the parking slot 102, see Figs. 6, 7A-7C, para [0006, 0007, 0051]) comprised of (i) a plurality of corner points of the at least one parking slot (the corners 702, 704, 706, 708, see Fig. 7A, para [0053, 0054]), (ii) a center point of the at least one parking slot (the center coordinate 712, see Fig. 7B, para [0055]); and (iii) attribute information on the at least one parking slot (the parking lot prediction data 610, see Fig. 8, para [0060-0063]); generating annotation data for the objectified at least one parking slot (the processor and machine-learning model 608 generate the map of the BEV image 606 of parking slot 102, see Figs. 7A-C, 8, 9, para [0061-0062]); executing, from the learning image, learning of a recognition rule of the at least one parking slot based on the generated annotation data for the objectified at least one parking slot (the computer processors 1602 executing the machine-learning model 608 and the BEV image 606 of the parking slot 102 at a high level, where a neural network 1000 is represented as nodes 1002 forming layers 1005 and edges 1004, see Figs. 7A-C, 10, 13-15, para [0063-0073, 0103, 0104]); receiving an input image around the vehicle captured by a camera installed in the vehicle (the computer processors 1602 with machine-learned module 608 obtain/receive each ground truth parking representation in BEV image input metadata 1306 from the cameras 304, 308, see Figs. 3, 4, 13, step 1308, para [0006, 0007, 0045, 0051, 0093]); and determining whether there is at least one target parking slot in the input image in accordance with the learned recognition rule (the computer processor is further configured to determine a first location of the first available parking slot using the parking slot prediction data and park the vehicle in the first available parking slot without the assistance of a driver when the first parking slot confidence meets a threshold "as rule", see Figs. 13, 15, abstract, para [0005, 0007, 0101]).

Claim 2.
A method of learning a recognition rule of at least one parking slot around a vehicle, the method comprising: objectifying at least one parking slot at least partly included in a learning image as an at least one objectified parking slot comprised of (i) a plurality of corner points of the at least one parking slot, (ii) a center point of the at least one parking slot; and (iii) attribute information on the at least one parking slot; generating annotation data for the objectified at least one parking slot; executing, from the learning image, learning of a recognition rule of the at least one parking slot based on the generated annotation data for the objectified at least one parking slot (as cited with respect to claim 1 above, see Figs. 3, 4, 7, 13-16).

Claim 4. The method according to claim 1, wherein: the at least one parking slot comprises a plurality of parking slots, the method further comprising: calculating the number of one or more of the corner points of at least one of the parking slots, the one or more of the corner points of the at least one of the parking slots being blocked by at least one object other than each of the parking slots (the computing and calculating of the aggregate corner, see Figs. 7A-C, 11, para [0006, 0055-0058, 0066, 0083-0086]), the executing of the learning does not learn the recognition rule of the at least one of the parking slots based on the generated annotation data for the objectified at least one of the parking slots upon determination that the number of the one or more of the corner points of the at least one of the parking slots is more than or equal to a predetermined threshold number (reads on the machine-learned module 608 immediately removing a parking slot from consideration when its anchor boxes have an anchor box compatibility score greater than a compatibility threshold, see Figs. 11, 12, para [0079, 0080, 0092]).

Claim 8.
The method according to claim 7, wherein: the calculating of the blocked amount calculates the blocked amount based on a view angle of the vehicle to the at least one parking slot (one with ordinary skill in the art will recognize that parking slots may be defined using alternative and/or additional features such as vehicle entry angle and parking slot width, see Figs. 2A-C, 7C, para [0041]; the field-of-view (FOV) solid angle from the viewpoint of the vehicle 301, where the parking slot is partially blocked by a parked vehicle 204, see Figs. 2A-C, 3, 5, 7C, para [0045, 0062]).

Claim 9. The method according to claim 1, wherein: the executing of the learning defines typical orientations of the at least one parking slot, and executes the learning for each of the typical orientations (the parking slot orientation, see Figs. 2A-C, 3, 6-8, para [0051, 0060]).

Claim 10. The method according to claim 9, wherein: the executing of the learning encodes each of the corner points and the center point of the at least one parking slot for each of the typical orientations to acquire encoded data for each typical orientation (see Fig. 3, para [0046]), and executes the learning using the encoded data for each typical orientation (the simultaneous encoding of corner orientation relationships, see para [0037]).

Claim 11. The method according to claim 1, further comprising: recognizing, from the learning image, a plurality of parking slots as the at least one parking slot (see Figs. 2A-C and 7A-C); selecting one parking slot from the recognized parking slots as a reference object (reads on the camera(s) selected for the visibility of parking slots 102, see para [0046]); searching, from the reference object, for at least one parking-slot train in at least one of a predetermined first direction and a second direction opposite to the first direction, the at least one parking-slot train being comprised of selected parking slots that are included in the plurality of parking slots and are continuously aligned in at least one of the first direction or the second direction to find the at least one parking-slot train (the searching vehicle 202 for the parking slots, training data including a direction of the gradient or a direction opposite to the gradient, see Figs. 3, 6, 7A-C, 10, 13, 14, para [0006, 0043, 0051, 0052, 0069-0077, 0096, 0100]); and performing, based on information related to the at least one parking-slot train, a parking-lot environment determination of whether the vehicle is located in a parking lot (see Figs. 14, 15, para [0077, 0101]).

Claim 12. The method according to claim 11, wherein: the performing of the parking-lot environment determination comprises: acquiring, based on the at least one parking-slot train, a parking region that includes the at least one parking-slot train and an aisle arranged to face the parking-slot train, the vehicle being travelable in the aisle (the acquiring of BEV images of the environment of the parking slots 102, vehicle 301, left aisle 213 and right aisle 214, see Figs. 2A, 7A-7C, para [0044-0049, 0053-0055]); determining that the vehicle is located in a parking lot upon determination that the vehicle has entered the parking region (see para [0007, 0101]); and determining that the vehicle is not located in a parking lot upon determination that the vehicle has exited from the parking region (reads on the empty parking slot, see Figs. 2A-C, para [0039]).

Claim 20.
An apparatus for learning a recognition rule of at least one parking slot around a vehicle, the apparatus comprising: a memory device storing learning program instructions; and a processor configured to execute the learning program instructions to accordingly: objectify at least one parking slot at least partly included in a learning image as an at least one objectified parking slot comprised of (i) a plurality of corner points of the at least one parking slot, (ii) a center point of the at least one parking slot; and (iii) attribute information on the at least one parking slot; generate annotation data for the objectified at least one parking slot; execute, from the learning image, learning of a recognition rule of the at least one parking slot based on the generated annotation data for the objectified at least one parking slot (as cited with respect to claim 1 above).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. [US 2024/0227785] in view of Nakada et al. [US 2021/0179078].

Claim 3. The method according to claim 1, wherein: the attribute information includes at least one of: parking-frame information related to a parking frame of the at least one parking slot (the rectangular shaped/frame parking slot 102, see Figs. 7A-7C, para [0054]).
But Wang et al. fails to disclose wheel-stopper information on a wheel stopper located in the at least one parking slot; and additional attribute information related to the at least one parking slot, the additional attribute information being different from the parking-frame information and the wheel-stopper information.

However, Wang et al. teaches that in Figs. 7A-7C, which use four corners, the entrance-left corner (702) is indexed with the number 1, the entrance-right corner (704) uses the index of 2, the end-left corner (706) uses the index of 3, and the end-right corner (708) is indexed with the number 4. Thus, for example, using absolute coordinates (701) as seen in FIG. 7A, the location of the end-right corner (708) is given by the coordinate pair (x.sub.4, y.sub.4). The assignation of a corner to an index is arbitrary, and one with ordinary skill in the art will recognize that any choice of mutually exclusive indices may be used with the corners of the parking representations, so long as the choice is consistently applied (see Figs. 2A-C, 5, 7A-C, para [0054]).

Nakada et al. suggests that the automatic parking device processes an image behind the vehicle captured by an imaging means, recognizes a parking area and its rear situation (namely, a situation in a rear part of the parking area), and determines the stop position based on the parking area and its rear situation. More specifically, in a case where a wheel stopper is recognized in the rear part of the parking area, the automatic parking device determines a contact position of each rear wheel and the wheel stopper as the stop position. In a case where the wheel stopper is not present in the rear part of the parking area, the automatic parking device recognizes a parking line in the rear part of the parking area, if any, and determines the stop position such that a prescribed space is secured between the parking line and a rear end of the vehicle. In a case where neither the wheel stopper nor the parking line is present in the rear part of the parking area, the automatic parking device determines whether a wall surface is present in the rear part of the parking area (see para [0002]). The target parking space 53, the stop position 57, and the trajectory 56 (see FIG. 7B) are displayed on the parking screen such that the target parking space 53, the stop position 57, and the trajectory 56 overlap with the travel direction image and the look-down image. While executing the driving process, the action plan unit 43 determines whether the stop position 57 is suitable. In a case where the parking space candidate the occupant has selected as the target parking space 53 is the undelimited parking space 51 (see FIGS. 6A to 6D) regarded as the parking area 50 by the external environment recognizing unit 41, an obstacle 58 may be present in a rear area of the target parking space 53. Incidentally, a vehicle stopper (wheel stopper) is not included in the obstacle 58 because the vehicle stopper is naturally present in the target parking space 53 (the parking area 50). Further, even if the space detected by the external environment recognizing unit 41 is equal to or larger than the parking size of a certain vehicle, the obstacle 58 may be placed in the rear area of the target parking space 53 after the start of the driving process (see Figs. 1, 6A-6D, 7B, 8A, para [0061, 0114]).

Therefore, it would have been obvious to one skilled in the art before the effective filing date of the invention to add or implement the parking stopper of Nakada et al. at the entrance corners and/or end corners of the parking slot of Wang et al. for stopping a vehicle while entering the parking slot, to prevent colliding with or impacting a nearby object or a wall, since those corners represent the parking space limit or size for parking a car.

Claims 13-15 are rejected under 35 U.S.C.
103 as being unpatentable over Wang et al. [US 2024/0227785] in view of Sakai et al. [US 2023/0256967].

Claim 13. The method according to claim 1, further comprising: recognizing, from the learning image, a plurality of parking slots as the at least one parking slot (see Figs. 2A-C, 3, 7A-C); acquiring, based on the plurality of parking slots, at least one parking-slot train that is comprised of selected parking slots that are included in the plurality of parking slots and are continuously aligned in a predetermined direction (as cited with respect to claims 11, 12 above); acquiring, based on the at least one parking-slot train, a parking region that includes the at least one parking-slot train and an aisle arranged to face the parking-slot train, the vehicle being travelable in the aisle (as cited with respect to claims 11, 12 above).

But Wang et al. fails to disclose extending, upon determination that there is at least one additional parking slot or at least one additional parking-slot train located adjacent to the acquired parking region in the predetermined direction, the parking region in the predetermined direction; and performing, based on information related to the extended parking region, a parking-lot environment determination of whether the vehicle is located in a parking lot.

However, Wang et al. teaches that the example BEV image (500) displays the surrounding environment of the vehicle (301) (the parking lot (100)), including the parking slots (102), the unavailable region (420) and the parked vehicle (204). Again, to avoid cluttering the figure, it is noted that not every parking slot (102) is labelled. As seen, the IPM often causes distortion of non-planar objects. For example, the parked vehicle (204) appears stretched as if extending to a point at infinity in FIG. 5. However, the BEV image, generally, does not strongly influence the representation of the parking slots (102), and other planar objects, such that the BEV image can be used for parking slot (102) identification and localization (see Figs. 4, 5, para [0049]).

Sakai et al. suggests that FIG. 6 shows a state where the own vehicle C approaches a parallel-parking type on-road parking strip PL, and FIG. 7 shows a state where the own vehicle C approaches a vertical-parking type on-road parking strip PL. Thus, in the case where a vehicle laterally approaches a parallel parking space or approaches the vertical-parking space, it is possible that the area is a parking strip PL, not the parking place PP. In other words, assuming an arrangement direction to be a direction along which a plurality of parking areas PA are consecutively arranged, in the case where a vehicle approaches the parking area PA such that the parking area PA and the own vehicle C are partially overlapped in the arrangement direction, the parking areas may constitute the on-road parking strip PL. Note that 'the parking area PA and the own vehicle C are partially overlapped in the arrangement direction' refers to the own vehicle C partially overlapping with an extension space as a virtual space in which the parking area PA is extended in the arrangement direction. That is, at least part of the own vehicle C is located in the above-mentioned extension space (see Figs. 6, 7, para [0066]).

Therefore, it would have been obvious to one skilled in the art before the effective filing date of the invention to recognize that the BEV image of the parking slot appearing stretched as if extending to a point at infinity in Wang et al. is functionally equivalent to the extending parking space of Sakai et al., to provide an expanding field of view surrounding the parking spaces for determining a parking space safely and avoiding collision with surrounding objects.

Claim 14.
The method according to claim 13, wherein: the acquiring of the at least one parking-slot train acquires first and second parking-slot trains as the at least one parking-slot train; and the acquiring of the parking region acquires the parking region that is configured such that the first and second parking-slot trains are located on both sides of the aisle (as discussed between Wang et al. and Sakai et al. with respect to claims 12 and 13 above, and including parking slots at both sides of the aisle 212, 214, see Figs. 2A, 5, 7A, 7B).

Claim 15. The method according to claim 13, wherein: the performing of the parking-lot environment determination comprises: determining that the vehicle is located in a parking lot upon determination that the vehicle has entered the extended parking region (as discussed between Wang et al. and Sakai et al. with respect to claim 13 above). But Wang et al. fails to disclose determining that the vehicle is not located in a parking lot upon determination that the vehicle has exited from the extended parking region (as discussed regarding the information related to the extended parking region between Wang et al. and Sakai et al. with respect to claim 13 above), wherein Wang et al. teaches that the example BEV image (500) displays the surrounding environment of the vehicle (301) (the parking lot (100)), including the parking slots (102), the unavailable region (420) and the parked vehicle (204) that already resides in a parking slot (102). Further, FIGS. 2A-2C each depict at least one empty and available parking slot (205). Again, to avoid cluttering the figure, it is noted that not every parking slot (102) is labelled. As seen, the IPM often causes distortion of non-planar objects. For example, the parked vehicle (204) appears stretched as if extending to a point at infinity in FIG. 5. However, the BEV image, generally, does not strongly influence the representation of the parking slots (102), and other planar objects, such that the BEV image can be used for parking slot (102) identification and localization (see Figs. 4, 5, para [0039, 0049]). Therefore, it would have been obvious to one skilled in the art to recognize that when the BEV image identifies an empty or available parking slot, a vehicle has exited from that parking space, based on the information in the BEV images with the extended parking space.

Claims 17, 18 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. [US 2024/0227785] in view of Obara et al. [US 11,958,497].

Claim 17. Wang et al. fails to disclose when detecting, upon determination that the vehicle is located in a parking lot, sudden acceleration due to an accelerator misoperation, executing an accelerator-misoperation addressing task including at least one of (i) reducing driving power of the vehicle and (ii) notifying one or more occupants included in the vehicle of an occurrence of the accelerator misoperation. However, Wang et al. teaches that the parking slot prediction data (610) is used by the on-board driver assistance system of the vehicle (301) while actively parking or driving the vehicle into a detected available parking slot (102). Finally, the area enclosed by a parking slot (102) can be easily determined using the parking representation, whether the parking slot (102) is occluded or not (see Figs. 2A-C, 7A-C, 15, para [0102]). Obara et al. suggests that the driving assistance control device 100 is capable of controlling to reduce a driver's driving load, including a collision mitigation braking system (CMBS) that alerts the driver and/or low-speed automatic emergency braking (LSAEB), which is a function of suppressing sudden acceleration when an obstacle is in front of a vehicle or during driving in a parking slot (see Figs. 1, 6, col. 5, lines 32-67, col. 6, lines 1-13).
Therefore, it would have been obvious to one skilled in the art before the effective filing date of the invention to add or implement the alerting and the reducing of speed or driving power upon a sudden acceleration of Obara et al. to the on-board driver assistance system of the vehicle of Wang et al., for preventing a collision or impact with an object or obstruction while a vehicle is driving into a parking slot/space.

Claim 18. The method according to claim 17, wherein: the executing of the accelerator-misoperation addressing task determines whether the accelerator misoperation has been continued for a predetermined time after execution of the accelerator-misoperation addressing task, and executes, again, the accelerator-misoperation addressing task upon determination that the accelerator misoperation has been continued for the predetermined time (per the combination of Wang et al. and Obara et al. with respect to claim 17 above, wherein a function of suppressing sudden acceleration when an obstacle in front of or behind the vehicle is detected and the driver steps on the accelerator during a stopping period of the vehicle or while driving the vehicle at low speed, see Obara et al., col. 6, lines 6-13).

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. [US 2024/0227785] and Obara et al. [US 11,958,497], and further in view of Marcil [US 2007/0142169].

Claim 19. The method according to claim 17, wherein: the executing of the accelerator-misoperation addressing task determines, after the accelerator misoperation is cancelled, whether there is a new accelerator misoperation, and executes, again, the accelerator-misoperation addressing task upon determination that there is a new accelerator misoperation after the accelerator misoperation is cancelled (per the combination regarding the accelerator misoperation between Wang et al. and Obara et al. with respect to claim 17 above).

Marcil suggests a system for controlling an acceleration of a vehicle, comprising a method for neutralizing an erroneous sudden acceleration in a vehicle, the method comprising the steps of: a) evaluating a pressure applied on an acceleration control of the vehicle; b) characterizing driving conditions of the vehicle; c) recognizing one of a presence and absence of the erroneous sudden acceleration, based on the pressure on the acceleration control and driving conditions of the vehicle; and d) actuating at least one warning signal and/or reducing a power output of an engine of the vehicle when the erroneous sudden acceleration is present for a given time period (see Fig. 3, para [0013, 0014, 0038]). If a first warning has already been given during the set time period (step 86), the response evaluator 42 determines if a second warning has been given during the set time period, as indicated in step 90. If not, the response evaluator 42 selects the second warning as an appropriate response, as indicated in step 92, and sends the response signal accordingly. The response signal corresponding to the second warning can also include instructions to the controller 30 to carry out corrective measures on the vehicle 12. The second warning could be, for example, silencing or changing the warning sound, varying the blinking of the LED or lighting/blinking another LED, vibrating the acceleration control 24, etc. (see Figs. 3, 4, para [0047]). It would have been obvious to recognize that the determining of a second period is responsive to a first press of the accelerator for a first period of time and a second press of the accelerator for a second period of time, or a continued pressing of the accelerator, which provides the same result of sudden acceleration of a vehicle.
Therefore, it would have been obvious to one skilled in the art before the effective filing date of the invention to add or implement the second or new sudden-acceleration alerting and reduction of speed or driving power of Marcil to the on-board driver assistance system of the vehicle of Wang et al. and Obara et al., for preventing a collision or impact with an object or obstruction while a vehicle is driving into a parking slot/space.

Conclusion

Claims 5-7 and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Consider claim 5. The prior art fails to disclose or suggest "the executing of the learning comprises: determining whether one or two of the corner points are located outside the learning image; offsetting, upon determination that the one or two of the corner points are located outside the learning image, the one or two of the corner points into the learning image; and executing, from the learning image, the learning of the recognition rule of the at least one parking slot based on the generated annotation data for the objectified at least one parking slot after offsetting of the one or two of the corner points into the learning image".

Consider claim 7. The prior art fails to teach and/or suggest "the executing of the learning sets parking-slot empty information on the at least one parking slot to an undefined state upon determination that the calculated blocked amount of the part of the at least one parking slot is greater than or equal to a threshold amount".

Consider claim 16. The prior art fails to teach and/or suggest "does not perform the parking-lot environment determination upon determination that the vehicle has entered the parking region from another direction different from the specified direction".
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Li et al. [US 2024/0029448] discloses a parking space detection method, apparatus, device, and storage medium. The method includes: obtaining image frames of a region where a current vehicle is located; for each image frame, recognizing one or more parking spaces and parking space corners of the recognized parking space; determining, based on the parking space corners, a verified parking space; tracking the verified parking space to record in a parking space tracking list a quantity of consecutive visible frames where the verified parking space is recognized and a quantity of consecutive missing frames where the verified parking space is not recognized, and deleting the verified parking space if the quantity of the consecutive missing frames reaches a first threshold; and determining and outputting, based on the parking space corners of the verified parking space, semantic information of the verified parking space if the quantity of consecutive visible frames reaches a second threshold.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to primary examiner Van Trieu, whose telephone number is (571) 272-2972. The examiner can normally be reached Mon-Fri from 8:00 AM to 3:00 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mr. Wang Quan-Zhen, can be reached at (571) 272-3114.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov.
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/VAN T TRIEU/
Primary Examiner, Art Unit 2685
1/02/2025
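For orientation, the "objectified parking slot" recited in claim 1 (a plurality of corner points, a center point, and attribute information, which the examiner maps to Wang's corners 702-708 and center coordinate 712) can be sketched as a small data structure. The class, field names, and centroid definition below are illustrative assumptions, not taken from either specification:

```python
from dataclasses import dataclass, field


@dataclass
class ObjectifiedParkingSlot:
    """Illustrative sketch of claim 1's parking-slot representation."""
    # Four corners in Wang's index order: entrance-left, entrance-right,
    # end-left, end-right (702, 704, 706, 708 in Figs. 7A-7C).
    corners: list[tuple[float, float]]
    # Attribute information, e.g. occupancy or slot type (illustrative keys).
    attributes: dict[str, str] = field(default_factory=dict)

    @property
    def center(self) -> tuple[float, float]:
        # One plausible definition: the centroid of the corner points.
        xs, ys = zip(*self.corners)
        return (sum(xs) / len(xs), sum(ys) / len(ys))


slot = ObjectifiedParkingSlot(
    corners=[(0.0, 0.0), (2.5, 0.0), (0.0, 5.0), (2.5, 5.0)],
    attributes={"occupancy": "empty"},
)
print(slot.center)  # (1.25, 2.5)
```

The centroid is just one way to derive a center point from four corners; Wang's center coordinate 712 may be defined differently.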

Prosecution Timeline

Sep 27, 2024
Application Filed
Jan 02, 2026
Non-Final Rejection — §102, §103
Mar 12, 2026
Interview Requested
Mar 25, 2026
Examiner Interview Summary
Mar 25, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599342
PATIENT REQUEST SYSTEM HAVING PATIENT FALLS RISK NOTIFICATION AND CAREGIVER NOTES ACCESS
2y 5m to grant • Granted Apr 14, 2026
Patent 12599522
PATIENT SUPPORT APPARATUSES WITH WIRELESS HEADWALL COMMUNICATION
2y 5m to grant • Granted Apr 14, 2026
Patent 12600320
VEHICLE ANTI-THEFT DEVICE AND METHOD THEREFOR
2y 5m to grant • Granted Apr 14, 2026
Patent 12598449
SYNCHRONIZATION BETWEEN DEVICES IN EMERGENCY VEHICLES
2y 5m to grant • Granted Apr 07, 2026
Patent 12590772
Method and System for Sensing, Monitoring, Logging and Transmitting Events That Is Assembled on a Firearm
2y 5m to grant • Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview: 98% (+13.0%)
Median Time to Grant: 2y 2m
PTA Risk: Low
Based on 1076 resolved cases by this examiner. Grant probability derived from career allow rate.
