Prosecution Insights
Last updated: April 19, 2026
Application No. 18/595,978

SYSTEM AND METHODS FOR AUTOMATICALLY DETECTING DOUBLE PARKING VIOLATIONS

Status: Non-Final Office Action (§102, §103, §DP)
Filed: Mar 05, 2024
Examiner: FITZPATRICK, ATIBA O
Art Unit: 2677
Tech Center: 2600 (Communications)
Assignee: Hayden AI Technologies Inc.
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 88% (775 granted / 881 resolved), +26.0% vs TC avg (above average)
Interview Lift: +4.9% (minimal, roughly +5%), based on resolved cases with interviews
Typical Timeline: 2y 8m average prosecution; 27 applications currently pending
Career History: 908 total applications across all art units
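For reference, the headline figures are consistent with simple arithmetic on the career data: 775 granted out of 881 resolved is approximately 87.9%, shown as 88%, and adding the +4.9 point interview lift gives approximately 92.8%, shown as 93%. The additive combination is an assumption about how this dashboard reports the interview-adjusted figure, not a documented formula.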

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§103: 34.9% (-5.1% vs TC avg)
§102: 22.8% (-17.2% vs TC avg)
§112: 20.1% (-19.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 881 resolved cases.

Office Action

Grounds of rejection: §102, §103, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1, 7-9, 11, 13, 14, 16, 22, 24, and 28 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 20220147745 A1 (Ghadiok).

As per claim 1, Ghadiok teaches a method of automatically detecting a double parking violation, comprising:

determining, using one or more processors of an edge device, a location of a road edge of a roadway from one or more video frames of a video captured by one or more video image sensors of the edge device; determining a layout of one or more lanes of the roadway based on the road edge determined by the edge device, wherein at least one of the one or more lanes is a no-parking lane (Ghadiok: Fig. 3: camera, edge device, 208, 102; Fig. 7: mainly 704; Figs. 5A-5E; paras 4, 63: no parking zones; paras 8, 9, 11, 13-16, 24, 27-28, 33, 55-65, and many other paragraphs: "edge device"; para 63: "restricted road area 114 can be marked by… road or curb"; "[0066] As shown in FIG. 1A, the edge device 102 can capture a video 120 of the vehicle 112 and at least part of the restricted road area 114 using one or more video image sensors 208 (see, e.g., FIGS. 5A-5E) of the edge device 102."; para 17: "a plurality of lanes of a roadway detected from the one or more video frames in a plurality of polygons"; para 64: "The traffic violation can also include illegal double-parking"; para 72: camera, edge device; paras 96-99, 146, 169, 175, 177-182, 184, 190, 192, 207: camera; para 116: "The semantic labels can be class labels such as person, road, tree, building, vehicle, curb, sidewalk, traffic lights, traffic sign, curbside city assets such as fire hydrants, parking meter, lane line, landmarks, curbside side attributes (color/markings), etc."; para 128: "objects such as lane lines, lane dividers, crosswalks, traffic lights, no parking signs or other types of street signs, fire hydrants, parking meters, curbs, trees or other types of plants, or a combination thereof are identified in the semantic annotated maps 318 and their geolocations and any rules or regulations concerning such objects are also stored as part of the semantic annotated maps 318."; para 129: "detection of this roadway"; para 235: "The restricted lane 114 can be marked by certain insignia, text, nearby signage, road or curb coloration"; para 253: "All of the lanes detected can then be bound using polygons 1008 to indicate the boundaries of the lanes."; para 256: "FIG. 10 also illustrates that at least one of the polygons 1008 can be a polygon 1008 bounding a lane-of-interest (LOI), also referred to as a LOI polygon 1012. In some embodiments, the LOI can be a restricted lane 114");

bounding the no-parking lane using a lane bounding polygon (Ghadiok: paras 9, 10, 11, 19, 21, 22, 25, 26, 121, 122, 191, 192, 216, 218, 220, 221, 224, 227, 228, 229, 233, 234, 236, 253-256, 261-263, 267-269, 271: lane polygon; paras 17, 25: "a plurality of lanes of a roadway detected from the one or more video frames in a plurality of polygons"; para 121: "parked illegally in a restricted road area 114… restricted road area 114 captured in the video frames with a polygon"; Figs. 10, 11A, 11B, 12A-12F);

bounding a vehicle detected from the one or more video frames using a vehicle bounding polygon (Ghadiok: abstract; paras 8-12, 17, 19, 21-26, 29, 30, 32, 121, 122, 166, 191, 192, 197, 203-206, 227-229, 223, 224, 236, 237, 263-265, 268, 269, 270, 272, 276-278, 281, 282: vehicle bounding box; Figs. 6, 8, 12A-12F); and

detecting a potential double parking violation based in part on an overlap of at least part of the vehicle bounding polygon with at least part of the lane bounding polygon (Ghadiok: para 64: "The traffic violation can also include illegal double-parking"; abstract: "The method can further comprise detecting a potential traffic violation based in part on an overlap of at least part of the vehicle bounding box and at least part of one of the polygons."; para 9: "The method can further comprise detecting, using the one or more processors, a potential traffic violation based in part on an overlap of at least part of the vehicle bounding box and at least part of the LOI polygon."; para 10: "determining a pixel intensity value associated with each pixel within the lower portion of the vehicle bounding box, calculating a lane occupancy score by taking an average of the pixel intensity values of all pixels within the lower portion of the vehicle bounding box, and detecting the potential traffic violation when the lane occupancy score exceeds a predetermined threshold value. The pixel intensity value can represent a degree of overlap between the LOI polygon and the lower portion of the vehicle bounding box."; paras 17, 25: "detect that a potential traffic violation has occurred based in part on an overlap of at least part of the vehicle bounding box and at least part of one of the polygons."; para 19: "The device can detect that a potential traffic violation has occurred based in part on an overlap of at least part of the vehicle bounding box and at least part of the LOI polygon."; para 26: "The potential traffic violation can be detected based in part on an overlap of at least part of the vehicle bounding box and at least part of the LOI polygon."; para 121: "the event detection engine 300 can detect at least some overlap between the vehicle bounding box and the polygon when the vehicle is captured driving or parked in the restricted road area 114."; para 122: "detect that a potential traffic violation has occurred based on a detected overlap between the vehicle bounding box and the polygon."; para 236: "A third worker 702C can then be used to detect a potential traffic violation based on a degree of overlap between at least part of the vehicle bounding box 800 and at least part of the LOI polygon 1012 representing the restricted lane 114."; para 269: "A lower bounding box 1202 representing a lower portion of the vehicle bounding box 800 has been overlaid on the masked LOI polygon to represent the overlap between the two bounded regions."; para 276: "FIGS. 12C and 12D illustrate another embodiment of a method of calculating a lane occupancy score 1200 using a baseline segment 1210 along a lower side 1212 of the vehicle bounding box 800.").

As per claim 7, Ghadiok teaches the method of claim 1, further comprising passing the one or more video frames to one or more deep learning models running on the (only one of the following listed items is required) edge device or a server communicatively coupled to the edge device (the following can be interpreted as a subsequent purpose of passing the video to the deep learning) to determine a context surrounding the double parking violation, and wherein the context surrounding the double parking violation is used by the edge device or the server to detect the double parking violation (Ghadiok: See arguments and citations offered in rejecting claim 1 above: traffic violation is double parking; abstract: "The vehicle can be detected and bounded using a first convolutional neural network… The plurality of lanes can be detected and bounded using multiple heads of a multi-headed second convolutional neural network."; para 18: "the vehicle can be detected and bounded using a first convolutional neural network and the plurality of lanes can be detected and bounded using multiple heads of a multi-headed second convolutional neural network separate from the first convolutional neural network."; para 70: "a plurality of deep learning models (see, e.g., the first convolutional neural network 314 and the second convolutional neural network 315 in FIG. 3) running on the edge device 102."; para 129: "The videos can first be processed locally on the edge device 102 (using the computer vision tools and deep learning models previously discussed) and the outputs (e.g., the detected objects, semantic labels, and location data) from such detection can be transmitted to the knowledge engine 306"; para 131: "The knowledge engine 306 can also store all event data or files included as part of any evidence packages 316 received from the edge devices 102 concerning potential traffic violations. The knowledge engine 306 can then pass certain data or information from the evidence package 316 to the reasoning engine 308 of the server 104."; para 132: "The reasoning engine 308 can comprise a logic reasoning module 324, a context reasoning module 326, and a severity reasoning module 328. The context reasoning module 326 can further comprise a game engine 330 running on the server 104."; para 139: "The context reasoning module 326 can apply certain rules to the game engine simulation to determine if a potential traffic violation is indeed a traffic violation… the context reasoning module 326 can use the game engine simulation to determine that certain potential traffic violations should be considered false positives."; para 140: "If the context reasoning module 326 determines that no mitigating circumstances are detected or discovered, the data and videos included as part of the evidence package 316 can be passed to the severity reasoning module 328. The severity reasoning module 328 can make the final determination as to whether a traffic violation has indeed occurred by comparing data and videos received from multiple edge devices 102."; Fig. 3; para 245: "the second convolutional neural network 315 can be trained to detect lane markings 1004 (see, e.g., FIGS. 10, 11A, and 11B). For example, the lane markings 1004 can comprise lane lines, text markings, markings indicating a crosswalk, markings indicating turn lanes, dividing line markings,"; paras 247, 248, 250, 255, 257, 260, 277, 285: "second convolutional neural network"; para 282: "the 3D bounding box 1224 can be calculated from the vehicle bounding box 800 generated by the first convolutional neural network 314. In these embodiments, the 3D bounding box 1224 can be calculated by first estimating the vehicle's size and orientation using certain regression techniques and/or using a convolutional neural network and then constraining and bounding the vehicle using projective geometry. In certain embodiment, the 3D bounding box 1224 can be obtained by passing the video frame to a deep learning model trained to bound objects (e.g., vehicles) in 3D bounding boxes."; deep learning model output is used to determine context surrounding the potential double parking violation, which is used to confirm as double parking violation).

As per claim 8, Ghadiok teaches the method of claim 7, wherein the one or more deep learning models comprise at least one of (both of the following listed items are required because of the "and" conjunction) an object detection deep learning model and a lane segmentation deep learning model (Ghadiok: See arguments and citations offered in rejecting claim 7 above; Fig. 7: mainly 708, 716).
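The cited Ghadiok passages above describe the core overlap test: rasterize the lane-of-interest polygon into a mask, look at the lower portion of the vehicle bounding box, and average the mask intensities there to get a lane occupancy score. The following is a minimal sketch of that idea, assuming a binary (0/1) LOI mask and an integer-pixel vehicle box; the function names, the 25% lower-portion fraction, and the 0.5 threshold are illustrative assumptions, not values taken from the application or from Ghadiok.

```python
import numpy as np

def lane_occupancy_score(loi_mask: np.ndarray, vehicle_box, lower_fraction: float = 0.25) -> float:
    """Average LOI-mask intensity inside the lower portion of the vehicle box.

    loi_mask: HxW array with 1 inside the lane-of-interest polygon, 0 outside.
    vehicle_box: (x1, y1, x2, y2) in pixel coordinates, y increasing downward.
    """
    x1, y1, x2, y2 = map(int, vehicle_box)
    lower_y1 = int(y2 - lower_fraction * (y2 - y1))   # top of the "lower portion"
    region = loi_mask[lower_y1:y2, x1:x2].astype(float)
    return float(region.mean()) if region.size else 0.0

def is_potential_double_parking(loi_mask, vehicle_box, threshold: float = 0.5) -> bool:
    """Flag a potential violation when the occupancy score exceeds a threshold."""
    return lane_occupancy_score(loi_mask, vehicle_box) > threshold
```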
As per claim 9, Ghadiok teaches the method of claim 7, wherein at least one of the deep learning models is configured to output a multiclass classification concerning a feature associated with the context (Ghadiok: See arguments and citations offered in rejecting claim 7 above: multiple heads, multi-headed; the multi-headed CNN outputs detections of multiple different types of lanes and attributes of the scene, which are used in the context reasoning module and game engine therein to determine the context surrounding the traffic violation (i.e., double parking); para 139: "For example, the context reasoning module 326 can determine a causation of the potential traffic violation based on the game engine simulation. As a more specific example, the context reasoning module 326 can determine that the vehicle 112 stopped only temporarily in the restricted road area 114 to allow an emergency vehicle to pass by. Rules can be set by the context reasoning module 326 to exclude certain detected violations when the game engine simulation shows that such violations were caused by one or more mitigating circumstances (e.g., an emergency vehicle passing by or another vehicle suddenly swerving into a lane). In this manner, the context reasoning module 326 can use the game engine simulation to determine that certain potential traffic violations should be considered false positives."; para 140: "If the context reasoning module 326 determines that no mitigating circumstances are detected or discovered, the data and videos included as part of the evidence package 316 can be passed to the severity reasoning module 328. The severity reasoning module 328 can make the final determination as to whether a traffic violation has indeed occurred by comparing data and videos received from multiple edge devices 102."; Figs. 9 and 10).

As per claim 11, Ghadiok teaches the method of claim 9, wherein the feature is a traffic condition surrounding the vehicle (Ghadiok: See arguments and citations offered in rejecting claim 9 above).

As per claim 13, Ghadiok teaches the method of claim 1, wherein the one or more video frames are captured by an event camera of the edge device, wherein at least one of the video frames is passed to a license plate recognition deep learning model running on the edge device to automatically recognize a license plate of the vehicle (Ghadiok: See arguments and citations offered in rejecting claim 1 above; para 118: "the machine learning model trained to recognize license plate numbers from such video frames or images."; para 119: "a deep learning network or a convolutional neural network specifically trained to recognize license plate numbers from video images. In some embodiments, the machine learning model can be or comprise the OpenALPR™ license plate recognition model. The license plate recognition engine 304 can use the machine learning model to recognize alphanumeric strings representing license plate numbers from video images comprising license plates."; Fig. 3: mainly 304; edge device has license plate recognition deep learning model).
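Claims 9 and 11 recite a deep learning model that outputs a multiclass classification of a context feature, such as the traffic condition around the vehicle. As a purely illustrative sketch (the class labels and head structure below are hypothetical and appear in neither the claims nor the cited art), one such classification head could be summarized as:

```python
import numpy as np

# Hypothetical label set for one context head (traffic condition around the vehicle).
TRAFFIC_CLASSES = ("light", "moderate", "heavy", "stopped_queue")

def classify_context(logits: np.ndarray):
    """Convert one head's raw logits into a multiclass prediction (label, probability)."""
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    idx = int(probs.argmax())
    return TRAFFIC_CLASSES[idx], float(probs[idx])

# Example: classify_context(np.array([0.2, 1.5, 3.1, -0.4])) returns ("heavy", ~0.78).
```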
As per claim 14, Ghadiok teaches the method of claim 1, wherein the one or more video frames are captured by an event camera of the edge device coupled to a carrier vehicle while the carrier vehicle is in motion (Ghadiok: See arguments and citations offered in rejecting claim 1 above; para 8: "The video can be captured by one or more video image sensors of the edge device."; para 11: "an event detection engine on the edge device."; para 20: "The device can be coupled to a carrier vehicle. The video can be captured using the one or more video image sensors of the device while the carrier vehicle is in motion."; para 27: "The video can be captured by one or more video image sensors of an edge device. In some embodiments, the edge device can be coupled to a carrier vehicle. The video can be captured using the one or more video image sensors of the edge device while the carrier vehicle is in motion."; "[0062] When properly coupled or secured to the windshield, window, or dashboard/deck of the carrier vehicle 110 or secured to a handrail, handlebar, or mount/body of the carrier vehicle 110, the edge device 102 can use its video image sensors 208 (see, e.g., FIG. 5A-5E) to capture videos of an external environment within a field view of the video image sensors 208. Each of the edge devices 102 can then process and analyze video frames from such videos using certain computer vision tools from a computer vision library and a plurality of deep learning models to detect whether a potential traffic violation has occurred."; Fig. 1A).

As per claim(s) 16 and 22, arguments made in rejecting claim(s) 1 and 7 are analogous, respectively. Ghadiok also teaches a device for automatically detecting a double parking violation, comprising one or more processors programmed to (Ghadiok: See arguments and citations offered in rejecting claim 1 above; Figs. 1A, 2A, 2B, 3, 4).

As per claim(s) 24 and 28, arguments made in rejecting claim(s) 1 and 7 are analogous, respectively. Ghadiok also teaches one or more non-transitory computer-readable media comprising instructions stored thereon, that when executed by one or more processors, cause the one or more processors to perform operations comprising (Ghadiok: See arguments and citations offered in rejecting claim 1 above; Figs. 1A, 2A, 2B, 3, 4).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 2, 10, 12, 17, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Ghadiok as applied to claims 1, 9, 16, 24 above, and further in view of US 20190250626 A1 (Ghafarianzadeh).

As per claim 2, Ghadiok teaches the method of claim 1. Ghadiok does not teach further comprising: determining whether the vehicle is (only one of the following listed items is required) moving or static when captured by the video; and detecting the potential double parking violation only in response to the vehicle being determined to be static when captured by the video. Ghafarianzadeh teaches these limitations (Ghafarianzadeh: [0001] Stationary objects, such as vehicles on a road, may interfere with autonomous operation of a vehicle. For example, a stationary vehicle in front of the autonomous vehicle may be double-parked; [0012] As above, blocking objects (including vehicles) include objects which impede an autonomous vehicle from proceeding along a planned route or path. For example, in many urban environments, double parking is a common practice. Such double-parked vehicles need to be detected as blocking vehicles to be treated separately from stationary vehicles. Particularly, while the autonomous vehicle may be instructed to wait for a stopped vehicle to move, the autonomous vehicle may be instructed to navigate around such a double-parked vehicle. General rules, such as treating all stationary vehicles at green lights as blocking, are often inaccurate and/or insufficient for operating the autonomous vehicle safely and/or in a manner that more closely mimics human operation of a vehicle. This disclosure is generally directed to techniques (e.g., machines, programs, processes) for determining whether a stationary vehicle is a blocking vehicle; [0016] As used herein, a blocking vehicle is a stationary vehicle on a drivable surface that impedes other vehicles from making progress in some manner. Not all stationary vehicles on drivable surfaces are blocking vehicles. For example, a non-blocking stationary vehicle may be a vehicle that has paused its progress on a drivable road surface for a traffic light signaling a red light, to yield to another vehicle and/or to wait for another vehicle to make progress, for an object that crosses in front of the vehicle, etc. In contrast, a blocking vehicle may be a double-parked vehicle; [0021] Returning to the example scenario in FIG. 1A, the autonomous vehicle 102 may approach a junction 104 that includes a traffic light 106 and may encounter an object that it classifies as a stationary vehicle (e.g., vehicle 108), which is also indicated with a question mark in the illustration. The technical difficulty that arises is when the perception engine does not have sufficient information to know whether the stationary vehicle is merely paused (i.e., a non-blocking stationary vehicle), being itself obstructed by another object and/or legal constraint (e.g., a stop light), or whether the stationary vehicle is actually a blocking vehicle. As used herein, a blocking vehicle is a vehicle on a drivable road surface that is stopped or moving at a velocity less than a predetermined threshold velocity that impedes progress of other vehicles. For example, a blocking vehicle might be a double-parked vehicle; [0030] At operation 206, the example process 200 may include detecting a stationary vehicle 208 from the sensor data. This may include detecting an object, classifying that object as a vehicle, and determining that a velocity (or speed) of the vehicle is less than a threshold velocity (or speed). In some examples, the threshold may be a predetermined threshold (e.g., the vehicle is stopped); [0035] At operation 212, the example process 200 may include determining, using the BV ML model 214 and from the feature values, a probability that the stationary vehicle 208 is a blocking vehicle. For instance, the feature values 210 may be input into the BV ML model 214 and the BV ML model 214 may, in response and according to the configuration of the BV ML model 214, output a probability that the stationary vehicle 208 is a blocking vehicle; [0053] In an additional or alternate example, the autonomous vehicle 302 may detect all stationary vehicles (i.e. those vehicles not solely those constrained to a same lane) within a range of the sensors of the autonomous vehicle 302 or within a predetermined threshold distance of the autonomous vehicle 302 (e.g., 50 meters, 100 meters). In such an example, information regarding a stationary vehicle in other lanes may be used in planning how other vehicles may react (e.g. planning a route into the lane of the autonomous vehicle and around a double-parked vehicle). A feature value may reflect the location and/or other feature values associated with these other detected stationary vehicles; detecting potential double parking violation (i.e., blocking vehicle) only when vehicle is determined to be static (i.e., stationary)). Thus, it would have been obvious for one of ordinary skill in the art, prior to filing, to implement the teachings of Ghafarianzadeh into Ghadiok since both Ghadiok and Ghafarianzadeh suggest a practical solution and field of endeavor of detecting double parking violations in general and Ghafarianzadeh additionally provides teachings that can be incorporated into Ghadiok in that the double parking violation of a blocking vehicle is determined only after the vehicle is first determined to be stationary as to "For example, in many urban environments, double parking is a common practice. Such double-parked vehicles need to be detected as blocking vehicles to be treated separately from stationary vehicles. Particularly, while the autonomous vehicle may be instructed to wait for a stopped vehicle to move, the autonomous vehicle may be instructed to navigate around such a double-parked vehicle. General rules, such as treating all stationary vehicles at green lights as blocking, are often inaccurate and/or insufficient for operating the autonomous vehicle safely and/or in a manner that more closely mimics human operation of a vehicle." (Ghafarianzadeh: para 12). The teachings of Ghafarianzadeh can be incorporated into Ghadiok in that the double parking violation of a blocking vehicle is determined only after the vehicle is first determined to be stationary. Furthermore, one of ordinary skill in the art could have combined the elements as claimed by known methods and, in combination, each component functions the same as it does separately. One of ordinary skill in the art would have recognized that the results of the combination would be predictable.

As per claim 10, Ghadiok teaches the method of claim 9. Ghadiok does not teach the feature is brake light status of the vehicle. Ghafarianzadeh teaches these limitations (Ghafarianzadeh: See arguments and citations offered in rejecting claim 2 above; para 13: "… determining feature values from the sensor data brake light conditions"; Fig. 2: mainly 208-212: "determine feature values", "brake lights on", "determine, from the feature values, a probability that the SV is a blocking vehicle (BV)"; para 44: "Responsive to detecting that the vehicle 304 is a stationary vehicle, the autonomous vehicle 302 may determine, by the perception engine, feature values from the sensor data. For example, the autonomous vehicle 302 may determine a condition of lights of the stationary vehicle 304 (e.g., hazard lights on/off, brake lights on/off) as shown at 310"; Table 1). Thus, it would have been obvious for one of ordinary skill in the art, prior to filing, to implement the teachings of Ghafarianzadeh into Ghadiok since both Ghadiok and Ghafarianzadeh suggest a practical solution and field of endeavor of detecting double parking violations in general and Ghafarianzadeh additionally provides teachings that can be incorporated into Ghadiok in that brake light features are used in confirming that the stationary vehicle is a blocking vehicle (i.e., double parking) as to "For example, in many urban environments, double parking is a common practice. Such double-parked vehicles need to be detected as blocking vehicles to be treated separately from stationary vehicles. Particularly, while the autonomous vehicle may be instructed to wait for a stopped vehicle to move, the autonomous vehicle may be instructed to navigate around such a double-parked vehicle. General rules, such as treating all stationary vehicles at green lights as blocking, are often inaccurate and/or insufficient for operating the autonomous vehicle safely and/or in a manner that more closely mimics human operation of a vehicle." (Ghafarianzadeh: para 12). The teachings of Ghafarianzadeh can be incorporated into Ghadiok in that brake light features are used in confirming that the stationary vehicle is a blocking vehicle (i.e., double parking). Furthermore, one of ordinary skill in the art could have combined the elements as claimed by known methods and, in combination, each component functions the same as it does separately. One of ordinary skill in the art would have recognized that the results of the combination would be predictable.

As per claim 12, Ghadiok teaches the method of claim 9. Ghadiok does not teach the feature is a roadway intersection status. Ghafarianzadeh teaches these limitations (Ghafarianzadeh: See arguments and citations offered in rejecting claim 2 above; para 13: "determining feature values from the sensor data (e.g., values indicating features such as a distance to the next junction in the roadway"; para 44: "Responsive to detecting that the vehicle 304 is a stationary vehicle, the autonomous vehicle 302 may determine, by the perception engine, feature values from the sensor data. For example, the autonomous vehicle 302 may determine… a distance 312 of the stationary vehicle 304 (and/or the autonomous vehicle) from the junction 306"; para 59: "the perception engine may determine one or more feature values (e.g., 15 meters from stationary vehicle to junction"; paras 81, 104, 110, 116: a distance to a junction; Table 1). Thus, it would have been obvious for one of ordinary skill in the art, prior to filing, to implement the teachings of Ghafarianzadeh into Ghadiok since both Ghadiok and Ghafarianzadeh suggest a practical solution and field of endeavor of detecting double parking violations in general and Ghafarianzadeh additionally provides teachings that can be incorporated into Ghadiok in that junction distance features are used in confirming that the stationary vehicle is a blocking vehicle (i.e., double parking) as to "For example, in many urban environments, double parking is a common practice. Such double-parked vehicles need to be detected as blocking vehicles to be treated separately from stationary vehicles. Particularly, while the autonomous vehicle may be instructed to wait for a stopped vehicle to move, the autonomous vehicle may be instructed to navigate around such a double-parked vehicle. General rules, such as treating all stationary vehicles at green lights as blocking, are often inaccurate and/or insufficient for operating the autonomous vehicle safely and/or in a manner that more closely mimics human operation of a vehicle." (Ghafarianzadeh: para 12). The teachings of Ghafarianzadeh can be incorporated into Ghadiok in that junction distance features are used in confirming that the stationary vehicle is a blocking vehicle (i.e., double parking). Furthermore, one of ordinary skill in the art could have combined the elements as claimed by known methods and, in combination, each component functions the same as it does separately. One of ordinary skill in the art would have recognized that the results of the combination would be predictable.

As per claim(s) 17, arguments made in rejecting claim(s) 2 are analogous, respectively. Ghadiok also teaches a device for automatically detecting a double parking violation, comprising one or more processors programmed to (Ghadiok: See arguments and citations offered in rejecting claim 1 above; Figs. 1A, 2A, 2B, 3, 4).

As per claim(s) 25, arguments made in rejecting claim(s) 2 are analogous, respectively. Ghadiok also teaches one or more non-transitory computer-readable media comprising instructions stored thereon, that when executed by one or more processors, cause the one or more processors to perform operations comprising (Ghadiok: See arguments and citations offered in rejecting claim 1 above; Figs. 1A, 2A, 2B, 3, 4).

Claim(s) 3, 18, and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Ghadiok as applied to claims 1, 16, and 24 above, and further in view of US 20210325190 A1 (Li).

As per claim 3, Ghadiok teaches the method of claim 1. Ghadiok does not teach further comprising determining the road edge by fitting a line representing the road edge to a plurality of road edge points using a random sample consensus algorithm. Li teaches these limitations (Li: para 6: "the road divider is an edge of a road"; para 121: "Straight line fitting may be performed on the plurality of collection points using a RANSAC method, to constitute the fitted line of the road divider. The fitted lines constitute a local map of the road on which the terminal device is actually located. As shown in FIG. 7, a fitted line in the local map may be used to indicate a shape of the road divider. For example, a fitted line 306 is used to indicate the road edge 301, and may be used to calculate the feature of the road on which the terminal device is located."; determine road edge by fitting a line to road edge points using a random sample consensus algorithm). Thus, it would have been obvious for one of ordinary skill in the art, prior to filing, to implement the teachings of Li into Ghadiok since both Ghadiok and Li suggest a practical solution and field of endeavor of detecting a road edge from the perspective of a moving vehicle in general and Li additionally provides teachings that can be incorporated into Ghadiok in that the road edge is determined via fitting using a RANSAC algorithm. One of ordinary skill in the art would have recognized the advantage of fitting using a RANSAC algorithm, namely its robustness to outliers. The teachings of Li can be incorporated into Ghadiok in that the road edge is determined via fitting using a RANSAC algorithm. Furthermore, one of ordinary skill in the art could have combined the elements as claimed by known methods and, in combination, each component functions the same as it does separately. One of ordinary skill in the art would have recognized that the results of the combination would be predictable.

As per claim(s) 18, arguments made in rejecting claim(s) 3 are analogous, respectively. Ghadiok also teaches a device for automatically detecting a double parking violation, comprising one or more processors programmed to (Ghadiok: See arguments and citations offered in rejecting claim 1 above; Figs. 1A, 2A, 2B, 3, 4).

As per claim(s) 26, arguments made in rejecting claim(s) 3 are analogous, respectively. Ghadiok also teaches one or more non-transitory computer-readable media comprising instructions stored thereon, that when executed by one or more processors, cause the one or more processors to perform operations comprising (Ghadiok: See arguments and citations offered in rejecting claim 1 above; Figs. 1A, 2A, 2B, 3, 4).

Allowable Subject Matter

Claims 4-6, 15, 19-21, 23, 27, 29, and 30 would be allowable if rewritten to include all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

Limitations pertaining to "the plurality of road edge points are determined by selecting a subset of points along a mask or heatmap representing the road edge but not all points along the mask or heatmap", in conjunction with other limitations present in claims 4, 5, 19, 20, and 27 and associated independent and intervening claim(s), distinguish over the prior art.

Limitations pertaining to "the line fitted to the plurality of road edge points is parameterized by a slope and an intercept, wherein each of the slope and the intercept is calculated using a sliding window or moving average algorithm such that the slope is an average slope value and the intercept is an average intercept value calculated from one or more video frames captured prior in time", in conjunction with other limitations present in claims 6, 21, and 29 and associated independent and intervening claim(s), distinguish over the prior art.
Limitations pertaining to "determining whether the vehicle is static or moving based on a standard deviation of the transformed coordinates in both a longitudinal direction and a latitudinal direction and a cross correlation of the transformed coordinates along the longitudinal direction and the latitudinal direction", in conjunction with other limitations present in claims 15, 23, and 30 and associated independent and intervening claim(s), distinguish over the prior art.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Atiba Fitzpatrick whose telephone number is (571) 270-5255. The examiner can normally be reached on M-F 10:00am-6pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached on (571) 270-5183. The fax phone number for Atiba Fitzpatrick is (571) 270-6255.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Atiba Fitzpatrick
/ATIBA O FITZPATRICK/
Primary Examiner, Art Unit 2677
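As background on the techniques at issue in the claim 3 rejection (RANSAC road-edge line fitting, per Li) and in the allowable subject matter for claims 6, 21, and 29 (sliding-window averaging of the fitted slope and intercept), the sketch below illustrates both ideas together. It assumes 2D road-edge points in image coordinates and a mostly non-vertical edge; the parameter values, names, and window size are illustrative assumptions and are not taken from the application or the cited references.

```python
import numpy as np

def ransac_fit_line(points: np.ndarray, iters: int = 100, inlier_tol: float = 2.0, rng=None):
    """Fit y = slope*x + intercept to road-edge points with a basic RANSAC loop:
    sample two points, count inliers, then least-squares refit on the best inlier set."""
    rng = rng or np.random.default_rng(0)
    best_inliers = None
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:                      # skip degenerate (vertical) samples
            continue
        slope = (y2 - y1) / (x2 - x1)
        intercept = y1 - slope * x1
        resid = np.abs(points[:, 1] - (slope * points[:, 0] + intercept))
        inliers = resid < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    slope, intercept = np.polyfit(points[best_inliers, 0], points[best_inliers, 1], deg=1)
    return float(slope), float(intercept)

class SmoothedRoadEdge:
    """Moving average of (slope, intercept) over the last N frames, echoing the
    sliding-window idea the examiner indicated as allowable subject matter."""
    def __init__(self, window: int = 10):
        self.window = window
        self.history = []

    def update(self, slope: float, intercept: float):
        self.history.append((slope, intercept))
        self.history = self.history[-self.window:]
        arr = np.asarray(self.history)
        return float(arr[:, 0].mean()), float(arr[:, 1].mean())
```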

Prosecution Timeline

Mar 05, 2024: Application Filed
Jan 10, 2026: Non-Final Rejection under §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602854: SYSTEM AND METHOD FOR MEDICAL IMAGING (2y 5m to grant; granted Apr 14, 2026)
Patent 12586195: OPHTHALMIC INFORMATION PROCESSING APPARATUS, OPHTHALMIC APPARATUS, OPHTHALMIC INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM (2y 5m to grant; granted Mar 24, 2026)
Patent 12579649: RADIATION IMAGE PROCESSING APPARATUS AND OPERATION METHOD THEREOF (2y 5m to grant; granted Mar 17, 2026)
Patent 12555237: CLOSEUP IMAGE LINKING (2y 5m to grant; granted Feb 17, 2026)
Patent 12548221: SYSTEMS AND METHODS FOR AUTOMATIC QUALITY CONTROL OF IMAGE RECONSTRUCTION (2y 5m to grant; granted Feb 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 93% (+4.9%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 881 resolved cases by this examiner. Grant probability derived from career allow rate.
