Prosecution Insights
Last updated: April 19, 2026
Application No. 18/936,116

Railroad Light Detection

Status: Non-Final OA (§103)
Filed: Nov 04, 2024
Examiner: KUNTZ, JEWEL A
Art Unit: 3666
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Waymo LLC
OA Round: 1 (Non-Final)
Grant Probability: 72% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 12m
Grant Probability with Interview: 80%

Examiner Intelligence

Career Allow Rate: 72% (above average; 49 granted / 68 resolved; +20.1% vs TC avg)
Interview Lift: +7.9% (moderate; resolved cases with interview vs. without)
Typical Timeline: 2y 12m average prosecution; 35 applications currently pending
Career History: 103 total applications across all art units

Statute-Specific Performance

§101: 29.0% (-11.0% vs TC avg)
§103: 52.0% (+12.0% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)

Allowance rates vs. the estimated Tech Center average, based on career data from 68 resolved cases.
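The four deltas above are all measured against the same Tech Center baseline. As a quick arithmetic check (illustrative Python, not part of the report), subtracting each reported delta from the examiner's rate recovers a common Tech Center average of 40.0%:

```python
# Illustrative arithmetic only: recover the implied Tech Center average
# from each examiner allowance rate and its reported delta vs. the TC.
rates = {               # statute -> (examiner allow rate %, delta vs TC avg %)
    "101": (29.0, -11.0),
    "103": (52.0, +12.0),
    "102": (11.8, -28.2),
    "112": (6.6, -33.4),
}

for statute, (rate, delta) in rates.items():
    tc_avg = rate - delta  # delta = examiner rate - TC average
    print(f"§{statute}: examiner {rate:.1f}% vs implied TC average {tc_avg:.1f}%")
```

Every statute implies the same 40.0% baseline, which is consistent with a single Tech Center average estimate underlying all four comparisons.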

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement filed 11/18/2024 fails to comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document; each non-patent literature publication or that portion which caused it to be listed; and all other information or that portion which caused it to be listed. It has been placed in the application file, but the information referred to therein has not been considered.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-7 and 11-19 are rejected under 35 U.S.C. 103 as being unpatentable over Schofield (US 20140362221 A1) in view of SHALEV-SHWARTZ (US 20230347877 A1).

Regarding Claim 1, Schofield teaches A computing device comprising: one or more processors, wherein the one or more processors are configured to: receive images of a pair of lights of a railroad light assembly (See at least paragraph [0078], “For example, the imaging system may be operable to detect and recognize a railroad crossing sign and to further recognize that the railroad crossing sign is activated (such as by distinguishing the flashing lights characteristic of a railroad crossing signal) due to an approaching train.”); based on one or more illumination patterns of the pair of lights of the railroad light assembly in the received images (See at least paragraph [0078], quoted above.); indicates…that the railroad light assembly is active (See at least paragraph [0078], quoted above. The system determines whether a railroad crossing sign is activated by distinguishing flashing light characteristics of the railroad crossing signal.).
Schofield does not explicitly disclose, however, SHALEV-SHWARTZ, in the same field of endeavor, teaches modify… (See at least paragraph [0151], “At step 556, processing unit 110 may perform multi-frame analysis by, for example, tracking the detected segments across consecutive image frames and accumulating frame-by-frame data associated with detected segments. As processing unit 110 performs multi-frame analysis, the set of measurements constructed at step 554 may become more reliable and associated with an increasingly higher confidence level. Thus, by performing steps 550-556, processing unit 110 may identify road marks appearing within the set of captured images and derive lane geometry information. Based on the identification and the derived information, processing unit 110 may cause one or more navigational responses in vehicle 200, as described in connection with FIG. 5A, above.” The system modifies a confidence level associated with image-based measurements, wherein accumulated measurements are associated with an increasingly higher confidence level.); and control a vehicle to autonomously take an action based on the modified confidence level (See at least paragraph [0260], “Other hard constraints may also be employed. For example, a maximum deceleration rate of the host vehicle may be employed in at least some cases. Such a maximum deceleration rate may be determined based on a detected distance to a target vehicle following the host vehicle (e.g., using images collected from a rearward facing camera). The hard constraints may include a mandatory stop at a sensed crosswalk or a railroad crossing or other applicable constraints” and paragraph [0261], “Where analysis of a scene in an environment of the host vehicle indicates that one or more predefined navigational constraints may be implicated, those constraints may be imposed relative to one or more planned navigational actions for the host vehicle. For example, where analysis of a scene results in driving policy module 803 returning a desired navigational action, that desired navigational action may be tested against one or more implicated constraints. If the desired navigational action is determined to violate any aspect of the implicated constraints (e.g., if the desired navigational action would carry the host vehicle within a distance of 0.7 meters of pedestrian 1215 where a predefined hard constraint requires that the host vehicle remain at least 1.0 meters from pedestrian 1215), then at least one modification to the desired navigational action may be made based on the one or more predefined navigational constraints. Adjusting the desired navigational action in this way may provide an actual navigational action for the host vehicle in compliance with the constraints implicated by a particular scene detected in the environment of the host vehicle.” The system causes an actual navigational action for the host vehicle, including imposing a mandatory stop at a railroad crossing, which constitutes controlling the vehicle to autonomously take an action.).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the invention of Schofield with the teachings of SHALEV-SHWARTZ such that the vision system of Schofield is further configured to utilize modifying…a confidence level that indicates a likelihood and controlling a vehicle to autonomously take an action based on the modified confidence level, as taught by SHALEV-SHWARTZ (See paragraphs [0151], [0260], [0261].), with a reasonable expectation of success. The motivation for doing so would be accurately identifying location within the roadway, navigating alongside other vehicles, avoiding obstacles, observing traffic signals and signs, navigating intersections, and responding to different situations, as taught by SHALEV-SHWARTZ (See paragraph [0003].).
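The combination articulated for claim 1, pattern-based detection of the paired crossing lights (Schofield) plus frame-by-frame confidence accumulation driving a navigational action (SHALEV-SHWARTZ), can be sketched as hypothetical code. The update rule, increments, and stop threshold below are illustrative assumptions, not values taken from either reference.

```python
# Hypothetical sketch of the combined teaching: per-frame observations of a
# pair of crossing lights update a confidence that the assembly is active,
# and the vehicle acts once confidence is high enough. The increments and
# the 0.8 threshold are illustrative assumptions only.

def update_confidence(confidence, left_on, right_on):
    """Return an updated confidence in [0, 1] from one frame's light states."""
    alternating = left_on != right_on        # typical railroad flash pattern
    both_on = left_on and right_on           # atypical: simultaneous illumination
    if alternating:
        confidence = min(1.0, confidence + 0.15)
    elif both_on:
        confidence = max(0.0, confidence - 0.25)
    else:                                    # both lights dark this frame
        confidence = max(0.0, confidence - 0.05)
    return confidence

def choose_action(confidence, stop_threshold=0.8):
    """Map confidence that the assembly is active to a vehicle action."""
    return "stop" if confidence >= stop_threshold else "proceed"

# Example: frames alternating left/right, as an active assembly would flash.
conf = 0.0
for left, right in [(True, False), (False, True)] * 4:
    conf = update_confidence(conf, left, right)
print(choose_action(conf))  # prints "stop": 8 alternating frames saturate conf at 1.0
```

The key point mirrored from the references is that confidence is accumulated across frames rather than decided from a single image, and the control decision is gated on the accumulated value.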
Regarding Claim 2, Schofield and SHALEV-SHWARTZ teach The computing device of claim 1, as set forth in the obviousness rejection. Schofield teaches (See at least paragraph [0078], quoted above. The system determines whether a railroad crossing signal is activated by distinguishing the characteristic flashing light pattern of the signal, such that illumination patterns that do not match the typical flashing pattern indicate that the railroad crossing signal is not activated.). Schofield does not explicitly disclose, however, SHALEV-SHWARTZ, in the same field of endeavor, teaches wherein the modification includes decreasing the confidence level (See at least paragraph [0151], quoted above. The system modifies a confidence level associated with image-based measurements, wherein the confidence level reflects the reliability of accumulated measurements.).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the invention of Schofield with the teachings of SHALEV-SHWARTZ such that the vision system of Schofield is further configured to utilize modifying…a confidence level that indicates a likelihood, controlling a vehicle to autonomously take an action based on the modified confidence level, and wherein the modification includes decreasing the confidence level, as taught by SHALEV-SHWARTZ (See paragraphs [0151], [0260], [0261].), with a reasonable expectation of success. The motivation for doing so would be accurately identifying location within the roadway, navigating alongside other vehicles, avoiding obstacles, observing traffic signals and signs, navigating intersections, and responding to different situations, as taught by SHALEV-SHWARTZ (See paragraph [0003].).

With respect to claim 13, please see the rejection above with respect to claim 2, which is commensurate in scope to claim 13, with claim 2 being drawn to a computing device and claim 13 being drawn to a corresponding system.

Regarding Claim 3, Schofield and SHALEV-SHWARTZ teach The computing device of claim 1, as set forth in the obviousness rejection. Schofield teaches (See at least paragraph [0078], quoted above. The system determines whether a railroad crossing signal is activated by distinguishing the characteristic flashing light pattern of the signal, including illumination patterns consistent with the typical alternating flashing of a railroad crossing signal.).
Schofield does not explicitly disclose, however, SHALEV-SHWARTZ, in the same field of endeavor, teaches wherein the modification includes increasing the confidence level (See at least paragraph [0151], quoted above. The system modifies a confidence level associated with image-based measurements, wherein accumulated measurements are associated with an increasingly higher confidence level.).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the invention of Schofield with the teachings of SHALEV-SHWARTZ such that the vision system of Schofield is further configured to utilize modifying…a confidence level that indicates a likelihood, controlling a vehicle to autonomously take an action based on the modified confidence level, and wherein the modification includes increasing the confidence level, as taught by SHALEV-SHWARTZ (See paragraphs [0151], [0260], [0261].), with a reasonable expectation of success. The motivation for doing so would be accurately identifying location within the roadway, navigating alongside other vehicles, avoiding obstacles, observing traffic signals and signs, navigating intersections, and responding to different situations, as taught by SHALEV-SHWARTZ (See paragraph [0003].).

With respect to claim 14, please see the rejection above with respect to claim 3, which is commensurate in scope to claim 14, with claim 3 being drawn to a computing device and claim 14 being drawn to a corresponding system.

Regarding Claim 4, Schofield and SHALEV-SHWARTZ teach The computing device of claim 1, as set forth in the obviousness rejection. Schofield teaches (See at least paragraph [0078], quoted above. The system determines whether a railroad crossing signal is activated based on detection of flashing lights characteristic of a railroad crossing signal, such that the absence of illumination indicates the signal is not activated.); ii) illumination of one or both of the pair of lights has not been detected for a period of time inconsistent with a typical railroad light assembly; and iii) simultaneous illumination of both of the pair of lights of the railroad light assembly. Schofield does not explicitly disclose, however, SHALEV-SHWARTZ, in the same field of endeavor, teaches wherein the modification includes decreasing the confidence level (See at least paragraph [0151], quoted above. The system modifies a confidence level associated with image-based measurements, wherein the confidence level reflects the reliability of accumulated measurements.).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the invention of Schofield with the teachings of SHALEV-SHWARTZ such that the vision system of Schofield is further configured to utilize modifying…a confidence level that indicates a likelihood, controlling a vehicle to autonomously take an action based on the modified confidence level, and wherein the modification includes decreasing the confidence level, as taught by SHALEV-SHWARTZ (See paragraphs [0151], [0260], [0261].), with a reasonable expectation of success. The motivation for doing so would be accurately identifying location within the roadway, navigating alongside other vehicles, avoiding obstacles, observing traffic signals and signs, navigating intersections, and responding to different situations, as taught by SHALEV-SHWARTZ (See paragraph [0003].).

With respect to claim 15, please see the rejection above with respect to claim 4, which is commensurate in scope to claim 15, with claim 4 being drawn to a computing device and claim 15 being drawn to a corresponding system.

Regarding Claim 5, Schofield and SHALEV-SHWARTZ teach The computing device of claim 1, as set forth in the obviousness rejection.
Schofield teaches wherein the one or more processors are further configured to determine, based on one or more road features, whether the vehicle has reached a location that is a predefined threshold distance away from the railroad light assembly (See at least paragraph [0063], “Such traffic control signage, such as speed limit signs, exit signs, warning signs, stop signs, yield signs and/or the like, is typically regulated and various types of these signs must have certain specified, standard geometric shapes (such as a triangle for a yield sign, an octagon for a stop sign and the like), and must be at a particular height and at a particular location at or distance from the side of the road, and must have a specific type/color of lettering on a specific colored background (for example, a speed limit sign is typically a predefined shape, such as rectangular or circular, and has alphanumeric characters or letters and/or numbers that are a contrast color to a background color, such as black letters/numbers on a white background, while an exit sign typically has a different shape and/or contrast colors, such as white lettering on a green background). The imaging device is arranged at the vehicle, preferably in the interior cabin and viewing through the windshield (and thus protected from the outdoor elements, such as rain, snow, etc.), with a field of view that encompasses the expected locations of such signage along the side of roads and highways and the image processor may process the captured image to determine if the captured images encompass an object or sign that is at the expected location and that has the expected size, color and/or shape or the like. Therefore, the imaging processor 16 may readily determine what type of sign is detected by its geometric shape, size, color, text/characters and its location relative to the imaging device and the vehicle”, paragraph [0078], quoted above, and paragraph [0094], “The indicator thus alerts the other drivers or people in front of the subject vehicle that the vehicle is braking and, thus, may be highly useful at intersections with two, three or four way stops or the like. The indicator may be at or near or associated with an accessory module or windshield electronics module or console or interior rearview mirror assembly or the like of the vehicle and may be readily viewable and discernible by a person outside of and forwardly of the subject vehicle. The control may adjust or modulate the indicator to enhance the viewability or discernibility of the indicator, such as flashing or increasing the intensity of the indicator, such as in response to rapid or hard braking or the like of the subject vehicle or in response to a proximity or distance sensor detecting that the subject vehicle is within a threshold distance of another vehicle and/or is approaching the other vehicle at or above a threshold speed, such as described in U.S. Pat. Nos. 6,124,647; 6,291,906 and 6,411,204, which are hereby incorporated herein by reference.”).

With respect to claim 17, please see the rejection above with respect to claim 5, which is commensurate in scope to claim 17, with claim 5 being drawn to a computing device and claim 17 being drawn to a corresponding system.

Regarding Claim 6, Schofield and SHALEV-SHWARTZ teach The computing device of claim 5, as set forth in the obviousness rejection.
Schofield teaches (See at least paragraphs [0063], [0078], and [0094], quoted in the rejection of claim 5 above.).
Schofield does not explicitly disclose, however, SHALEV-SHWARTZ, in the same field of endeavor, teaches wherein the one or more processors are further configured to modify the confidence level (See at least paragraph [0151], quoted above.).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the invention of Schofield with the teachings of SHALEV-SHWARTZ such that the vision system of Schofield is further configured to utilize modifying…a confidence level that indicates a likelihood, controlling a vehicle to autonomously take an action based on the modified confidence level, and wherein the one or more processors are further configured to modify the confidence level, as taught by SHALEV-SHWARTZ (See paragraphs [0151], [0260], [0261].), with a reasonable expectation of success. The motivation for doing so would be accurately identifying location within the roadway, navigating alongside other vehicles, avoiding obstacles, observing traffic signals and signs, navigating intersections, and responding to different situations, as taught by SHALEV-SHWARTZ (See paragraph [0003].).
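The threshold-distance determination addressed in claims 5 and 6 (deciding whether the vehicle has reached a location a predefined threshold distance away from the railroad light assembly) can be illustrated with a minimal hypothetical sketch. The 30 m threshold and the planar-distance estimate are illustrative assumptions, not values from the references.

```python
# Hypothetical sketch of the claim 5/6 distance check: compare the vehicle's
# estimated distance to the railroad light assembly against a predefined
# threshold. The 30 m default and the 2-D distance model are assumptions.
import math

def reached_threshold(vehicle_xy, assembly_xy, threshold_m=30.0):
    """True once the vehicle is within threshold_m of the assembly."""
    dx = assembly_xy[0] - vehicle_xy[0]
    dy = assembly_xy[1] - vehicle_xy[1]
    return math.hypot(dx, dy) <= threshold_m

print(reached_threshold((0.0, 0.0), (25.0, 10.0)))  # ~26.9 m away -> True
print(reached_threshold((0.0, 0.0), (40.0, 20.0)))  # ~44.7 m away -> False
```

In practice the distance would come from the road-feature localization Schofield describes (sign position relative to the imaging device and the vehicle) rather than from known coordinates.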
With respect to claim 18, please see the rejection above with respect to claim 6, which is commensurate in scope to claim 18, with claim 6 being drawn to a computing device and claim 18 being drawn to a corresponding system.

Regarding Claim 7, Schofield and SHALEV-SHWARTZ teach The computing device of claim 1, as set forth in the obviousness rejection. Schofield teaches and control the vehicle as the vehicle approaches the railroad light assembly (See at least paragraph [0078], “Optionally, the imaging system may be operable to detect and identify or recognize other types of signs. For example, the imaging system may be operable to detect and recognize a railroad crossing sign and to further recognize that the railroad crossing sign is activated (such as by distinguishing the flashing lights characteristic of a railroad crossing signal) due to an approaching train. The imaging system could then warn the driver that the vehicle is approaching a dangerous condition. Additionally, the imaging system may be operable to detect other signals, such as a school bus stopping signal or a pedestrian road crossing signal or the like. Optionally, the imaging system may be operable to detect road repair or road construction zone signs and may recognize such signs to distinguish when the vehicle is entering a road construction zone. The imaging system may display the reduced speed for the construction zone and/or may provide an alert to the driver of the vehicle that the vehicle is entering a construction zone and that the vehicle speed should be reduced accordingly. The imaging system thus may not only assist the driver in avoiding a speeding ticket, but may provide enhanced safety for the construction workers at the construction zone.”).
Schofield does not explicitly disclose, however, SHALEV-SHWARTZ, in the same field of endeavor, teaches wherein the one or more processors are further configured to: update, based on the modified confidence level, a trajectory of the vehicle (See at least paragraphs [0151] and [0261], quoted in the rejection of claim 1 above.); based on the updated trajectory (See at least paragraph [0261], quoted above.).
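The constraint test that SHALEV-SHWARTZ describes in paragraph [0261] (testing a desired navigational action against predefined hard constraints and modifying it on violation) can be sketched as hypothetical code. Only the 0.7 m / 1.0 m figures come from the quoted example; the function itself is an assumption about how such a check might look.

```python
# Hypothetical sketch of the [0261] constraint test: a desired navigational
# action is checked against a predefined hard constraint and adjusted if it
# would violate it. Numbers mirror the quoted pedestrian-clearance example.

MIN_PEDESTRIAN_CLEARANCE_M = 1.0  # predefined hard constraint from the quote

def enforce_clearance(desired_clearance_m):
    """Return the actual clearance after imposing the hard constraint."""
    if desired_clearance_m < MIN_PEDESTRIAN_CLEARANCE_M:
        return MIN_PEDESTRIAN_CLEARANCE_M  # modify the action to comply
    return desired_clearance_m             # desired action already complies

print(enforce_clearance(0.7))  # violates the 1.0 m constraint -> 1.0
print(enforce_clearance(1.5))  # already compliant -> 1.5
```

A mandatory stop at a railroad crossing (paragraph [0260]) would be enforced the same way: as a hard constraint that overrides the desired action rather than a factor weighed against it.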
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the invention of Schofield with the teachings of SHALEV-SHWARTZ such that the vision system of Schofield is further configured to utilize modifying…a confidence level that indicates a likelihood, controlling a vehicle to autonomously take an action based on the modified confidence level, and updating, based on the modified confidence level, a trajectory of the vehicle, based on the updated trajectory, as taught by SHALEV-SHWARTZ (See paragraph [0151], [0260], [0261].), with a reasonable expectation of success. The motivation for doing so would be accurately identifying location within the roadway, navigating alongside other vehicles, avoiding obstacles, observing traffic signals and signs, navigating intersections, and responding to different situations, as taught by SHALEV-SHWARTZ (See paragraph [0003].). With respect to claim 19, please see the rejection above with respect to claim 7, which is commensurate in scope to claim 19, with claim 7 being drawn to a computing device and claim 19 being drawn to a corresponding system. Regarding Claim 11, Schofield and SHALEV-SHWARTZ teach The computing device of claim 1, as set forth in the obviousness rejection. Schofield teaches wherein the one or more processors are further configured to control the vehicle to autonomously take a first action (See at least paragraph [0078], “Optionally, the imaging system may be operable to detect and identify or recognize other types of signs. For example, the imaging system may be operable to detect and recognize a railroad crossing sign and to further recognize that the railroad crossing sign is activated (such as by distinguishing the flashing lights characteristic of a railroad crossing signal) due to an approaching train. The imaging system could then warn the driver that the vehicle is approaching a dangerous condition. 
Additionally, the imaging system may be operable to detect other signals, such as a school bus stopping signal or a pedestrian road crossing signal or the like. Optionally, the imaging system may be operable to detect road repair or road construction zone signs and may recognize such signs to distinguish when the vehicle is entering a road construction zone. The imaging system may display the reduced speed for the construction zone and/or may provide an alert to the driver of the vehicle that the vehicle is entering a construction zone and that the vehicle speed should be reduced accordingly. The imaging system thus may not only assist the driver in avoiding a speeding ticket, but may provide enhanced safety for the construction workers at the construction zone.”); control the vehicle to autonomously take a second action, different from the first action (See at least paragraph [0078], as quoted above.), and control the vehicle to autonomously take a third action, different from the first action and the second action (See at least paragraph [0078], as quoted above, and paragraph [0079], “Optionally, the imaging system of the present invention may be associated with or cooperatively operable with an adaptive cruise control 28 (FIG. 2) of the vehicle, such that the cruise control speed setting may be adjusted in response to the imaging system. 
For example, an adaptive speed control system may reduce the set speed of the vehicle in response to the imaging system (or other forward facing vision system) detecting a curve in the road ahead of the vehicle (such as by detecting and recognizing a warning sign at or before such a curve). The vehicle speed may be reduced to an appropriate speed for traveling around the curve without the driver having to manually deactivate the cruise control. For example, the vehicle speed may be reduced to the amount of the reduced or safe limit shown on the warning sign or the like. The adaptive speed control may then resume the initial speed setting after the vehicle is through the turn or curve and is again traveling along a generally straight section of road.”). Schofield does not explicitly disclose, however, SHALEV-SHWARTZ, in the same field of endeavor, teaches when the modified confidence level satisfies a first threshold condition (See at least paragraph [0151], “At step 556, processing unit 110 may perform multi-frame analysis by, for example, tracking the detected segments across consecutive image frames and accumulating frame-by-frame data associated with detected segments. As processing unit 110 performs multi-frame analysis, the set of measurements constructed at step 554 may become more reliable and associated with an increasingly higher confidence level. Thus, by performing steps 550-556, processing unit 110 may identify road marks appearing within the set of captured images and derive lane geometry information. Based on the identification and the derived information, processing unit 110 may cause one or more navigational responses in vehicle 200, as described in connection with FIG. 
5A, above” and paragraph [0261], “Where analysis of a scene in an environment of the host vehicle indicates that one or more predefined navigational constraints may be implicated, those constraints may be imposed relative to one or more planned navigational actions for the host vehicle. For example, where analysis of a scene results in driving policy module 803 returning a desired navigational action, that desired navigational action may be tested against one or more implicated constraints. If the desired navigational action is determined to violate any aspect of the implicated constraints (e.g., if the desired navigational action would carry the host vehicle within a distance of 0.7 meters of pedestrian 1215 where a predefined hard constraint requires that the host vehicle remain at least 1.0 meters from pedestrian 1215), then at least one modification to the desired navigational action may be made based on the one or more predefined navigational constraints. Adjusting the desired navigational action in this way may provide an actual navigational action for the host vehicle in compliance with the constraints implicated by a particular scene detected in the environment of the host vehicle.”), when the modified confidence level does not satisfy the first threshold condition (See at least paragraph [0151] and paragraph [0261], as quoted above.), when the modified confidence level is within a predefined value of a predetermined confidence threshold value based on the first threshold condition, wherein the predetermined confidence threshold value is a value of the modified confidence level (See at least paragraph [0151] and paragraph [0261], as quoted above, and paragraph [0260], “Other hard constraints may also be employed. For example, a maximum deceleration rate of the host vehicle may be employed in at least some cases. Such a maximum deceleration rate may be determined based on a detected distance to a target vehicle following the host vehicle (e.g., using images collected from a rearward facing camera). The hard constraints may include a mandatory stop at a sensed crosswalk or a railroad crossing or other applicable constraints”). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the invention of Schofield with the teachings of SHALEV-SHWARTZ such that the vision system of Schofield is further configured to utilize modifying…a confidence level that indicates a likelihood and controlling a vehicle to autonomously take an action based on the modified confidence level, when the modified confidence level satisfies a first threshold condition, when the modified confidence level does not satisfy the first threshold condition, and when the modified confidence level is within a predefined value of a predetermined confidence threshold value based on the first threshold condition, wherein the predetermined confidence threshold value is a value of the modified confidence level, as taught by SHALEV-SHWARTZ (See paragraph [0151], [0260], [0261].), with a reasonable expectation of success. The motivation for doing so would be accurately identifying location within the roadway, navigating alongside other vehicles, avoiding obstacles, observing traffic signals and signs, navigating intersections, and responding to different situations, as taught by SHALEV-SHWARTZ (See paragraph [0003].). With respect to claim 16, please see the rejection above with respect to claim 11, which is commensurate in scope to claim 16, with claim 11 being drawn to a computing device and claim 16 being drawn to a corresponding system. 
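The three-branch control recited in claim 11 — distinct actions when the modified confidence level satisfies the first threshold condition, fails it, or falls within a predefined value of the threshold — can be sketched as follows. The threshold value, band width, and action names are invented for illustration; neither reference supplies concrete values.

```python
# Hypothetical sketch of claim 11's three-branch logic; the threshold
# value, band width, and action names are illustrative only.

CONF_THRESHOLD = 0.9  # predetermined confidence threshold value
BAND = 0.05           # "within a predefined value" of the threshold

def select_action(modified_confidence: float) -> str:
    # Third action: confidence sits near the threshold (ambiguous case).
    if abs(modified_confidence - CONF_THRESHOLD) <= BAND:
        return "slow_and_reassess"
    # First action: the first threshold condition is satisfied.
    if modified_confidence >= CONF_THRESHOLD:
        return "stop_before_crossing"
    # Second action: the first threshold condition is not satisfied.
    return "proceed_through_crossing"
```

The near-threshold branch is tested before the other two so that the third action takes priority in the ambiguous band around the threshold.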
Regarding Claim 12, Schofield teaches A system comprising: one or more sensors configured to capture images of a pair of lights of a railroad light assembly (See at least paragraph [0078], “For example, the imaging system may be operable to detect and recognize a railroad crossing sign and to further recognize that the railroad crossing sign is activated (such as by distinguishing the flashing lights characteristic of a railroad crossing signal) due to an approaching train.”); and a computing device including one or more processors, wherein the one or more processors are configured to: receive the images of the pair of lights of the railroad light assembly from the one or more sensors (See at least paragraph [0078], as quoted above.); based on one or more illumination patterns of the pair of lights of the railroad light assembly in the received images (See at least paragraph [0078], as quoted above.); indicates…that the railroad light assembly is active (See at least paragraph [0078], as quoted above. The system determines whether a railroad crossing sign is activated by distinguishing flashing light characteristics of the railroad crossing signal.). 
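The "flashing lights characteristic of a railroad crossing signal" that Schofield's imaging system distinguishes can be illustrated with a toy classifier over per-frame on/off observations of the pair of lights. The alternating-pair rule and all names here are hypothetical, not Schofield's implementation.

```python
# Illustrative sketch (hypothetical, not from Schofield): classify a
# railroad light assembly as active when per-frame observations of the
# pair of lights show the alternating flash pattern characteristic of a
# railroad crossing signal.

def assembly_active(left_states, right_states) -> bool:
    """left_states/right_states: per-frame booleans (True = lit)."""
    if len(left_states) != len(right_states) or not left_states:
        return False
    for i, (left, right) in enumerate(zip(left_states, right_states)):
        # Alternating pattern: exactly one of the pair lit per frame...
        if left == right:
            return False
        # ...and each light toggles between consecutive frames.
        if i and left_states[i] == left_states[i - 1]:
            return False
    return True
```

For example, `assembly_active([True, False, True], [False, True, False])` matches the alternating pattern, while two lights lit (or dark) together does not.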
Schofield does not explicitly disclose, however, SHALEV-SHWARTZ, in the same field of endeavor, teaches modify, (See at least paragraph [0151], as quoted above. The system modifies a confidence level associated with image-based measurements, wherein accumulated measurements are associated with an increasingly higher confidence level.); and control a vehicle to autonomously take an action based on the modified confidence level (See at least paragraph [0260] and paragraph [0261], as quoted above. The system causes an actual navigational action for the host vehicle, including imposing a mandatory stop at a railroad crossing, which constitutes controlling the vehicle to autonomously take an action.). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the invention of Schofield with the teachings of SHALEV-SHWARTZ such that the vision system of Schofield is further configured to utilize modifying…a confidence level that indicates a likelihood and controlling a vehicle to autonomously take an action based on the modified confidence level, as taught by SHALEV-SHWARTZ (See paragraph [0151], [0260], [0261].), with a reasonable expectation of success. The motivation for doing so would be accurately identifying location within the roadway, navigating alongside other vehicles, avoiding obstacles, observing traffic signals and signs, navigating intersections, and responding to different situations, as taught by SHALEV-SHWARTZ (See paragraph [0003].). Claim(s) 8, 20 is/are rejected under 35 U.S.C. 
103 as being unpatentable over Schofield (US 20140362221 A1) in view of SHALEV-SHWARTZ (US 20230347877 A1) and Hilleary (US 20210139061 A1). Regarding Claim 8, Schofield and SHALEV-SHWARTZ teach The computing device of claim 1, as set forth in the obviousness rejection. Schofield and SHALEV-SHWARTZ do not explicitly disclose, however, Hilleary, in the same field of endeavor, teaches wherein the one or more processors are further configured to: categorize image data of the images of the pair of lights of the railroad light assembly by color and shape (See at least paragraph [0035], “The warning lights 58b, 58c and 58d provided on the movable barrier 84, and also the lights 58a mounted to the stationary support 82 may be the same or different color in various embodiments, and in contemplated embodiments are red incandescent or LED lights as customarily utilized in railroad crossing gates”, paragraph [0036], “Because of the different positions of the warning lights 58a on the stationary support 82a and the set of lights 58b, 58c and 58d provided on the movable barrier 82, unique image signatures can be detected by the video analytic system 70 in various different embodiments as the crossing gate 80 is operated by the crossing warning system 52 (FIG. 1). For example, the on/off condition of the warning lights (or intensity and wavelength of the lights) can be detected by the camera 72 and the image processor device 74 (FIG. 1) of the video analytic system 70 to sense and indicate illumination of each light 58a, 58b, 58c, 58d and hence an activation of the crossing warning system 52”, and paragraph [0042], “As one illustrative example, in a contemplated embodiment the flashing warning lights 58a on the stationary arm 82 begin to flash alternately at a rate of 35-65 flashes per minute. 
The color of the flashing warning lights 58a may also be specified as a fixed red wavelength of 650 nm or 6500Å, and the color of the warning lights 58c, 58d on the movable barrier 84 may be specified to match the flashing warning lights 58a and also may flash alternately on the barrier 84.”); determine a brightness level of portions of the image data categorized as red and circular (See at least paragraph [0036], as quoted above, and paragraph [0042], as quoted above and further stating, “In contrast, the end tip light 58b on the barrier 84 is constantly on (i.e., does not flash). The image processing engine 76 in the video analytic system 70 can be configured to look for any or all of the flashing warning lights 58a, 58c, 58d that meet the specified color and frequency of flashing as applicable, and in the case of the warning lights 58c and 58d the movement of the flashing warning lights in the specified frequency along the paths A2 or A3.”); and determine whether the brightness level satisfies or fails to satisfy a second threshold condition indicating that at least one light of the pair of lights is illuminated or not illuminated, respectively (See at least paragraph [0036], as quoted above, paragraph [0043], “Regardless of the specific detection features that may be utilized (e.g., mere illumination, color, flashing frequency of a warning light or lights operating as predicted when the crossing warning system has been activated, and/or movement of a warning light or lights operating along an expected or predicted path of motion such as A1, A2 and A3 when the crossing warning system has been activated) to sense a state of one or more of the lights 58a, 58b, 58c and 58d a signal can be provided to the traffic intersection controller 68 as an indication that the crossing warning system 52 has been activated. That is, when a predictable ore predetermined illumination, position and/or movement of one or more of the lights 58b, 58c, 58d has been detected by the video analytic system 70 a signal can be provided to the traffic intersection controller 68 as an indication that the movable crossing gate barrier 84 has been raised or lowered”, and paragraph [0045], “As another example, if the image processing engine 76 in the video analytic system 70 fails to detect any of the warning lights 58b, 58c and 58d in the image acquired, this can be an indication that the barrier 84 has been broken off and an error signal can be provided to the railroad and the traffic intersection controller 68 for appropriate response.”). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the invention of Schofield with the teachings of SHALEV-SHWARTZ and Hilleary such that the vision system of Schofield is further configured to utilize modifying…a confidence level that indicates a likelihood and controlling a vehicle to autonomously take an action based on the modified confidence level, as taught by SHALEV-SHWARTZ (See paragraph [0151], [0260], [0261].), and to categorize image data of the images of the pair of lights of the railroad light assembly by color and shape, determine a brightness level of portions of the image data categorized as red and circular, and determine whether the brightness level satisfies or fails to satisfy a second threshold condition indicating that at least one light of the pair of lights is illuminated or not illuminated, respectively, as taught by Hilleary (See paragraph [0035], [0036], [0042], [0043], [0045].), with a reasonable expectation of success. 
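The claim 8 pipeline mapped above (categorize image data by color and shape, then test the brightness of the red, circular portions against a second threshold condition) can be sketched as follows. The red classifier and both threshold values are invented for illustration; neither Schofield nor Hilleary gives concrete numbers.

```python
# Hypothetical sketch of the claim 8 pipeline; the red-dominance rule
# and both thresholds are illustrative values only.

RED_DOMINANCE = 1.5         # red channel must dominate green and blue
BRIGHTNESS_THRESHOLD = 128  # second threshold condition (0-255 scale)

def is_red(pixel) -> bool:
    """Categorize a pixel by color: (r, g, b) tuple -> red or not."""
    r, g, b = pixel
    return r > RED_DOMINANCE * max(g, b, 1)

def light_illuminated(pixels) -> bool:
    """pixels: (r, g, b) tuples from a region already categorized as
    circular; True when the red portion's brightness (taken here as
    red-channel intensity) satisfies the second threshold condition."""
    red = [p for p in pixels if is_red(p)]
    if not red:
        return False
    brightness = sum(p[0] for p in red) / len(red)
    return brightness >= BRIGHTNESS_THRESHOLD
```

A bright red region such as `[(250, 40, 40)] * 10` satisfies the condition (light illuminated); a dim red or a gray region does not.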
The motivation for doing so would be accurately identifying location within the roadway, navigating alongside other vehicles, avoiding obstacles, observing traffic signals and signs, navigating intersections, and responding to different situations, as taught by SHALEV-SHWARTZ (See paragraph [0003].). A further motivation would be improving vehicle traffic flow at railroad crossings, as taught by Hilleary (See paragraph [0004].). With respect to claim 20, please see the rejection above with respect to claim 8, which is commensurate in scope to claim 20, with claim 8 being drawn to a computing device and claim 20 being drawn to a corresponding system. Claim(s) 9, 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Schofield (US 20140362221 A1) in view of SHALEV-SHWARTZ (US 20230347877 A1), Hilleary (US 20210139061 A1), and LI (CN 105741316 A). Regarding Claim 9, Schofield, SHALEV-SHWARTZ, and Hilleary teach The computing device of claim 8, as set forth in the obviousness rejection. Schofield and SHALEV-SHWARTZ do not explicitly disclose, however, Hilleary, in the same field of endeavor, teaches to determine the brightness level (See at least paragraph [0036] and paragraph [0042], as quoted above.). 
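Hilleary's 35-65 flashes-per-minute specification from paragraph [0042] lends itself to a simple rate test. The helper below is a hypothetical sketch, not Hilleary's image processing engine; only the 35-65 flashes/min band comes from the quoted passage.

```python
# Hypothetical sketch: test whether an observed flash count over a time
# window falls in the 35-65 flashes-per-minute range that paragraph
# [0042] specifies for the stationary warning lights.

def flash_rate_matches(off_to_on_transitions: int, window_s: float,
                       lo_fpm: float = 35.0, hi_fpm: float = 65.0) -> bool:
    """off_to_on_transitions: flashes counted over window_s seconds."""
    if window_s <= 0:
        return False
    flashes_per_minute = off_to_on_transitions * 60.0 / window_s
    return lo_fpm <= flashes_per_minute <= hi_fpm
```

For example, 10 flashes observed over 12 seconds is 50 flashes/min, inside the specified band; 10 flashes over 5 seconds (120 flashes/min) is not.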
Schofield, SHALEV-SHWARTZ, and Hilleary do not explicitly disclose, however, Li, in the same field of endeavor, teaches wherein the one or more processors use a sliding window based correlation filter (See at least paragraph [0010], “To address the shortcomings of current deep learning-based tracking methods, this invention employs the following solutions for target localization during target tracking: 1) Using the output results of multiple layers in a CNN, rather than just the last layer, to construct a representation model of the target, thereby preserving the spatial structure information of the target; 2) Learning adaptive correlation filtering on the results of each layer, thereby avoiding the process of extracting a large number of samples”, paragraph [0011], “This invention divides the tracking process into two parts: target localization and scale selection. The first part, target localization, uses a convolutional neural network and correlation filtering to locate the target”, paragraph [0014], “Step 2: Extract the search region R centered at (x,y), and use a convolutional neural network (CNN) to extract the convolutional feature map of the search region R”, paragraph [0021] and [0023], “Step 5: Read the next frame image. Using the target position of the previous frame as the center, extract a scaled search region of size R*scale, where R is the M×N region mentioned above, and scale is the scale factor. Use CNN to extract the convolutional feature map of the scaled search region, and use bilateral interpolation to upsample the convolutional feature map to the size of the scaled search region R to obtain the convolutional feature map [formula image omitted]. Calculate the target confidence map using the target model [formula image omitted]. For each layer l, the target confidence map is calculated as follows: [formula image omitted], where F^-1 is the inverse Fourier transform”, and paragraph [0056], “Step Six: Using the confidence map set [formula image omitted] obtained in Step Five, locate the target position (x, y) layer by layer.”). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the invention of Schofield with the teachings of SHALEV-SHWARTZ, Hilleary, and Li such that the vision system of Schofield is further configured to utilize modifying…a confidence level that indicates a likelihood and controlling a vehicle to autonomously take an action based on the modified confidence level, as taught by SHALEV-SHWARTZ (See paragraph [0151], [0260], [0261].), to categorize image data of the images of the pair of lights of the railroad light assembly by color and shape, determine a brightness level of portions of the image data categorized as red and circular, determine whether the brightness level satisfies or fails to satisfy a second threshold condition indicating that at least one light of the pair of lights is illuminated or not illuminated, respectively, and to determine the brightness level, as taught by Hilleary (See paragraph [0035], [0036], [0042], [0043], [0045].), and to use a sliding window based correlation filter, as taught by Li (See paragraph [0010], [0011], [0014], [0021], [0023], [0056].), with a reasonable expectation of success. 
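The correlation-filter confidence map that Li computes via an inverse Fourier transform (paragraph [0023]) can be sketched generically as frequency-domain cross-correlation of a template with a search-region feature map. This single-channel NumPy illustration omits Li's multi-layer CNN features and adaptive filter learning, and all names are hypothetical.

```python
# Generic single-channel sketch of a correlation-filter confidence map:
# correlate a learned template with a search-region feature map in the
# frequency domain, then recover the spatial map with an inverse FFT.
import numpy as np

def confidence_map(search_features: np.ndarray,
                   template: np.ndarray) -> np.ndarray:
    """Circular cross-correlation via FFT; the argmax of the returned
    map is the estimated target position in the search region."""
    F = np.fft.fft2(search_features)
    # Zero-pad the template to the search-region size before the FFT.
    H = np.fft.fft2(template, s=search_features.shape)
    # Correlation in space = F * conj(H) in frequency; F^-1 recovers
    # the confidence map.
    return np.real(np.fft.ifft2(F * np.conj(H)))

# The response peaks where the template best matches the search region.
rng = np.random.default_rng(0)
template = rng.random((8, 8))
search = np.zeros((32, 32))
search[10:18, 5:13] = template  # plant the target at row 10, col 5
peak = np.unravel_index(np.argmax(confidence_map(search, template)),
                        search.shape)
```

Planting the template at offset (10, 5) makes the confidence map's maximum land at that offset, which is how such a filter localizes the target from frame to frame.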
The motivation for doing so would be accurately identifying location within the roadway, navigating alongside other vehicles, avoiding obstacles, observing traffic signals and signs, navigating intersections, and responding to different situations, as taught by SHALEV-SHWARTZ (See paragraph [0003].). The motivation for doing so would be improving vehicle traffic flow at railroad crossings, as taught by Hilleary (See paragraph [0004].). The motivation for doing so would be reducing the complexity and time requirements of target tracking, as taught by Li (See paragraph [0008].).

Regarding Claim 10, Schofield, SHALEV-SHWARTZ, and Hilleary teach The computing device of claim 8, as set forth in the obviousness rejection above. Schofield and SHALEV-SHWARTZ do not explicitly disclose this limitation; however, Hilleary, in the same field of endeavor, teaches to determine whether the brightness level satisfies or fails to satisfy the second threshold condition (See at least paragraph [0036], “Because of the different positions of the warning lights 58a on the stationary support 82a and the set of lights 58b, 58c and 58d provided on the movable barrier 82, unique image signatures can be detected by the video analytic system 70 in various different embodiments as the crossing gate 80 is operated by the crossing warning system 52 (FIG. 1). For example, the on/off condition of the warning lights (or intensity and wavelength of the lights) can be detected by the camera 72 and the image processor device 74 (FIG.
1) of the video analytic system 70 to sense and indicate illumination of each light 58a, 58b, 58c, 58d and hence an activation of the crossing warning system 52”, paragraph [0043], “Regardless of the specific detection features that may be utilized (e.g., mere illumination, color, flashing frequency of a warning light or lights operating as predicted when the crossing warning system has been activated, and/or movement of a warning light or lights operating along an expected or predicted path of motion such as A1, A2 and A3 when the crossing warning system has been activated) to sense a state of one or more of the lights 58a, 58b, 58c and 58d a signal can be provided to the traffic intersection controller 68 as an indication that the crossing warning system 52 has been activated. That is, when a predictable or predetermined illumination, position and/or movement of one or more of the lights 58b, 58c, 58d has been detected by the video analytic system 70 a signal can be provided to the traffic intersection controller 68 as an indication that the movable crossing gate barrier 84 has been raised or lowered”, and paragraph [0045], “As another example, if the image processing engine 76 in the video analytic system 70 fails to detect any of the warning lights 58b, 58c and 58d in the image acquired, this can be an indication that the barrier 84 has been broken off and an error signal can be provided to the railroad and the traffic intersection controller 68 for appropriate response.”).
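Hilleary's detection logic, as quoted, keys on the illumination intensity and color of the warning lights. A toy version of the claimed second-threshold brightness test on pixels already categorized as red (the RGB bounds and threshold value are hypothetical placeholders chosen for illustration, not values from Hilleary or the application):

```python
import numpy as np

# Hypothetical RGB bounds for classifying a pixel as "red"; a real system
# would tune these (or work in HSV) per camera and lighting conditions.
RED_LOWER = np.array([150, 0, 0])
RED_UPPER = np.array([255, 80, 80])

def red_region_brightness(image):
    """Mean red-channel intensity over pixels falling in the red color range."""
    mask = np.all((image >= RED_LOWER) & (image <= RED_UPPER), axis=-1)
    if not mask.any():
        return 0.0
    return float(image[mask][:, 0].mean())

def light_illuminated(image, threshold=120.0):
    """Second threshold condition: brightness at or above the threshold is
    treated as at least one light of the pair being illuminated."""
    return red_region_brightness(image) >= threshold
```

An image with even one bright red pixel passes the test, while a uniformly dark frame fails it, mirroring the illuminated / not-illuminated distinction in the claim.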
Schofield, SHALEV-SHWARTZ, and Hilleary do not explicitly disclose this limitation; however, Li, in the same field of endeavor, teaches wherein the one or more processors use a convolutional neural network (See at least paragraph [0014], “Step 2: Extract the search region R centered at (x,y), and use a convolutional neural network (CNN) to extract the convolutional feature map of the search region R”, paragraph [0046], “Step 2: Based on the target determination in the current frame image, extract the search region R centered at (x,y), extract the convolutional feature map using CNN, and upsample the feature map to the size of the search region R using bilinear interpolation to obtain the convolutional feature map [formula image omitted]. Here, R is M×N, where M and N are the width and height, respectively, M = 2w, N = 2h, and [formula image omitted] is M×N×D, where D is the number of channels, and l is the number of layers in the CNN, with values {37, 28, 19}. Specifically, this invention uses VGGNet-19 as the CNN model”, and paragraphs [0053] and [0055], “Step 5: Read the next frame image. Using the target position from the previous frame as the center, extract a scaled search region of size R*scale, where R is the M×N region mentioned above, and scale is the scale factor, with an initial value of 1. After obtaining the scaled search region, use CNN to extract convolutional feature maps, and upsample the feature maps to the size of the search region R using bilinear interpolation to obtain the convolutional feature map [formula image omitted]. Calculate the confidence map using the target model [formula image omitted]. For each layer l, the confidence map is calculated as follows: [formula image omitted], where F⁻¹ is the inverse Fourier transform, and the other variables are the same as described above.”). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the invention of Schofield with the teachings of SHALEV-SHWARTZ, Hilleary, and Li such that the vision system of Schofield is further configured to utilize modifying…a confidence level that indicates a likelihood and controlling a vehicle to autonomously take an action based on the modified confidence level, as taught by SHALEV-SHWARTZ (See paragraphs [0151], [0260], and [0261].); to categorize image data of the images of the pair of lights of the railroad light assembly by color and shape, determine a brightness level of portions of the image data categorized as red and circular, determine whether the brightness level satisfies or fails to satisfy a second threshold condition indicating that at least one light of the pair of lights is illuminated or not illuminated, respectively, use a sliding window based correlation filter to determine the brightness level, and to determine whether the brightness level satisfies or fails to satisfy the second threshold condition, as taught by Hilleary (See paragraphs [0035], [0036], [0042], [0043], and [0045].); and to use a convolutional neural network, as taught by Li (See paragraphs [0014], [0046], [0053], and [0055].), with a reasonable expectation of
success. The motivation for doing so would be accurately identifying location within the roadway, navigating alongside other vehicles, avoiding obstacles, observing traffic signals and signs, navigating intersections, and responding to different situations, as taught by SHALEV-SHWARTZ (See paragraph [0003].). The motivation for doing so would be improving vehicle traffic flow at railroad crossings, as taught by Hilleary (See paragraph [0004].). The motivation for doing so would be reducing the complexity and time requirements of target tracking, as taught by Li (See paragraph [0008].).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEWEL ASHLEY KUNTZ whose telephone number is (571) 270-5542. The examiner can normally be reached M-F 8:30am-5:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne Antonucci, can be reached at (313) 446-6519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JEWEL A KUNTZ/
Examiner, Art Unit 3666

/ANNE MARIE ANTONUCCI/
Supervisory Patent Examiner, Art Unit 3666

Prosecution Timeline

Nov 04, 2024
Application Filed
Feb 18, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12578195
INFORMATION PROCESSING SYSTEM AND INFORMATION PROCESSING METHOD
2y 5m to grant · Granted Mar 17, 2026
Patent 12565204
VEHICLE CONTROL DEVICE, VEHICLE CONTROL METHOD, AND STORAGE MEDIUM
2y 5m to grant · Granted Mar 03, 2026
Patent 12542012
TEST SYSTEM, CONTROL DEVICE, TEST METHOD, AND TEST SYSTEM PROGRAM
2y 5m to grant · Granted Feb 03, 2026
Patent 12523490
Systems and Methods for Vehicle Navigation
2y 5m to grant · Granted Jan 13, 2026
Patent 12518631
Vehicle Scheduling Method, Electronic Equipment and Storage Medium
2y 5m to grant · Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
72%
Grant Probability
80%
With Interview (+7.9%)
2y 12m
Median Time to Grant
Low
PTA Risk
Based on 68 resolved cases by this examiner. Grant probability derived from career allow rate.
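The interview-adjusted probability shown above appears to be the career allow rate plus the examiner's interview lift, rounded; a one-line check of that reading of the displayed figures (an inference, not a documented formula):

```python
career_allow_rate = 72.0  # % (49 granted / 68 resolved ≈ 72.1%)
interview_lift = 7.9      # percentage-point lift among interviewed cases

# 72.0 + 7.9 = 79.9, which rounds to the displayed 80%
print(round(career_allow_rate + interview_lift))
```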
