Prosecution Insights
Last updated: April 19, 2026
Application No. 18/156,862

METHOD AND DEVICE FOR DETERMINING A SIGNAL STATE OF A LIGHT SIGNAL SYSTEM

Non-Final OA §103
Filed: Jan 19, 2023
Examiner: ZAK, JACQUELINE ROSE
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Robert Bosch GmbH
OA Round: 3 (Non-Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
Grant Probability with Interview: 55%

Examiner Intelligence

Career Allow Rate: 67% (8 granted / 12 resolved); above average, +4.7% vs TC avg
Interview Lift: -11.4% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 10m typical timeline; 46 applications currently pending
Total Applications: 58 across all art units (career history)

Statute-Specific Performance

§101: 5.7% (-34.3% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 21.1% (-18.9% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 12 resolved cases.
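
The per-statute deltas above are all consistent with a single Tech Center baseline of roughly 40% (for example, 56.3% - 16.3% = 40.0% and 5.7% + 34.3% = 40.0%). Below is a minimal sketch of that arithmetic, assuming the tool subtracts one shared baseline estimate from each statute rate; the 40% figure is inferred from the displayed numbers, not reported by the tool.

```python
# Recomputing the "vs TC avg" deltas shown above.
# Assumption: a single Tech Center baseline estimate (~40%) inferred from the
# displayed figures, e.g. 56.3% - 16.3% = 40.0%; not an official USPTO number.
statute_rates = {"§101": 5.7, "§103": 56.3, "§102": 21.1, "§112": 13.8}
tc_average_estimate = 40.0

for statute, rate in statute_rates.items():
    delta = rate - tc_average_estimate
    print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```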

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/17/2025 has been entered.

Claim Status

Claims 1-12 are pending for examination in the application filed 12/02/2025. Claims 1, 11, and 12 are currently amended.

Priority

Acknowledgement is made of Applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent application DE102022200746.8 filed on 01/24/2022.

Response to Arguments and Amendments

Applicant’s arguments filed 12/02/2025 have been fully considered but they are not persuasive. Applicant argues on page 6 of the Remarks filed 12/02/2025 that Chen does not teach the amended limitation “wherein the second identifier defines that the first object is only estimated and not currently detected”. Examiner respectfully disagrees. Chen teaches changing an identifier of the saved first object from the first identifier to a second identifier when the first object is no longer detected in a subsequent analysis cycle, wherein the second identifier defines that the first object is only estimated and not currently detected ([0064] That is, camera 316 may be operating at some given frame rate, e.g., 20 fps, and thus, captures multiple images/frames of a traffic signal of interest. For example, consider a scenario where camera 316 captures, e.g., a sequence of images/frames reflecting that a particular bulb of a traffic signal (in the United States (based on location of vehicle 10)) is as follows: red-red-red-yellow-green-green. [0062] Upon processing image thumbnail 340 by bulb detector 320, characteristics of each of the bulbs of the traffic signal may be output including, for example: bounding box locations…bulb category; and arrow direction. [0063] In terms of bulb category, characteristics such as color (e.g., red, green, yellow, or unknown) may be output. Other bulb categories includes type (round, arrow, unknown), as well as state (on, off, blinking, unknown). In terms of arrow (or angle) direction, bulb detector 320 may detect and output the direction of a detected arrow-type bulb, e.g.). [0070] Bulbwise detection may be performed to derive/generate certain bulbwise detections. In this example, those bulb detections 406 may result in determinations that traffic light 500 currently indicates/shows an active red bulb and an active left (green) arrow bulb. The bulb detections 406 may include other determinations, e.g., that some other bulbs are “off,” and/or unknown “?” bulb characteristics or states…In other words, the detected characteristics of state of a bulb can be determined, and compared to the bulb group configuration to arrive at a controlling bulb state for each relevant bulb group). For further clarification, in Chen the object is the active bulb light, and when the active light is off and/or unknown, then the first object is not currently detected. 
The first object is however estimated because bounding box locations are still generated for the various bulb locations, even when the object is not detected (off), as shown in Fig. 3C. Please see below for the updated 35 U.S.C. 103 rejections. Claim Rejections – 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 1-4 and 8-12 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US20220036732A1) in view of Li (US20230030172A1). Regarding claim 1, Chen teaches a method for determining a signal state of a light signal system having several light signal emitters, wherein the light signal system has several signal states and a signal state is formed by one or more activated light signal emitters ([0003] a method comprises generating images of a traffic light, and detecting current characteristics of each bulb of a plurality of bulbs comprising the traffic light based on the images. The method further comprises temporally filtering the current characteristics of each bulb in one or more bulb groups to refine the detected current characteristics, and outputting one or more bulb-specific states regarding the one or more bulb groups), the method comprising the following steps: determining an object representing the signal state of the light signal system ([0070] Bulbwise detection may be performed to derive/generate certain bulbwise detections. 
In this example, those bulb detections 406 may result in determinations that traffic light 500 currently indicates/shows an active red bulb and an active left (green) arrow bulb. The bulb detections 406 may include other determinations, e.g., that some other bulbs are “off,” and/or unknown “?” bulb characteristics or states); saving a determined first object having a first identifier when, in a first analysis cycle, it is detected that the first object represents the signal state of the light signal system ([0062] Upon processing image thumbnail 340 by bulb detector 320, characteristics of each of the bulbs of the traffic signal may be output including, for example: bounding box locations…bulb category; and arrow direction. [0063] In terms of bulb category, characteristics such as color (e.g., red, green, yellow, or unknown) may be output. Other bulb categories includes type (round, arrow, unknown), as well as state (on, off, blinking, unknown). In terms of arrow (or angle) direction, bulb detector 320 may detect and output the direction of a detected arrow-type bulb, e.g.); changing an identifier of the saved first object from the first identifier to a second identifier when the first object is no longer detected in a subsequent analysis cycle wherein the second identifier defines that the first object is only estimated and not currently detected ([0064] That is, camera 316 may be operating at some given frame rate, e.g., 20 fps, and thus, captures multiple images/frames of a traffic signal of interest. For example, consider a scenario where camera 316 captures, e.g., a sequence of images/frames reflecting that a particular bulb of a traffic signal (in the United States (based on location of vehicle 10)) is as follows: red-red-red-yellow-green-green. [0062] Upon processing image thumbnail 340 by bulb detector 320, characteristics of each of the bulbs of the traffic signal may be output including, for example: bounding box locations…bulb category; and arrow direction. [0063] In terms of bulb category, characteristics such as color (e.g., red, green, yellow, or unknown) may be output. Other bulb categories includes type (round, arrow, unknown), as well as state (on, off, blinking, unknown). In terms of arrow (or angle) direction, bulb detector 320 may detect and output the direction of a detected arrow-type bulb, e.g.). [0070] Bulbwise detection may be performed to derive/generate certain bulbwise detections. In this example, those bulb detections 406 may result in determinations that traffic light 500 currently indicates/shows an active red bulb and an active left (green) arrow bulb. The bulb detections 406 may include other determinations, e.g., that some other bulbs are “off,” and/or unknown “?” bulb characteristics or states…In other words, the detected characteristics of state of a bulb can be determined, and compared to the bulb group configuration to arrive at a controlling bulb state for each relevant bulb group); a second object is determined in an analysis cycle, which represents a different signal state of the light signal system ([0061] image thumbnail 340 may include a traffic signal including two rows of bulbs. A first (top) row may include two non-active bulbs, and an active red bulb. A second (bottom) row may include an active green, left-turn arrow, an active green go-straight arrow, and an inactive bulb. 
[0062] Upon processing image thumbnail 340 by bulb detector 320, characteristics of each of the bulbs of the traffic signal may be output including, for example: bounding box locations…bulb category; and arrow direction. [0063] In terms of bulb category, characteristics such as color (e.g., red, green, yellow, or unknown) may be output). Chen does not teach deleting the saved first object, when a second object is determined. Li, in the same field of endeavor of object tracking for vehicular applications, teaches deleting the saved first object, when a second object is determined ([0113] In some non-limiting embodiments or aspects, autonomous vehicle 102 may replace the existing tracked objects with the first detected object(s) and/or the second detected object(s), which will become the existing tracked objects for future iterations of process 500 (e.g., in a next time step, the first/second tracked object from the previous time step are deemed to be the existing tracked objects). [0121] In some non-limiting embodiments or aspects, tracked object storage system 740 may be configured to at least one of update, add to, or replace (e.g., completely, partially, and/or the like) existing tracked object data associated with the at least one existing tracked object based on the first tracked object data, the second tracked object data, any combination thereof, and/or the like, as described herein). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Chen with the teachings of Li to delete the first saved object when a second object is determined to "initiate tracking of newly detected object(s)" [Li 0108]. Regarding claim 2, Chen and Li teach the method of claim 1. Chen further teaches the first object having a second identifier ([0062] Upon processing image thumbnail 340 by bulb detector 320, characteristics of each of the bulbs of the traffic signal may be output including, for example: bounding box locations…bulb category; and arrow direction. [0063] In terms of bulb category, characteristics such as color (e.g., red, green, yellow, or unknown) may be output. Other bulb categories includes type (round, arrow, unknown), as well as state (on, off, blinking, unknown). In terms of arrow (or angle) direction, bulb detector 320 may detect and output the direction of a detected arrow-type bulb, e.g.)), wherein the defined condition is that the signal state of the light signal system represented by the second object is a defined subsequent signal state with respect to the signal state of the light signal system represented by the first object ([0042] Such traffic signals may cycle through an illumination sequence that can go, e.g., from red (indicating oncoming vehicles should stop) to green (indicating oncoming vehicles may go, e.g., straight) to yellow (indicating oncoming vehicles should slow down to a stop). This cycle may then repeat. [0064] For example, consider a scenario where camera 316 captures, e.g., a sequence of images/frames reflecting that a particular bulb of a traffic signal (in the United States (based on location of vehicle 10)) is as follows: red-red-red-yellow-green-green. Based on this series of detections, optional bulbwise filter 324 may determine that the yellow determination is an outlier and thus, cannot be trusted. Thus, optional bulbwise filter 324 may negate or filter out the “yellow” detection in the “red-red-red-yellow-green-green” sequence of images). 
Chen does not teach deleting the first object when a characteristic of the second object satisfies a defined condition. Li teaches deleting the first object when a characteristic of the second object satisfies a defined condition ([0113] In some non-limiting embodiments or aspects, autonomous vehicle 102 may replace the existing tracked objects with the first detected object(s) and/or the second detected object(s), which will become the existing tracked objects for future iterations of process 500 (e.g., in a next time step, the first/second tracked object from the previous time step are deemed to be the existing tracked objects). [0121] In some non-limiting embodiments or aspects, tracked object storage system 740 may be configured to at least one of update, add to, or replace (e.g., completely, partially, and/or the like) existing tracked object data associated with the at least one existing tracked object based on the first tracked object data, the second tracked object data, any combination thereof, and/or the like, as described herein). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Chen with the teachings of Li to delete the first object when a characteristic of the second object satisfies a defined condition to “initiate tracking of newly detected object(s)” [Li 0108]. Regarding claim 3, Chen and Li teach the method of claim 1. Chen further teaches the first object having a second identifier ([0062] Upon processing image thumbnail 340 by bulb detector 320, characteristics of each of the bulbs of the traffic signal may be output including, for example: bounding box locations…bulb category; and arrow direction. [0063] In terms of bulb category, characteristics such as color (e.g., red, green, yellow, or unknown) may be output. Other bulb categories includes type (round, arrow, unknown), as well as state (on, off, blinking, unknown). In terms of arrow (or angle) direction, bulb detector 320 may detect and output the direction of a detected arrow-type bulb, e.g.)). Chen does not teach deleting the first object when an age of the second object is less than an age of the first object. Li teaches deleting the first object when an age of the second object is less than an age of the first object. ([0113] In some non-limiting embodiments or aspects, autonomous vehicle 102 may replace the existing tracked objects with the first detected object(s) and/or the second detected object(s), which will become the existing tracked objects for future iterations of process 500 (e.g., in a next time step, the first/second tracked object from the previous time step are deemed to be the existing tracked objects). [0121] In some non-limiting embodiments or aspects, tracked object storage system 740 may be configured to at least one of update, add to, or replace (e.g., completely, partially, and/or the like) existing tracked object data associated with the at least one existing tracked object based on the first tracked object data, the second tracked object data, any combination thereof, and/or the like, as described herein). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Chen with the teachings of Li to delete the first object having the second identifier when an age of the second object is less than an age of the first object to “initiate tracking of newly detected object(s)” [Li 0108]. 
Regarding claim 4, Chen and Li teach the method of claim 1. Chen further teaches the first object having a second identifier ([0062] Upon processing image thumbnail 340 by bulb detector 320, characteristics of each of the bulbs of the traffic signal may be output including, for example: bounding box locations…bulb category; and arrow direction. [0063] In terms of bulb category, characteristics such as color (e.g., red, green, yellow, or unknown) may be output. Other bulb categories includes type (round, arrow, unknown), as well as state (on, off, blinking, unknown). In terms of arrow (or angle) direction, bulb detector 320 may detect and output the direction of a detected arrow-type bulb, e.g.)). Chen does not teach deleting the first object when a determined size of the second object corresponds to a determined size of the first object within a defined tolerance threshold. Li teaches deleting the first object ([0113] In some non-limiting embodiments or aspects, autonomous vehicle 102 may replace the existing tracked objects with the first detected object(s) and/or the second detected object(s), which will become the existing tracked objects for future iterations of process 500 (e.g., in a next time step, the first/second tracked object from the previous time step are deemed to be the existing tracked objects)) when a determined size of the second object corresponds to a determined size of the first object within a defined tolerance threshold ([0009] In some non-limiting embodiments or aspects, the first height of each first detected object may be determined by querying the tile map based on the first position of the first detected object to provide a plurality of ground heights and determining a first ground height of the plurality of ground heights closest to an existing height of the respective existing tracked object as the first height of the first detected object). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Chen with the teachings of Li to delete the first object having the second identifier when the size of the first and second objects correspond in order to “initiate tracking of newly detected object(s)” [Li 0108]. Regarding claim 8, Chen and Li teach the method of claim 1. Chen further teaches the second object having a first identifier and the first object having a second identifier ([0061] image thumbnail 340 may include a traffic signal including two rows of bulbs. A first (top) row may include two non-active bulbs, and an active red bulb. A second (bottom) row may include an active green, left-turn arrow, an active green go-straight arrow, and an inactive bulb. [0062] Upon processing image thumbnail 340 by bulb detector 320, characteristics of each of the bulbs of the traffic signal may be output including, for example: bounding box locations…bulb category; and arrow direction. [0063] In terms of bulb category, characteristics such as color (e.g., red, green, yellow, or unknown) may be output. Other bulb categories includes type (round, arrow, unknown), as well as state (on, off, blinking, unknown). In terms of arrow (or angle) direction, bulb detector 320 may detect and output the direction of a detected arrow-type bulb, e.g.)). Chen does not teach saving the second object when the first object is deleted. 
Li teaches saving the second object when the first object is deleted ([0113] In some non-limiting embodiments or aspects, autonomous vehicle 102 may replace the existing tracked objects with the first detected object(s) and/or the second detected object(s), which will become the existing tracked objects for future iterations of process 500 (e.g., in a next time step, the first/second tracked object from the previous time step are deemed to be the existing tracked objects). [0121] In some non-limiting embodiments or aspects, tracked object storage system 740 may be configured to at least one of update, add to, or replace (e.g., completely, partially, and/or the like) existing tracked object data associated with the at least one existing tracked object based on the first tracked object data, the second tracked object data, any combination thereof, and/or the like, as described herein). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Chen with the teachings of Li to save the second object when the first object is deleted to “initiate tracking of newly detected object(s)” [Li 0108]. Regarding claim 9, Chen and Li teach the method of claim 1. Chen further teaches considering the saved object having the first identifier or the second identifier, in a further data evaluation ([0066] Returning again to FIG. 3A, at operation 304, temporal filtering may be applied to the current characteristics of each bulb in one or more bulb groups to refine the detected current characteristics. It should be understood this temporal filtering is applicable to a bulb group rather than an individual bulb (as is the case with optional bulbwise filter 324 described above). Regarding claim 10, Chen and Li teach the method of claim 1. Chen further teaches determining the object representing the signal state of the light signal system within a defined environmental estimation area ([0062] Upon processing image thumbnail 340 by bulb detector 320, characteristics of each of the bulbs of the traffic signal may be output including, for example: bounding box locations (one example of which is labeled 322a around the active red light in the top row of the traffic light); bulb category; and arrow direction. It should be understood that a boundary (or bounding box) corresponds to delineating region(s) for detecting whether or not some portion of a captured image of a view, e.g., views 200 (FIG. 2A) or 250 (FIG. 2B) includes a traffic light. Bounding boxes/sliding windows can be “slid” across an image and a classifier can check to see if the content within the bounding box is a traffic light(s)) when the first object having the second identifier is saved ([0063] In terms of bulb category, characteristics such as color (e.g., red, green, yellow, or unknown) may be output. Other bulb categories includes type (round, arrow, unknown), as well as state (on, off, blinking, unknown)). Regarding claim 11, Chen teaches a device including a processor configured to determine a signal state of a light signal system having several light signal emitters, wherein the light signal system has several signal states and a signal state is formed by one or more activated light signal emitters ([0009] In some embodiments, a vehicle comprises a camera, and a bulb-specific detector. 
In some embodiments, the bulb detector comprises a bulb detector component detecting current characteristics of each bulb of a plurality of bulbs comprising the traffic light based on images captured by the camera. [0031] It should be understood that a vehicle such as vehicle 10 may have some form of a drive force unit (e.g., an engine, motor generators (MGs)), a battery, a transmission, a memory, an electronic control unit (ECU), and/or other components not necessarily illustrated herein. [0034] An ECU 110 may include circuitry to control the above aspects of vehicle operation. ECU 150 may include, for example, a microcomputer that includes a one or more processing units (e.g., microprocessors), memory storage (e.g., RAM, ROM, etc.), and I/O devices. ECU 110 may execute instructions stored in memory to control one or more electrical systems or subsystems in the vehicle), the processor configured to: determine an object representing the signal state of the light signal system ([0055] each of the traffic light image thumbnails 318 can be provided as input to a bulb detector 320. Bulb detector 320 may be a machine learning model, such as deep neural network model trained to detect/perceive bulbs of traffic signals); save a determined first object having a first identifier when, in a first analysis cycle, it is detected that the first object represents the signal state of the light signal system ([0062] Upon processing image thumbnail 340 by bulb detector 320, characteristics of each of the bulbs of the traffic signal may be output including, for example: bounding box locations…bulb category; and arrow direction. [0063] In terms of bulb category, characteristics such as color (e.g., red, green, yellow, or unknown) may be output. Other bulb categories includes type (round, arrow, unknown), as well as state (on, off, blinking, unknown). In terms of arrow (or angle) direction, bulb detector 320 may detect and output the direction of a detected arrow-type bulb, e.g.); change an identifier of the saved first object from the first identifier to a second identifier when the first object is no longer detected in a subsequent analysis cycle, wherein the second identifier defines that the first object is only estimated and not currently detected ([0064] That is, camera 316 may be operating at some given frame rate, e.g., 20 fps, and thus, captures multiple images/frames of a traffic signal of interest. For example, consider a scenario where camera 316 captures, e.g., a sequence of images/frames reflecting that a particular bulb of a traffic signal (in the United States (based on location of vehicle 10)) is as follows: red-red-red-yellow-green-green. [0062] Upon processing image thumbnail 340 by bulb detector 320, characteristics of each of the bulbs of the traffic signal may be output including, for example: bounding box locations…bulb category; and arrow direction. [0063] In terms of bulb category, characteristics such as color (e.g., red, green, yellow, or unknown) may be output. Other bulb categories includes type (round, arrow, unknown), as well as state (on, off, blinking, unknown). In terms of arrow (or angle) direction, bulb detector 320 may detect and output the direction of a detected arrow-type bulb, e.g.). [0070] Bulbwise detection may be performed to derive/generate certain bulbwise detections. In this example, those bulb detections 406 may result in determinations that traffic light 500 currently indicates/shows an active red bulb and an active left (green) arrow bulb. 
The bulb detections 406 may include other determinations, e.g., that some other bulbs are “off,” and/or unknown “?” bulb characteristics or states…In other words, the detected characteristics of state of a bulb can be determined, and compared to the bulb group configuration to arrive at a controlling bulb state for each relevant bulb group); a second object is determined in an analysis cycle, which represents a different signal state of the light signal system ([0061] image thumbnail 340 may include a traffic signal including two rows of bulbs. A first (top) row may include two non-active bulbs, and an active red bulb. A second (bottom) row may include an active green, left-turn arrow, an active green go-straight arrow, and an inactive bulb. [0062] Upon processing image thumbnail 340 by bulb detector 320, characteristics of each of the bulbs of the traffic signal may be output including, for example: bounding box locations…bulb category; and arrow direction. [0063] In terms of bulb category, characteristics such as color (e.g., red, green, yellow, or unknown) may be output). Chen does not teach delete the saved first object, when a second object is determined. Li, in the same field of endeavor of object tracking for vehicular applications, teaches delete the saved first object, when a second object is determined ([0113] In some non-limiting embodiments or aspects, autonomous vehicle 102 may replace the existing tracked objects with the first detected object(s) and/or the second detected object(s), which will become the existing tracked objects for future iterations of process 500 (e.g., in a next time step, the first/second tracked object from the previous time step are deemed to be the existing tracked objects). [0121] In some non-limiting embodiments or aspects, tracked object storage system 740 may be configured to at least one of update, add to, or replace (e.g., completely, partially, and/or the like) existing tracked object data associated with the at least one existing tracked object based on the first tracked object data, the second tracked object data, any combination thereof, and/or the like, as described herein). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the device of Chen with the teachings of Li to delete the first saved object when a second object is determined to "initiate tracking of newly detected object(s)" [Li 0108]. Regarding claim 12, Chen teaches a non-transitory computer-readable medium on which is stored a computer program ([0086] the terms “computer program medium” and “computer usable medium” are used to generally refer to transitory or non-transitory media. Such media may be, e.g., memory 608, storage unit 620, media 614, and channel 628. These and other various forms of computer program media or computer usable media may be involved in carrying one or more sequences of one or more instructions to a processing device for execution. 
Such instructions embodied on the medium, are generally referred to as “computer program code” or a “computer program product”…When executed, such instructions might enable the computing component 600 to perform features or functions of the present application as discussed herein) for determining a signal state of a light signal system having several light signal emitters, wherein the light signal system has several signal states and a signal state is formed by one or more activated light signal emitters ([0003] a method comprises generating images of a traffic light, and detecting current characteristics of each bulb of a plurality of bulbs comprising the traffic light based on the images. The method further comprises temporally filtering the current characteristics of each bulb in one or more bulb groups to refine the detected current characteristics, and outputting one or more bulb-specific states regarding the one or more bulb groups), the computer program, when executed by a computer ([0034] An ECU 110 may include circuitry to control the above aspects of vehicle operation. ECU 150 may include, for example, a microcomputer that includes a one or more processing units (e.g., microprocessors), memory storage (e.g., RAM, ROM, etc.), and I/O devices. ECU 110 may execute instructions stored in memory to control one or more electrical systems or subsystems in the vehicle), causing the computer to perform the following steps: determining an object representing the signal state of the light signal system ([0055] each of the traffic light image thumbnails 318 can be provided as input to a bulb detector 320. Bulb detector 320 may be a machine learning model, such as deep neural network model trained to detect/perceive bulbs of traffic signals); saving a determined first object having a first identifier when, in a first analysis cycle, it is detected that the first object represents the signal state of the light signal system ([0062] Upon processing image thumbnail 340 by bulb detector 320, characteristics of each of the bulbs of the traffic signal may be output including, for example: bounding box locations…bulb category; and arrow direction. [0063] In terms of bulb category, characteristics such as color (e.g., red, green, yellow, or unknown) may be output. Other bulb categories includes type (round, arrow, unknown), as well as state (on, off, blinking, unknown). In terms of arrow (or angle) direction, bulb detector 320 may detect and output the direction of a detected arrow-type bulb, e.g.); changing an identifier of the saved first object from the first identifier to a second identifier when the first object is no longer detected in a subsequent analysis cycle, wherein the second identifier defines that the first object is only estimated and not currently detected ([0064] That is, camera 316 may be operating at some given frame rate, e.g., 20 fps, and thus, captures multiple images/frames of a traffic signal of interest. For example, consider a scenario where camera 316 captures, e.g., a sequence of images/frames reflecting that a particular bulb of a traffic signal (in the United States (based on location of vehicle 10)) is as follows: red-red-red-yellow-green-green. [0062] Upon processing image thumbnail 340 by bulb detector 320, characteristics of each of the bulbs of the traffic signal may be output including, for example: bounding box locations…bulb category; and arrow direction. 
[0063] In terms of bulb category, characteristics such as color (e.g., red, green, yellow, or unknown) may be output. Other bulb categories includes type (round, arrow, unknown), as well as state (on, off, blinking, unknown). In terms of arrow (or angle) direction, bulb detector 320 may detect and output the direction of a detected arrow-type bulb, e.g.). [0070] Bulbwise detection may be performed to derive/generate certain bulbwise detections. In this example, those bulb detections 406 may result in determinations that traffic light 500 currently indicates/shows an active red bulb and an active left (green) arrow bulb. The bulb detections 406 may include other determinations, e.g., that some other bulbs are “off,” and/or unknown “?” bulb characteristics or states…In other words, the detected characteristics of state of a bulb can be determined, and compared to the bulb group configuration to arrive at a controlling bulb state for each relevant bulb group); a second object is determined in an analysis cycle, which represents a different signal state of the light signal system ([0061] image thumbnail 340 may include a traffic signal including two rows of bulbs. A first (top) row may include two non-active bulbs, and an active red bulb. A second (bottom) row may include an active green, left-turn arrow, an active green go-straight arrow, and an inactive bulb. [0062] Upon processing image thumbnail 340 by bulb detector 320, characteristics of each of the bulbs of the traffic signal may be output including, for example: bounding box locations…bulb category; and arrow direction. [0063] In terms of bulb category, characteristics such as color (e.g., red, green, yellow, or unknown) may be output). Chen does not teach deleting the saved first object, when a second object is determined. Li, in the same field of endeavor of object tracking for vehicular applications, teaches deleting the saved first object, when a second object is determined ([0113] In some non-limiting embodiments or aspects, autonomous vehicle 102 may replace the existing tracked objects with the first detected object(s) and/or the second detected object(s), which will become the existing tracked objects for future iterations of process 500 (e.g., in a next time step, the first/second tracked object from the previous time step are deemed to be the existing tracked objects). [0121] In some non-limiting embodiments or aspects, tracked object storage system 740 may be configured to at least one of update, add to, or replace (e.g., completely, partially, and/or the like) existing tracked object data associated with the at least one existing tracked object based on the first tracked object data, the second tracked object data, any combination thereof, and/or the like, as described herein). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the medium of Chen with the teachings of Li to delete the first saved object when a second object is determined to "initiate tracking of newly detected object(s)" [Li 0108]. Claims 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Li and Artamonov (US20210201058A1). Regarding claim 5, Chen and Li teach the method of claim 1. 
Artamonov, in the same field of endeavor of determining traffic signals, teaches defining a first environmental estimation area with respect to the first object having the second identifier ([0063] The video processing system 220 may isolate one or more traffic signals in the videos 215. The videos 215 or still images 225 may be cropped to isolate the traffic signals. The locations of the traffic signals may have been previously recorded, such as while mapping an intersection or retrieved from a High Definition (HD) map or another type of a road graph…This pre-recorded information may be used to determine coordinates in the videos 215 corresponding to the traffic signals. In addition to or instead of this technique other methods may be used to isolate the traffic signals, such as using image recognition to determine a position of traffic signals in the videos 215 or still images 225. The individual bulbs of the traffic signal may be isolated. For example an image of a traffic signal having three bulbs may be split into three different images, each of the three images containing a single bulb. [0065] For each bulb of each traffic signal in the set of still images 225, the first MLA 230 may predict a status of the bulb. The predicted status may include a predicted likelihood for each possible color of the bulb, whether the bulb is on or off, and/or whether the status is unknown. The predicted status may be in the format of a multi-dimensional vector), in which the first object is expected in a next analysis cycle ([0066] The predicted status vectors 240 may be input to a second MLA 250. The MLA 250 may output a predicted state of the traffic signal 260. The predicted state 260 may indicated a predicted probability of each possible state of the traffic signal. [0077] Vectors used for training the second MLA 250 may be determined based on labels input by the viewer indicating changes to the state of the traffic signal. For example if the labels indicate that a traffic signal changes from red to green at 3 seconds into the video, all vectors generated for times between 0 and 3 seconds would indicate that the state is red, and all vectors generated for times after 3 seconds would indicate that the state is green). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Chen with the teachings of Artamonov to define an environmental estimation area for an object expected in the next analysis cycle so that "an image of a traffic signal having three bulbs may be split into three different images, each of the three images containing a single bulb" [Artamonov 0063] for "input to a machine learning algorithm (MLA). The MLA may output a predicted status of each bulb of the traffic signal in each of the still images" [Artamonov 0009]. Regarding claim 6, Chen, Li, and Artamonov teach the method of claim 5. Artamonov teaches defining a second environmental estimation area with respect to the first object having the second identifier ([0063] The video processing system 220 may isolate one or more traffic signals in the videos 215. The videos 215 or still images 225 may be cropped to isolate the traffic signals. The locations of the traffic signals may have been previously recorded, such as while mapping an intersection or retrieved from a High Definition (HD) map or another type of a road graph…This pre-recorded information may be used to determine coordinates in the videos 215 corresponding to the traffic signals. 
In addition to or instead of this technique other methods may be used to isolate the traffic signals, such as using image recognition to determine a position of traffic signals in the videos 215 or still images 225. The individual bulbs of the traffic signal may be isolated. For example an image of a traffic signal having three bulbs may be split into three different images, each of the three images containing a single bulb. [0065] For each bulb of each traffic signal in the set of still images 225, the first MLA 230 may predict a status of the bulb. The predicted status may include a predicted likelihood for each possible color of the bulb, whether the bulb is on or off, and/or whether the status is unknown. The predicted status may be in the format of a multi-dimensional vector), in which the second object is expected in a next analysis cycle, wherein the second environmental estimation area in which the second object is expected is defined, considering a current signal state of the light signal system represented by the first object and depending on a specific signal state subsequent thereto, which is represented by the second object ([0066] The predicted status vectors 240 may be input to a second MLA 250. The MLA 250 may output a predicted state of the traffic signal 260. The predicted state 260 may indicated a predicted probability of each possible state of the traffic signal. [0077] Vectors used for training the second MLA 250 may be determined based on labels input by the viewer indicating changes to the state of the traffic signal. For example if the labels indicate that a traffic signal changes from red to green at 3 seconds into the video, all vectors generated for times between 0 and 3 seconds would indicate that the state is red, and all vectors generated for times after 3 seconds would indicate that the state is green). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Chen with the teachings of Artamonov to define an environmental estimation area for an object expected in the next analysis cycle so that "an image of a traffic signal having three bulbs may be split into three different images, each of the three images containing a single bulb" [Artamonov 0063] for "input to a machine learning algorithm (MLA). The MLA may output a predicted status of each bulb of the traffic signal in each of the still images" [Artamonov 0009]. Regarding claim 7, Chen and Li teach the method of claim 1. Chen further teaches the first object having a second identifier ([0062] Upon processing image thumbnail 340 by bulb detector 320, characteristics of each of the bulbs of the traffic signal may be output including, for example: bounding box locations…bulb category; and arrow direction. [0063] In terms of bulb category, characteristics such as color (e.g., red, green, yellow, or unknown) may be output. Other bulb categories includes type (round, arrow, unknown), as well as state (on, off, blinking, unknown). In terms of arrow (or angle) direction, bulb detector 320 may detect and output the direction of a detected arrow-type bulb, e.g.)). Chen does not teach deleting the first object having the second identifier when the second object is determined in a defined environmental estimation area with respect to the first object. 
Li teaches deleting the first object when the second object is determined ([0113] In some non-limiting embodiments or aspects, autonomous vehicle 102 may replace the existing tracked objects with the first detected object(s) and/or the second detected object(s), which will become the existing tracked objects for future iterations of process 500 (e.g., in a next time step, the first/second tracked object from the previous time step are deemed to be the existing tracked objects). [0121] In some non-limiting embodiments or aspects, tracked object storage system 740 may be configured to at least one of update, add to, or replace (e.g., completely, partially, and/or the like) existing tracked object data associated with the at least one existing tracked object based on the first tracked object data, the second tracked object data, any combination thereof, and/or the like, as described herein). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Chen with the teachings of Li to delete the first object when a second object is determined to "initiate tracking of newly detected object(s)" [Li 0108]. Artamonov teaches the first object having the second identifier when the second object is determined in a defined environmental estimation area with respect to the first object ([0063] The video processing system 220 may isolate one or more traffic signals in the videos 215. The videos 215 or still images 225 may be cropped to isolate the traffic signals. The locations of the traffic signals may have been previously recorded, such as while mapping an intersection or retrieved from a High Definition (HD) map or another type of a road graph…This pre-recorded information may be used to determine coordinates in the videos 215 corresponding to the traffic signals. In addition to or instead of this technique other methods may be used to isolate the traffic signals, such as using image recognition to determine a position of traffic signals in the videos 215 or still images 225. The individual bulbs of the traffic signal may be isolated. For example an image of a traffic signal having three bulbs may be split into three different images, each of the three images containing a single bulb. [0065] For each bulb of each traffic signal in the set of still images 225, the first MLA 230 may predict a status of the bulb. The predicted status may include a predicted likelihood for each possible color of the bulb, whether the bulb is on or off, and/or whether the status is unknown. The predicted status may be in the format of a multi-dimensional vector). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Chen with the teachings of Artamonov to define environmental estimation areas for objects so that "an image of a traffic signal having three bulbs may be split into three different images, each of the three images containing a single bulb" [Artamonov 0063] for "input to a machine learning algorithm (MLA). The MLA may output a predicted status of each bulb of the traffic signal in each of the still images" [Artamonov 0009]. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jacqueline R Zak whose telephone number is (571) 272-4077. The examiner can normally be reached M-F 9-5. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JACQUELINE R ZAK/
Examiner, Art Unit 2666

/EMILY C TERRELL/
Supervisory Patent Examiner, Art Unit 2666
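
The contested limitation in claim 1 amounts to an object-lifecycle rule for a tracked signal state: a determined signal-state object is saved under a first identifier while it is detected, its identifier is changed to a second identifier (meaning "only estimated, not currently detected") when it drops out of a later analysis cycle, and the saved object is deleted once a second object representing a different signal state is determined. Below is a minimal sketch of one reading of that lifecycle; it is not code from the application, Chen, or Li, and all class and field names are hypothetical.

```python
# Illustrative reading of the claim 1 object lifecycle; names are hypothetical.
from dataclasses import dataclass

DETECTED = "first_identifier"    # object currently detected
ESTIMATED = "second_identifier"  # object only estimated, not currently detected

@dataclass
class SignalStateObject:
    state: str        # e.g. "red", "green_left_arrow"
    identifier: str

class SignalStateTracker:
    def __init__(self):
        self.saved: SignalStateObject | None = None

    def analysis_cycle(self, detected_state: str | None):
        if detected_state is None:
            # First object no longer detected: demote it to "estimated".
            if self.saved is not None:
                self.saved.identifier = ESTIMATED
        elif self.saved is None or detected_state == self.saved.state:
            # Same signal state (re)detected: save/keep it as detected.
            self.saved = SignalStateObject(detected_state, DETECTED)
        else:
            # A second object representing a different signal state is
            # determined: delete the saved first object and save the new one.
            self.saved = SignalStateObject(detected_state, DETECTED)

# Example: red detected, then lost (estimated), then green determined.
tracker = SignalStateTracker()
tracker.analysis_cycle("red")
tracker.analysis_cycle(None)
print(tracker.saved)   # state='red', identifier=ESTIMATED
tracker.analysis_cycle("green")
print(tracker.saved)   # state='green', identifier=DETECTED (red was deleted)
```

In the response to arguments above, the examiner maps Chen's "off" and "unknown" bulb states onto the second-identifier ("estimated") condition, which is the point Applicant disputes.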

Prosecution Timeline

Jan 19, 2023: Application Filed
Apr 16, 2025: Non-Final Rejection — §103
Jul 18, 2025: Response Filed
Aug 27, 2025: Final Rejection — §103
Dec 02, 2025: Response after Non-Final Action
Dec 17, 2025: Request for Continued Examination
Jan 15, 2026: Response after Non-Final Action
Mar 09, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586340
PIXEL PERSPECTIVE ESTIMATION AND REFINEMENT IN AN IMAGE
2y 5m to grant; granted Mar 24, 2026
Patent 12462343
MEDICAL DIAGNOSTIC APPARATUS AND METHOD FOR EVALUATION OF PATHOLOGICAL CONDITIONS USING 3D OPTICAL COHERENCE TOMOGRAPHY DATA AND IMAGES
2y 5m to grant; granted Nov 04, 2025
Patent 12373946
ASSAY READING METHOD
2y 5m to grant; granted Jul 29, 2025
Study what changed to get past this examiner. Based on 3 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67%
With Interview: 55% (-11.4%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
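
As a sanity check on these projections: the 67% is simply the career allow rate (8 of 12 resolved), and the with-interview figure matches applying the -11.4% lift as a percentage-point delta. Below is a minimal sketch of that arithmetic under those assumptions; the tool's exact rounding rules are not shown here.

```python
# Back-of-the-envelope check of the projection figures above.
# Assumption: the interview "lift" is a percentage-point delta, not a
# relative multiplier; displayed values appear to be rounded to whole percent.
granted, resolved = 8, 12
career_allow_rate = 100 * granted / resolved             # 66.7% -> shown as 67%
interview_lift_pp = -11.4                                # percentage points
with_interview = career_allow_rate + interview_lift_pp   # 55.3% -> shown as 55%

print(f"Career allow rate: {career_allow_rate:.1f}%")
print(f"With interview:    {with_interview:.1f}%")
```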
