Prosecution Insights
Last updated: April 19, 2026
Application No. 18/373,675

VEHICLE AND OBJECT RECOGNITION METHOD OF THE SAME

Status: Final Rejection (§101, §103)
Filed: Sep 27, 2023
Examiner: WELLS, HEATH E
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: Kia Corporation
OA Round: 2 (Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 5m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 75% (58 granted / 77 resolved; +13.3% vs TC avg; above average)
Interview Lift: +18.1% among resolved cases with interview (strong)
Avg Prosecution: 3y 5m (typical timeline)
Total Applications: 123 across all art units (46 currently pending)
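
For readers who want to sanity-check the headline numbers, here is a minimal sketch of how they appear to be derived from the card data above. The additive interview adjustment and the rounding are assumptions about the dashboard's methodology, not a published formula.

```python
# Sketch: reproducing the dashboard's headline examiner statistics.
granted, resolved = 58, 77
allow_rate = granted / resolved                # 0.753 -> shown as 75%
interview_lift = 0.181                         # +18.1 points per the card
with_interview = allow_rate + interview_lift   # assumed additive adjustment

tc_avg = allow_rate - 0.133                    # card reads "+13.3% vs TC avg"
print(f"career allow rate:  {allow_rate:.1%}")      # 75.3%
print(f"with interview:     {with_interview:.1%}")  # 93.4% -> shown as 93%
print(f"implied TC average: {tc_avg:.1%}")          # 62.0%
```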

Statute-Specific Performance

§101: 17.8% (-22.2% vs TC avg)
§103: 62.8% (+22.8% vs TC avg)
§102: 2.4% (-37.6% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 77 resolved cases.

Office Action

Statutes addressed: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

The reply filed on 29 December 2025 has been entered. Applicant's arguments with respect to claims 1-18 have been considered but are moot in view of the new ground(s) of rejection necessitated by the amendments. An applicant interview IS recommended in this case. Claims 1-18 are pending in this application and have been considered below.

Priority

Receipt is acknowledged that the application claims priority to foreign application No. KR10-2022-0123363, dated 28 September 2022. Copies of the certified papers required by 37 CFR 1.55 have been received. Priority is acknowledged under 35 U.S.C. 119(a)-(d) and 37 CFR 1.55.

Information Disclosure Statement

The IDS dated 27 September 2023, which has been previously considered, remains of record in the application file.

1st Claim Rejections - 35 USC § 101

Claims 1 and 17 have been amended. Claims 1 and 17 now state inventions that fit into the statutory category of methods. The first rejection of claims 1-7 and 17-18 under 35 USC 101 is withdrawn.

2nd Claim Rejections - 35 USC § 101

Claims 1 and 17 have been amended. Claims 1 and 17 now claim subject matter that is not an abstract idea, as they now state controlling a vehicle, which cannot reasonably be performed in the human mind. The rejection of claims 1-7 and 17-18 under 35 USC 101 for being an abstract idea is withdrawn.

1st Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-2, 6, 8-10, 14 and 16 (all claims except 3-5, 7, 11-13, 15 and 17-18) are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2016/0252905 A1 (Tian et al.) in view of European Patent Publication EP 3 633 541 A1 (Asvatha Narayanan et al.) (published 4 August 2020). The references are listed in a PTO-892 from the Office Action in which they are first used.

Claim 1

Regarding claim 1, Tian et al. teach a method performed by at least one controller of a vehicle ("directed to detecting and responding to emergency vehicles (EVs)," paragraph [0014]), the method comprising:

[Figure: Tian et al. Fig. 4, showing recognition of a police vehicle.]

determining, based on a video, a plurality of image frames associated with surroundings of the vehicle ("capture images of its surrounding environment," paragraph [0014]);

detecting, based on the plurality of image frames and by performing an object recognition process, an image of an emergency vehicle equipped with a warning light ("The captured images may be analyzed by one or more computing devices. The analysis may include detecting light in each of the captured images and determining whether the detected light is likely associated with an EV based on different templates," paragraph [0014]);

performing, based on a first image frame of the plurality of image frames, a first warning light state recognition associated with the warning light ("The captured images may be analyzed by one or more computing devices. The analysis may include detecting light in each of the captured images and determining whether the detected light is likely associated with an EV based on different templates," paragraph [0014]);

performing, based on a second image frame of the plurality of image frames, a second warning light state recognition associated with the warning light ("Subsequently, during a second detection stage, the computing device 110 may more accurately determine whether any of the identified light sources correspond to the characteristics of an EV," paragraph [0048]);

performing a third warning light state recognition based on the first warning light state recognition and the second warning light state recognition ("the one or more computing devices may perform analyses on the light's spatial configuration and flash pattern to further determine whether the detected light corresponds to an EV," paragraph [0014]);

generating, based on a result of the third warning light state recognition, a signal for controlling the vehicle ("Based on the determination, the method comprises maneuvering, using the one or more computing devices, a vehicle to yield in response to at least one of the one or more flashing light sources and the particular type of the emergency vehicle," paragraph [0002], where making a vehicle yield teaches a signal for controlling the vehicle); and

controlling, based on the signal, the vehicle ("The computing device 110 may control the direction and speed of the vehicle by controlling various components," paragraph [0036]).

[Figure: Asvatha Narayanan et al. Fig. 5, showing amplitudes of flashing lights over time.]

Tian et al. are not relied upon to explicitly teach the claimed accumulated value of a weight. However, Asvatha Narayanan et al. teach wherein a warning light state of the third warning light state recognition indicates whether the emergency vehicle is in a warning situation based on an accumulated value of a weight for the plurality of image frames ("An EV colour value for each of the plurality of image frames is then determined based on the sum of all the first values for each image frame," Col. 2, lines 26-29) exceeding a predetermined threshold ("assigning to a pixel a first value if the EV colour component exceeds a predefined threshold value 10 and a second value if the EV colour component does not," Col. 2, lines 8-10).

Therefore, taking the teachings of Tian et al. and Asvatha Narayanan et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify "Real-Time Active Emergency Vehicle Detection" as taught by Tian et al. to use "Emergency Vehicle Detection" as taught by Asvatha Narayanan et al. The suggestion/motivation for doing so would have been that "there is a demand for image-based methods and device for detecting the presence of emergency vehicles which are efficient that that appropriate actions may be promptly taken," as noted by the Asvatha Narayanan et al. disclosure in paragraph [0003]. The combination is further motivated because it would predictably have a better recognition percentage, as there is a reasonable expectation that emergency vehicles may be obscured in traffic, and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

The rejection of method claim 1 above applies mutatis mutandis to the corresponding limitations of apparatus claim 9, while noting that the rejection above cites to both device and method disclosures. Claim 9 is mapped below for clarity of the record and to specify any new limitations not included in claim 1.

Claim 2

Regarding claim 2, Tian et al. teach the method of claim 1, wherein the performing the object recognition process comprises extracting a warning light area in the first image frame and the second image frame of the plurality of image frames, and wherein the first image frame and the second image frame are consecutive ("FIGS. 4A-C depict three consecutive images (or frames) of the same intersection captured by the one or more cameras 184 of vehicle 100," paragraph [0058]).

Claim 6

Regarding claim 6, Tian et al. teach the method of claim 2, further comprising identifying three patches arranged horizontally in the warning light area ("FIGS. 4A-C depict three images of the same intersection 402 that are used to determine whether the light source 414 is flashing, though more or less images may be used in other flash detection scenarios," paragraph [0061], where light source 414 in Fig. 4 is divided into 4 patches, which teaches at least three horizontal patches; also see "Further, the horizontal configuration of the light 344 may also indicate that the light may be associated with a PV," paragraph [0056]).
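To make the accumulated-weight limitation concrete, here is a minimal sketch of the kind of multi-frame decision claims 1, 3, 4 and 7 describe: per-frame results from a first (deep-learning-based) and a second (computer-vision-based) recognition are fused into a weight, the weight is accumulated over consecutive frames, and a warning situation is declared once the accumulated value exceeds a threshold. The fusion rule, the weights, and the threshold are hypothetical placeholders, not the applicant's or the cited references' implementations.

```python
from dataclasses import dataclass

@dataclass
class FrameResult:
    dl_on: bool   # first recognition, deep-learning-based (cf. claim 3)
    cv_on: bool   # second recognition, computer-vision-based (cf. claim 4)

def warning_light_active(frames: list[FrameResult],
                         weight_on: float = 1.0,
                         weight_off: float = -0.5,
                         threshold: float = 3.0) -> bool:
    """Third recognition (cf. claim 7): accumulate a per-frame weight from the
    first and second recognition results and report a warning situation once
    the accumulated value exceeds the predetermined threshold."""
    accumulated = 0.0
    for f in frames:
        # Hypothetical fusion: agreement on "on" adds full weight,
        # disagreement adds half, and "off" frames decay the accumulator.
        if f.dl_on and f.cv_on:
            accumulated += weight_on
        elif f.dl_on or f.cv_on:
            accumulated += weight_on / 2
        else:
            accumulated += weight_off
        accumulated = max(accumulated, 0.0)
        if accumulated > threshold:
            return True   # emergency vehicle judged to be in a warning situation
    return False

# A flashing light alternates on/off across consecutive frames but still gains
# weight faster than it decays, so it eventually crosses the threshold.
flashing = [FrameResult(True, True), FrameResult(False, False)] * 6
print(warning_light_active(flashing))  # True under these placeholder weights
```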
Claim 8

Regarding claim 8, Tian et al. teach the method of claim 1, further comprising outputting the result of the third warning light state recognition as data for an autonomous driving control of the vehicle ("a perception system of an autonomous vehicle may capture images of its surrounding environment to detect and respond to an approaching EV," paragraph [0014]).

Claim 9

Regarding claim 9, Tian et al. teach a vehicle ("a perception system of an autonomous vehicle may capture images of its surrounding environment to detect and respond to an approaching EV," paragraph [0014]) comprising: a camera configured to capture a video comprising a plurality of image frames associated with surroundings of the vehicle ("FIG. 4C is yet another example image 460 of the intersection 402 that the camera 184 of vehicle 100 captures subsequent to image 430," paragraph [0060]); and a controller ("Alternatively, the one or more processors may be a dedicated device such as an ASIC or other hardware-based processor, such as a field programmable gate array (FPGA)," paragraph [0028], where the processor is a controller) configured to: detect, based on the plurality of image frames and by performing an object recognition process, an image of an emergency vehicle equipped with a warning light ("The captured images may be analyzed by one or more computing devices. The analysis may include detecting light in each of the captured images and determining whether the detected light is likely associated with an EV based on different templates," paragraph [0014]); perform, based on a first image frame of the plurality of image frames ("capture images of its surrounding environment," paragraph [0014]), a first warning light state recognition associated with the warning light ("The captured images may be analyzed by one or more computing devices. The analysis may include detecting light in each of the captured images and determining whether the detected light is likely associated with an EV based on different templates," paragraph [0014]); perform, based on a second image frame of the plurality of image frames, a second warning light state recognition associated with the warning light ("the one or more computing devices may determine whether the detected light is flashing," paragraph [0014]); and perform a third warning light state recognition based on the first warning light state recognition and the second warning light state recognition ("the one or more computing devices may perform analyses on the light's spatial configuration and flash pattern to further determine whether the detected light corresponds to an EV," paragraph [0014]).

Tian et al. are not relied upon to explicitly teach the claimed accumulated value of a weight. However, Asvatha Narayanan et al. teach wherein a warning light state of the third warning light state recognition indicates whether the emergency vehicle is in a warning situation based on an accumulated value of a weight for the plurality of image frames ("An EV colour value for each of the plurality of image frames is then determined based on the sum of all the first values for each image frame," Col. 2, lines 26-29) exceeding a predetermined threshold ("assigning to a pixel a first value if the EV colour component exceeds a predefined threshold value 10 and a second value if the EV colour component does not," Col. 2, lines 8-10). Tian et al. and Asvatha Narayanan et al. are combined as per claim 1.

Claim 10

Regarding claim 10, Tian et al. teach the vehicle of claim 9, wherein the controller is further configured to extract, for the object recognition process, a warning light area in the first image frame and the second image frame of the plurality of image frames, and wherein the first image frame and the second image frame are consecutive ("FIGS. 4A-C depict three consecutive images (or frames) of the same intersection captured by the one or more cameras 184 of vehicle 100," paragraph [0058]).

Claim 14

Regarding claim 14, Tian et al. teach the vehicle of claim 10, wherein the controller is further configured to identify three patches arranged horizontally in the warning light area ("FIGS. 4A-C depict three images of the same intersection 402 that are used to determine whether the light source 414 is flashing, though more or less images may be used in other flash detection scenarios," paragraph [0061], where light source 414 in Fig. 4 is divided into 4 patches, which teaches at least three horizontal patches; also see "Further, the horizontal configuration of the light 344 may also indicate that the light may be associated with a PV," paragraph [0056]).

Claim 16

Regarding claim 16, Tian et al. teach the vehicle of claim 9, wherein the controller is further configured to output a result of the third warning light state recognition as data for an autonomous driving control of the vehicle ("a perception system of an autonomous vehicle may capture images of its surrounding environment to detect and respond to an approaching EV," paragraph [0014]).

2nd Claim Rejections - 35 USC § 103

Claims 17-18 (the third independent claim and its dependent claim) are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2016/0252905 A1 (Tian et al.) in view of the non-patent literature publication "Emergency Vehicle Recognition and Classification Method Using HSV Color Segmentation" (Razalli et al.) (published 29 February 2020). The references are listed in a PTO-892 from the Office Action in which they are first used.

Claim 17

Regarding claim 17, Tian et al. teach a method performed by at least one controller of a vehicle ("directed to detecting and responding to emergency vehicles (EVs)," paragraph [0014]), the method comprising: determining, based on a video, a plurality of image frames associated with surroundings of the vehicle ("capture images of its surrounding environment," paragraph [0014]); detecting, based on the plurality of image frames and by performing an object recognition process, an image of an emergency vehicle equipped with a warning light ("The captured images may be analyzed by one or more computing devices. The analysis may include detecting light in each of the captured images and determining whether the detected light is likely associated with an EV based on different templates," paragraph [0014]); performing, based on an image frame of the plurality of image frames, a warning light state recognition associated with the warning light ("The captured images may be analyzed by one or more computing devices. The analysis may include detecting light in each of the captured images and determining whether the detected light is likely associated with an EV based on different templates," paragraph [0014]); generating, based on a result of the warning light state recognition, a signal for controlling the vehicle ("Based on the determination, the method comprises maneuvering, using the one or more computing devices, a vehicle to yield in response to at least one of the one or more flashing light sources and the particular type of the emergency vehicle," paragraph [0002], where making a vehicle yield teaches a signal for controlling the vehicle); and controlling, based on the signal, the vehicle ("The computing device 110 may control the direction and speed of the vehicle by controlling various components," paragraph [0036]), and wherein a warning light state of the warning light state recognition indicates whether the emergency vehicle is in a warning situation ("Based on the spatial configuration of the light source 414 and/or the comparison between the flash pattern of the light source 414 with one or more classifiers stored in memory 130, the computing device 110 may determine that the object 412 is a police vehicle (PV). Upon determining that the flashing light source corresponds to a PV, the autonomous vehicle may appropriately respond by slowing down and/or pulling over to the side of the road," paragraph [0062], where "appropriately respond" teaches the emergency vehicle is in a warning situation).

Tian et al. are not relied upon to explicitly teach the claimed warning light state recognition. However, Razalli et al. teach wherein the warning light state recognition comprises: determining a plurality of patches in a warning light area associated with the warning light ("detection and extraction of vehicle in an image, image segmentation to find the location of emergency vehicle siren light, light extraction," page 286, paragraph 1); performing, based on the plurality of patches, red-green-blue (RGB)-to-hue-saturation-value (HSV) conversion ("The created database is a set of images representing the vehicle siren light (red and blue) in different conditions. This are then used to extract values of the HSV color parameters in order to have an exact value of illumination effect of the siren light," page 286, paragraph 3); determining, based on pixels of which saturation (S) is greater than or equal to a predetermined value for each of the plurality of patches subjected to the RGB-to-HSV conversion, an RGB histogram ("The created histogram is the selected Region of Interest (ROI) of HSV color model in order to create six different histograms data value (Hue, Saturation and value for red and blue light)," page 286, paragraph 3); selecting a patch having the highest ratio of brightness in the RGB histogram among the plurality of patches subjected to the RGB-to-HSV conversion ("object segmentation and slicing algorithm is applied to locate the position of emergency siren light," page 286, paragraph 2); and determining that the warning light is in an on state based on a value of at least one of red (R), green (G), and blue (B) channels in the RGB histogram of the selected patch being greater than or equal to a predetermined threshold ("In the other hands, Red and Blue have a wide range of color distance corresponding to the emergency siren light condition," page 286, paragraph 3, where the siren light condition is an on state based on a value greater than a threshold).

Therefore, taking the teachings of Tian et al. and Razalli et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify "Real-Time Active Emergency Vehicle Detection" as taught by Tian et al. to use "Emergency Vehicle Recognition and Classification Method Using HSV Color Segmentation" as taught by Razalli et al. The suggestion/motivation for doing so would have been that "it is very much necessary to design a vehicle type recognition system, such as retrieving emergency vehicles from the traffic surveillance video camera to avoid the above casualties thus preventing accidents, collisions, and obtaining safer traffic," as noted by the Razalli et al. disclosure on page 284 in paragraph [0003]. The combination is further motivated because it would predictably have a better recognition percentage, as there is a reasonable expectation that it may be difficult to tell when vehicle lights turn on and off, and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

Claim 18

Regarding claim 18, Tian et al. teach the method of claim 17, further comprising identifying three patches arranged horizontally in the warning light area, wherein the plurality of patches comprises the three patches ("FIGS. 4A-C depict three images of the same intersection 402 that are used to determine whether the light source 414 is flashing, though more or less images may be used in other flash detection scenarios," paragraph [0061], where light source 414 in Fig. 4 is divided into 4 patches, which teaches at least three horizontal patches; also see "Further, the horizontal configuration of the light 344 may also indicate that the light may be associated with a PV," paragraph [0056]).
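The claim 17 recognition steps read like a small image-processing pipeline, and a short sketch helps keep them straight: split the warning light area into three horizontal patches, convert each patch from RGB to HSV, gate pixels on saturation, pick the patch with the highest ratio of bright pixels, and test its channels against a threshold. This is one plausible reading under stated assumptions: per-channel means of the gated pixels stand in for the claimed RGB histogram, and `s_min` and `channel_threshold` are invented values. It is not the applicant's or Razalli et al.'s code.

```python
import numpy as np

def warning_light_on(area_rgb: np.ndarray,
                     s_min: float = 0.4,
                     channel_threshold: int = 200) -> bool:
    """One frame of the claim 17 pipeline on a uint8 RGB warning-light crop."""
    h, w, _ = area_rgb.shape
    # Three patches arranged horizontally in the warning light area (claim 18).
    patches = [area_rgb[:, i * w // 3:(i + 1) * w // 3] for i in range(3)]

    best_pixels, best_ratio = None, -1.0
    for patch in patches:
        rgb = patch.astype(np.float64)
        mx, mn = rgb.max(axis=2), rgb.min(axis=2)
        # HSV saturation: S = (max - min) / max, with S = 0 for black pixels.
        sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1.0), 0.0)
        pixels = patch[sat >= s_min]      # saturation gate; shape (n, 3) RGB rows
        if pixels.size == 0:
            continue
        # "Ratio of brightness": share of gated pixels whose peak channel is bright.
        ratio = float((pixels.max(axis=1) >= channel_threshold).mean())
        if ratio > best_ratio:
            best_pixels, best_ratio = pixels, ratio

    if best_pixels is None:
        return False
    # On-state: at least one of the R, G, B channel means of the selected patch
    # reaches the predetermined threshold (means stand in for the histogram).
    return bool((best_pixels.mean(axis=0) >= channel_threshold).any())

# Usage sketch: crop the detected warning light area from a frame first,
# e.g. warning_light_on(frame[y0:y1, x0:x1]) on a uint8 RGB image.
```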
3rd Claim Rejections - 35 USC § 103

Claims 3-5, 7, 11-13 and 15 (all remaining claims) are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2016/0252905 A1 (Tian et al.) and European Patent Publication EP 3 633 541 A1 (Asvatha Narayanan et al.) (published 4 August 2020) in view of the non-patent literature publication "Emergency Vehicle Recognition and Classification Method Using HSV Color Segmentation" (Razalli et al.) (published 29 February 2020). The references are listed in a PTO-892 from the Office Action in which they are first used.

Claim 3

Regarding claim 3, Tian et al. and Asvatha Narayanan et al. teach the method of claim 1, as noted above.

[Figure: Razalli et al. Fig. 2, showing extracting color values of emergency lights.]

Tian et al. and Asvatha Narayanan et al. are not relied upon to explicitly teach the claimed on/off state of the warning light. However, Razalli et al. teach wherein the performing the first warning light state recognition comprises determining, based on a deep learning-based warning light state recognition process, an on/off state of the warning light ("Some of the latest approach use to detected a movement vehicle or object is using optical flow such as implemented in [5]," page 285, paragraph 1, where optical flow is a deep learning-based recognition process).

Therefore, taking the teachings of Tian et al., Asvatha Narayanan et al. and Razalli et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify "Real-Time Active Emergency Vehicle Detection" as taught by Tian et al. and "Emergency Vehicle Detection" as taught by Asvatha Narayanan et al. to use "Emergency Vehicle Recognition and Classification Method Using HSV Color Segmentation" as taught by Razalli et al. The suggestion/motivation for doing so would have been that "it is very much necessary to design a vehicle type recognition system, such as retrieving emergency vehicles from the traffic surveillance video camera to avoid the above casualties thus preventing accidents, collisions, and obtaining safer traffic," as noted by the Razalli et al. disclosure on page 284 in paragraph [0003]. The combination is further motivated because it would predictably have a better recognition percentage, as there is a reasonable expectation that it may be difficult to tell when vehicle lights turn on and off, and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

Claim 4

Regarding claim 4, Tian et al. teach the method of claim 1, as noted above. Tian et al. and Asvatha Narayanan et al. are not relied upon to explicitly teach the claimed on/off state of the warning light. However, Razalli et al. teach wherein the performing the second warning light state recognition comprises determining, based on a computer vision-based warning light state recognition process, an on/off state of the warning light ("In this work, RGB and HSV color model was used because it allows identifying the specific features for the concrete interested color and can reduce the illumination effects," page 286, paragraph 1). Tian et al., Asvatha Narayanan et al. and Razalli et al. are combined as per claim 3.

Claim 5

Regarding claim 5, Tian et al. teach the method of claim 2, as noted above. Tian et al. and Asvatha Narayanan et al. are not relied upon to explicitly teach determining that the warning light is in an on state. However, Razalli et al. teach wherein the performing the second warning light state recognition comprises: determining a plurality of patches in the warning light area ("detection and extraction of vehicle in an image, image segmentation to find the location of emergency vehicle siren light, light extraction," page 286, paragraph 1); performing red-green-blue (RGB)-to-hue-saturation-value (HSV) conversion on each of the plurality of patches ("The created database is a set of images representing the vehicle siren light (red and blue) in different conditions. This are then used to extract values of the HSV color parameters in order to have an exact value of illumination effect of the siren light," page 286, paragraph 3); storing, as an RGB histogram, information of pixels of which saturation (S) is greater than or equal to a predetermined value for each of the plurality of patches subjected to the RGB-to-HSV conversion ("The created histogram is the selected Region of Interest (ROI) of HSV color model in order to create six different histograms data value (Hue, Saturation and value for red and blue light)," page 286, paragraph 3); selecting a patch having the highest ratio of brightness in the RGB histogram among the plurality of patches subjected to the RGB-to-HSV conversion ("object segmentation and slicing algorithm is applied to locate the position of emergency siren light," page 286, paragraph 2); and determining that the warning light is in an on state based on a value of at least one of red (R), green (G), and blue (B) channels in the RGB histogram of the selected patch being greater than or equal to a predetermined threshold ("In the other hands, Red and Blue have a wide range of color distance corresponding to the emergency siren light condition," page 286, paragraph 3, where the siren light condition is an on state based on a value greater than a threshold). Tian et al., Asvatha Narayanan et al. and Razalli et al. are combined as per claim 3.

Claim 7

Regarding claim 7, Tian et al. teach the method of claim 2, as noted above. Tian et al. and Asvatha Narayanan et al. are not relied upon to explicitly teach determining a warning light state weight. However, Razalli et al. teach wherein the performing the third warning light state recognition comprises: determining a weight based on a state determination result from the first warning light state recognition and the second warning light state recognition in each of the consecutive image frames ("The proposed work will use the color range data obtained from previously as a reference to classify the detected vehicle as emergency vehicle. In SVM based classification, each data point in the dataset is presented as a k-dimensional vector with n-ratios," page 287, paragraph 1); and determining that the warning light is in an on state based on an accumulated value of the weight for the consecutive image frames exceeding the predetermined threshold ("If each data point belongs to only one of two classes, the SVM separate the dataset with a k-1 dimensional with maximum distance separation between both classes as described in Figure 6," page 287, paragraph 1, where the two classes are on and off). Tian et al., Asvatha Narayanan et al. and Razalli et al. are combined as per claim 3.

Claim 11

Regarding claim 11, Tian et al. teach the vehicle of claim 9, as noted above. Tian et al. and Asvatha Narayanan et al. are not relied upon to explicitly teach the claimed warning light state recognition. However, Razalli et al. teach wherein the controller is further configured to perform the first warning light state recognition by determining, based on a deep learning-based warning light state recognition process, an on/off state of the warning light ("Some of the latest approach use to detected a movement vehicle or object is using optical flow such as implemented in [5]," page 285, paragraph 1, where optical flow is a deep learning-based recognition process). Tian et al., Asvatha Narayanan et al. and Razalli et al. are combined as per claim 3.

Claim 12

Regarding claim 12, Tian et al. teach the vehicle of claim 9, as noted above. Tian et al. and Asvatha Narayanan et al. are not relied upon to explicitly teach the claimed warning light state recognition. However, Razalli et al. teach wherein the controller is further configured to perform the second warning light state recognition by determining, based on a computer vision-based warning light state recognition process, an on/off state of the warning light ("In this work, RGB and HSV color model was used because it allows identifying the specific features for the concrete interested color and can reduce the illumination effects," page 286, paragraph 1). Tian et al., Asvatha Narayanan et al. and Razalli et al. are combined as per claim 3.

Claim 13

Regarding claim 13, Tian et al. teach the vehicle of claim 10, as noted above. Tian et al. and Asvatha Narayanan et al. are not relied upon to explicitly teach the claimed warning light state recognition. However, Razalli et al. teach wherein the controller is, for the second warning light state recognition, further configured to: determine a plurality of patches in the warning light area ("detection and extraction of vehicle in an image, image segmentation to find the location of emergency vehicle siren light, light extraction," page 286, paragraph 1); perform red-green-blue (RGB)-to-hue-saturation-value (HSV) conversion on each of the plurality of patches ("The created database is a set of images representing the vehicle siren light (red and blue) in different conditions. This are then used to extract values of the HSV color parameters in order to have an exact value of illumination effect of the siren light," page 286, paragraph 3); store, as an RGB histogram, information of pixels of which saturation (S) is greater than or equal to a predetermined value for each of the plurality of patches subjected to the RGB-to-HSV conversion ("The created histogram is the selected Region of Interest (ROI) of HSV color model in order to create six different histograms data value (Hue, Saturation and value for red and blue light)," page 286, paragraph 3); select a patch having the highest ratio of brightness in the RGB histogram among the plurality of patches subjected to the RGB-to-HSV conversion ("object segmentation and slicing algorithm is applied to locate the position of emergency siren light," page 286, paragraph 2); and determine that the warning light is in an on state based on a value of at least one of red (R), green (G), and blue (B) channels in the RGB histogram of the selected patch being greater than or equal to a predetermined threshold ("In the other hands, Red and Blue have a wide range of color distance corresponding to the emergency siren light condition," page 286, paragraph 3, where the siren light condition is an on state based on a value greater than a threshold). Tian et al., Asvatha Narayanan et al. and Razalli et al. are combined as per claim 3.

Claim 15

Regarding claim 15, Tian et al. teach the vehicle of claim 10, as noted above. Tian et al. and Asvatha Narayanan et al. are not relied upon to explicitly teach the claimed warning light state recognition. However, Razalli et al. teach wherein the controller is, for the third warning light state recognition, further configured to: determine a weight based on a state determination result from the first warning light state recognition and the second warning light state recognition in each of the consecutive image frames ("The proposed work will use the color range data obtained from previously as a reference to classify the detected vehicle as emergency vehicle. In SVM based classification, each data point in the dataset is presented as a k-dimensional vector with n-ratios," page 287, paragraph 1); and determine that the warning light is in an on state based on an accumulated value of the weight for the consecutive image frames exceeding the predetermined threshold ("If each data point belongs to only one of two classes, the SVM separate the dataset with a k-1 dimensional with maximum distance separation between both classes as described in Figure 6," page 287, paragraph 1, where the two classes are on and off). Tian et al., Asvatha Narayanan et al. and Razalli et al. are combined as per claim 3.

References Cited

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

US Patent Publication 2021/0103747 A1 to Moustafa et al. discloses an image analysis circuit to analyze captured images using an image machine learning technique to identify an image event, and a vehicle identification circuit to identify a type of vehicle based on the image event and the sound event. The vehicle identification circuit may further use V2V or V2I alerts to identify the type of vehicle and communicate a V2X or V2I alert message based on the vehicle type. In some aspects, the type of vehicle is further identified based on a light event associated with light signals detected by the vehicle recognition system.

US Patent Publication 2019/0340450 A1 to Moosaei et al. discloses detecting and classifying one or more traffic lights. The method may include converting an RGB frame to an HSY frame. The HSY frame may be filtered by at least one threshold value to obtain at least one saturation frame. At least one contour may be extracted from the at least one saturation frame. Accordingly, a first portion of the RGB frame may be cropped in order to encompass an area including the at least one contour.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATH E WELLS, whose telephone number is (703) 756-4696. The examiner can normally be reached Monday-Friday, 8:00-4:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ms. Jennifer Mehmood, can be reached on 571-272-2976.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/H.E.W/
Examiner, Art Unit 2664
Date: 4 March 2026

/JENNIFER MEHMOOD/
Supervisory Patent Examiner, Art Unit 2664
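
The Conclusion's reply-window rules reduce to simple calendar arithmetic. The sketch below computes the key dates for this action (mailed 4 March 2026), assuming naive calendar-month addition and ignoring weekend and holiday rollover under 37 CFR 1.7, so treat it as an illustration rather than docketing advice.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Naive calendar-month addition, clamping the day to the month's end."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days[month - 1]))

mailed = date(2026, 3, 4)  # mailing date of this final action
print("SSP (3 months):      ", add_months(mailed, 3))  # 2026-06-04
print("Statutory max (6 mo):", add_months(mailed, 6))  # 2026-09-04
# Extensions under 37 CFR 1.136(a) buy 1-3 extra months, never past the max.
for ext in range(1, 4):
    print(f"With {ext}-month extension:", add_months(mailed, 3 + ext))
```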

Prosecution Timeline

Sep 27, 2023: Application Filed
Sep 19, 2025: Non-Final Rejection — §101, §103
Dec 29, 2025: Response Filed
Mar 04, 2026: Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602755: DEEP LEARNING-BASED HIGH RESOLUTION IMAGE INPAINTING (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597226: METHOD AND SYSTEM FOR AUTOMATED PLANT IMAGE LABELING (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591979: IMAGE GENERATION METHOD AND DEVICE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12588876: TARGET AREA DETERMINATION METHOD AND MEDICAL IMAGING SYSTEM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586363: GENERATION OF PLURAL IMAGES HAVING M-BIT DEPTH PER PIXEL BY CLIPPING M-BIT SEGMENTS FROM MUTUALLY DIFFERENT POSITIONS IN IMAGE HAVING N-BIT DEPTH PER PIXEL (granted Mar 24, 2026; 2y 5m to grant)

Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 75%
With Interview: 93% (+18.1%)
Median Time to Grant: 3y 5m
PTA Risk: Moderate

Based on 77 resolved cases by this examiner. Grant probability is derived from the career allow rate.
