DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-8, 10-21, 24-25, and 69-78 were previously pending. Claims 1-3, 5-8, 10-15, 20-21, 24-25, and 69-78 have been amended. No claims have been cancelled. Claim 79 has been newly added. Accordingly, claims 1-8, 10-21, 24-25, and 69-79 are currently pending and have been examined in this application.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/22/2025 has been entered.
Examiner's Note
Examiner has cited particular paragraphs/columns and line numbers or figures in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. The applicant is respectfully requested, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the examiner. Applicant is reminded that the Examiner is entitled to give the broadest reasonable interpretation to the language of the claims. Furthermore, the Examiner is not limited by Applicant's definitions that are not specifically set forth in the disclosure.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 16-17, 19-21, 24-25, and 69-78 are rejected under 35 U.S.C. 103 as being unpatentable over Creusot (US 2020/0081450 A1) in view of Shalom (US 2015/0210277 A1).
Regarding claim 1, Creusot discloses a navigation system for a vehicle, the navigation system comprising:
(Creusot – Fig. 1, [0028, 0030] – An autonomous vehicle 100 can navigate about roadways without human conduction based upon sensor signals output by different types of sensor systems 102-104, effectuating appropriate motions by mechanical systems such as a vehicle propulsion system 106, braking system 108, and steering system 110.)
at least one processor comprising circuitry and having access to a memory, wherein the memory includes instructions that when executed by the circuitry cause the at least one processor to:
(Creusot – Fig. 1, [0031] – A computing system 112 includes a processor 114 and a memory 116 that includes computer-executable instructions that are executed by the processor 114.)
receive, from a first camera of the vehicle, a first image captured from an environment of the vehicle;
(Creusot – Fig. 4, [0042] – The first image 402 corresponding to the first camera captures the pair of traffic lights 410 and 412.)
receive, from a second camera of the vehicle, a second image captured from the environment of the vehicle;
(Creusot – Fig. 4, [0042] – The second image 404 corresponding to the second camera captures the pair of traffic lights 410 and 412.)
analyze the first image to generate a first detection result, wherein the first detection result includes a first identification of a first traffic light and a first state of the first traffic light;
(Creusot – Fig. 4, [0044-0045] – Image 402 is processed to identify configurations of the light emitting sources in the regions of interest for determining detected traffic signals and generating independent directives/observations 406 corresponding to the detected traffic signals.)
analyze the second image to generate a second detection result, wherein the second detection result includes a second identification of the traffic light and a second state of the traffic light;
(Creusot – Fig. 4, [0044-0045] – Image 404 is processed to identify configurations of the light emitting sources in the regions of interest for determining detected traffic signals and generating independent directives/observations 406 corresponding to the detected traffic signals.)
using the first detection result including the first identification of the first traffic light and the first state of the first traffic light and the second detection result including the second identification of the first traffic light and the second state of the first traffic light to determine a confirmed state of the first traffic light;
(Creusot - [0044-0045] – If the detected traffic signals are correctly determined, the independent directives for each camera would correspond to an alternating flashing red light 410 (STOP_AND_YIELD) and a solid red light 412 (STOP). If one of the cameras generates a signal that incorrectly identifies one of the lights, a third type of independent directive would be generated. All of the independent directives/observations 406 are merged by signal fusion 408 using probabilistic techniques based on confidence scores.)
determine a navigational action for the vehicle based on the confirmed state of the first traffic light;
(Creusot - [0045] – All of the independent directives/observations 406 are merged by signal fusion 408, merging four STOP_AND_YIELD directives with four STOP directives would result in a fused directive of STOP.)
and cause the vehicle to implement the navigational action.
(Creusot - [0045] – A fused directive of STOP is output to the control system of the autonomous vehicle for manipulating operation.)
Creusot does not appear to explicitly disclose wherein the first detection result includes traffic light information comprising at least one of: a position of the first traffic light, an orientation of the first traffic light relative to the vehicle, a distance from the first traffic light to the vehicle, or an intersection associated with the first traffic light; and analyzing the first image to generate the first detection result includes identifying a second traffic light and disregarding the identification of the second traffic light from the first detection result; and compare the first detection result and the second detection result to determine a confirmed state of the first traffic light.
Shalom, in the same field of endeavor, teaches the following limitations: wherein the first detection result includes traffic light information comprising at least one of: a position of the first traffic light, an orientation of the first traffic light relative to the vehicle, a distance from the first traffic light to the vehicle, or an intersection associated with the first traffic light;
(Shalom – Fig. 20, [0252, 0260, 0265, 0271] – orientation of the traffic lights…position of each traffic light… distance to the traffic lamp fixture)
and analyzing the first image to generate the first detection result includes identifying a second traffic light and disregarding the identification of the second traffic light from the first detection result;
(Shalom – Fig. 20, [0251-0252, 0257-0258] - system 100 may distinguish between relevant and irrelevant (or less relevant) traffic lights… system 100 may identify which of a plurality of traffic lights is regulating traffic in the lane in which vehicle 200 is traveling while disregarding (or placing less emphasis on) other traffic lights that regulate other lanes of traffic)
and compare the first detection result and the second detection result to determine a confirmed state of the first traffic light.
(Shalom – [0130-0135, 0239-0242] – System 100 may use two image capture devices (e.g., image capture devices 122 and 124) in providing navigation assistance for vehicle 200 and use a third image capture device (e.g., image capture device 126) to provide redundancy and validate the analysis of data received from the other two image capture devices. For example, in such a configuration, image capture devices 122 and 124 may provide images for stereo analysis by system 100 for navigating vehicle 200, while image capture device 126 may provide images for monocular analysis by system 100 to provide redundancy and validation of information obtained based on images captured from image capture device 122 and/or image capture device 124. That is, image capture device 126 (and a corresponding processing device) may be considered to provide a redundant sub-system for providing a check on the analysis derived from image capture devices 122 and 124.)
It would have been obvious to one of ordinary skill in the art before the effective filing date to have incorporated the teachings of Shalom into the invention of Creusot with a reasonable expectation of success for the purpose of distinguishing between relevant and irrelevant traffic lights (Shalom – [0251]) and also for providing redundancy and validating the analysis of data received from the image capture devices (Shalom – [0135]). Comparing images from different cameras for validation or redundancy is well known and generally obvious, and would ensure that the information obtained from analyzing the images (i.e., the status of the traffic light) is valid and accurate, which improves overall safety. This is merely applying a known technique to a particular application in a way that would yield predictable results.
Regarding claim 2, Creusot discloses wherein at least one of the identification of the first traffic light included in the first detection result or the identification of the first traffic light included in the second detection result is based on a comparison of a candidate object with images of traffic lights, or based on a machine learning algorithm.
(Creusot - [0044] – The images 402-404 are processed by a plurality of object detector modules via a convolution neural network 308 that identifies configurations of the light emitting sources in the regions of interest. In the exemplary images 402-404, a solid red light 412 and an alternating flashing red light 410 are detected by the convolution neural network 308, which provides corresponding signals to the object detector modules. The object detector modules generate an independent directive for each traffic light captured in each image provided to each object detector module, thereby accumulating eight observations 406 that form the basis of signal fusion 408.)
Regarding claim 3, Creusot discloses wherein execution of the instructions included in the memory further causes the at least one processor to: determine a first confidence level indicator associated with the first detection result; determine a second confidence level indicator associated with the second detection result; and determine the confirmed state of the first traffic light based on the first confidence level indicator and the second confidence level indicator.
(Creusot – [0011] – The signal fusion module will apply a confidence score to determine which information should be incorporated into the fused directive. [0045] - If the detected traffic signals are correctly determined, the independent directives for each camera would correspond to an alternating flashing red light 410 (STOP_AND_YIELD) and a solid red light 412 (STOP). If one of the cameras generates a signal that incorrectly identifies one of the lights, a third type of independent directive would be generated. All of the independent directives/observations 406 are merged by signal fusion 408 using probabilistic techniques based on confidence scores. i.e., a confidence score is applied to each independent directive corresponding to each camera and indicates the accuracy or correctness of the detected result; the confidence scores are therefore jointly used in generating the signal fusion output.)
Regarding claim 4, Creusot does not appear to explicitly disclose wherein determining the navigation action for the vehicle is further based on a time duration between capturing the first image and the second image.
Shalom, in the same field of endeavor, teaches the following limitations: wherein determining the navigation action for the vehicle is further based on a time duration between capturing the first image and the second image.
(Shalom – [0241, 0244] - a change in the status of the traffic light may be determined based on the differences between the stored patterns and two or more images taken at different times)
It would have been obvious to one of ordinary skill in the art before the effective filing date to have incorporated the teachings of Shalom into the invention of Creusot with a reasonable expectation of success for the purpose of determining based on images taken over time when the change in status of the traffic light occurs (Shalom – [0241]).
Regarding claim 5, Creusot does not appear to explicitly disclose wherein execution of the instructions included in the memory further causes the at least one processor to determine the confirmed state of the first traffic light based, at least in part, on a prior observed state of the first traffic light.
Shalom, in the same field of endeavor, teaches the following limitations: wherein execution of the instructions included in the memory further causes the at least one processor to determine the confirmed state of the first traffic light based, at least in part, on a prior observed state of the first traffic light.
(Shalom – [0241, 0244] – a change in the status of the traffic light may be determined based on the differences between the stored patterns and two or more images taken at different times)
The motivation to combine Creusot and Shalom is the same as in the rejection of claim 4.
Regarding claim 6, Creusot does not appear to explicitly disclose wherein execution of the instructions included in the memory further causes the at least one processor to: determine a location of the vehicle; and receive map information specifying a location of the first traffic light, wherein determining the navigation action for the vehicle is further based on a comparison of the location of the vehicle and the location of the first traffic light.
Shalom, in the same field of endeavor, teaches the following limitations: wherein execution of the instructions included in the memory further causes the at least one processor to: determine a location of the vehicle; and receive map information specifying a location of the first traffic light, wherein determining the navigation action for the vehicle is further based on a comparison of the location of the vehicle and the location of the first traffic light.
(Shalom – [0246, 0251-0254, 0261, 0274] – compare the GPS acquired vehicle location to map data to determine the relevance of the traffic light…capture both traffic lamp fixtures 2012 and 2014, recognize that fixture 2012 is not relevant… determine the navigational response based on the relevant fixture 2014)
The motivation to combine Creusot and Shalom is the same as in the rejection of claim 1.
Regarding claim 16, Creusot discloses wherein the first camera and the second camera have at least partially overlapping fields-of-view.
(Creusot – Fig. 4, [0042] – The first camera and second camera generate first and second images 402 and 404 that both capture the traffic lights 410 and 412. i.e., at least partially overlapping fields-of-view)
Regarding claim 17, Creusot does not appear to explicitly disclose wherein the first camera and the second camera have different fields-of-view.
Shalom, in the same field of endeavor, teaches the following limitations: wherein the first camera and the second camera have different fields-of-view.
(Shalom – [0115] – cameras having different fields of view (FOV))
It would have been obvious to one of ordinary skill in the art before the effective filing date to have incorporated the teachings of Shalom into the invention of Creusot with a reasonable expectation of success for the purpose of providing the desired field of view and focal length relative to the environment of the vehicle that is to be captured, so that the image capture devices may capture adjacent FOVs or may have partial overlap in their FOVs (Shalom – [0115, 0120]).
Regarding claim 19, Creusot does not appear to explicitly disclose wherein the first camera and the second camera have different focal lengths.
Shalom, in the same field of endeavor, teaches the following limitations: wherein the first camera and the second camera have different focal lengths.
(Shalom – [0115] – cameras having different focal lengths)
The motivation to combine Creusot and Shalom is the same as in the rejection of claim 17.
Regarding claim 20, Creusot does not appear to explicitly disclose wherein: at least one of the first image or the second image includes representations of a traffic light group that includes the first traffic light; and execution of the instructions included in the memory further causes the at least one processor to receive map information specifying a relationship between at least two of the traffic lights in the traffic light group.
Shalom, in the same field of endeavor, teaches the following limitations: wherein: at least one of the first image or the second image includes representations of a traffic light group that includes the first traffic light; and execution of the instructions included in the memory further causes the at least one processor to receive map information specifying a relationship between at least two of the traffic lights in the traffic light group.
(Shalom – Fig. 20, [0246, 0251-0254, 0261, 0274] - compare the GPS acquired vehicle location to map data to determine the relevance of the traffic light…capture both traffic lamp fixtures 2012 and 2014, recognize that fixture 2012 is not relevant)
The motivation to combine Creusot and Shalom is the same as in the rejection of claim 1.
Regarding claim 21, Creusot does not appear to explicitly disclose wherein execution of the instructions included in the memory further causes the at least one processor to determine the confirmed state of the first traffic light based on the map information specifying the relationship between the at least two traffic lights in the traffic light group.
Shalom, in the same field of endeavor, teaches the following limitations: wherein execution of the instructions included in the memory further causes the at least one processor to determine the confirmed state of the first traffic light based on the map information specifying the relationship between the at least two traffic lights in the traffic light group.
(Shalom – Fig. 20, [0246, 0251-0254, 0261, 0274] – compare the GPS acquired vehicle location to map data to determine the relevance of the traffic light…capture both traffic lamp fixtures 2012 and 2014, recognize that fixture 2012 is not relevant)
The motivation to combine Creusot and Shalom is the same as in the rejection of claim 1.
With respect to claims 24-25, all the limitations have been analyzed in view of claim 1, and it has been determined that claims 24-25 do not recite any limitations beyond those previously addressed with respect to claim 1; therefore, claims 24-25 are also rejected over the same rationale as claim 1. Claim 25 recites that the second detection result includes the traffic light information (in claims 1 and 24 the first detection result includes the traffic light information); however, the combination of Creusot and Shalom reads on the limitations of the first and second detection results in the same way.
Regarding claim 69, Creusot discloses wherein: at least one of the first image or the second image includes representations of multiple traffic lights;
(Creusot - Fig. 4, [0042-0043] – traffic lights 410 and 412)
and execution of the instructions included in the memory further causes the at least one processor to compare the first detection result generated based on the analysis of the first image and the second detection result generated based on the analysis of the second image to determine that the first traffic light identified in the first image and the first traffic light identified in the second image are the same traffic light.
(Creusot - Fig. 4, [0042-0045] – each set of images 402-404 captures the pair of traffic lights 410-412 and is processed… detects that the same two light emitting sources are captured in a sensor signal of the first camera and a sensor signal of the second camera)
Regarding claim 70, Creusot does not appear to explicitly disclose wherein the confirmed state of the first traffic light is further determined based on one or more images captured earlier than the first image and the second image.
Shalom, in the same field of endeavor, teaches the following limitations: wherein the confirmed state of the first traffic light is further determined based on one or more images captured earlier than the first image and the second image.
(Shalom – [0241, 0244] – a change in the status of the traffic light may be determined based on the differences between the stored patterns and two or more images taken at different times)
The motivation to combine Creusot and Shalom is the same as in the rejection of claim 4.
With respect to claims 71 and 75, all the limitations have been analyzed in view of claim 2, and it has been determined that claims 71 and 75 do not recite any limitations beyond those previously addressed with respect to claim 2; therefore, claims 71 and 75 are also rejected over the same rationale as claim 2.
With respect to claims 72 and 76, all the limitations have been analyzed in view of claim 6, and it has been determined that claims 72 and 76 do not recite any limitations beyond those previously addressed with respect to claim 6; therefore, claims 72 and 76 are also rejected over the same rationale as claim 6.
With respect to claims 73 and 77, all the limitations have been analyzed in view of claim 7, and it has been determined that claims 73 and 77 do not recite any limitations beyond those previously addressed with respect to claim 7; therefore, claims 73 and 77 are also rejected over the same rationale as claim 7.
With respect to claims 74 and 78, all the limitations have been analyzed in view of claim 11, and it has been determined that claims 74 and 78 do not recite any limitations beyond those previously addressed with respect to claim 11; therefore, claims 74 and 78 are also rejected over the same rationale as claim 11.
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Creusot in view of Shalom and Wendel (US 2019/0208111 A1).
Regarding claim 18, Creusot does not appear to explicitly disclose wherein the first camera and the second camera have the same focal length.
Wendel, in the same field of endeavor, teaches the following limitations: wherein the first camera and the second camera have the same focal length.
(Wendel – Fig. 7, [0164] – The first lens 714 and the second lens 724 may have the same focal length.)
It would have been obvious to one of ordinary skill in the art before the effective filing date to have incorporated the teachings of Wendel into the invention of Creusot with a reasonable expectation of success because Wendel’s image sensors may have adjustable focal lengths in order to provide them with the same or different focal lengths (Wendel – [0122, 0164]).
Allowable Subject Matter
Claims 7-8, 10-15, and 79 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 7, the prior art does not disclose or render obvious the following limitation in its entirety:
“The system of claim 1, wherein execution of the instructions included in the memory further causes the at least one processor to: analyze the second image to identify the second traffic light and include in the second detection result a third state of the second traffic light.”
Creusot discloses wherein execution of the instructions included in the memory further causes the at least one processor to: analyze the second image to identify the second traffic light and include in the second detection result a third state of the second traffic light.
(Creusot – Fig. 4, [0042, 0044-0045] – Images 402 and 404 capture the pair of traffic lights 410 and 412 and are processed to identify configurations of the light emitting sources in the regions of interest for determining detected traffic signals and generating independent directives/observations 406 corresponding to the detected traffic signals.)
Shalom teaches the following limitations: wherein the identification of the second traffic light is disregarded in response to determining a lack of relevancy of the second traffic light.
(Shalom – Fig. 20, [0251-0252, 0257-0258] - system 100 may distinguish between relevant and irrelevant (or less relevant) traffic lights… system 100 may identify which of a plurality of traffic lights is regulating traffic in the lane in which vehicle 200 is traveling while disregarding (or placing less emphasis on) other traffic lights that regulate other lanes of traffic)
Rawashdeh (US 2018/0257615 A1) teaches the following limitations: wherein the identification of the second traffic light is disregarded in response to determining a lack of an identified state of the second traffic light.
(Rawashdeh – [0037, 0047] - The color of the traffic light 32 is recognized for each frame of the captured image. Then, the verifying portion 40 confirms whether the confidence level of the color obtained through the color image processing is equal to or greater than a specified value. If the confidence level for each frame is equal to or greater than the specified value, then the verifying portion 40 proceeds to the matching process. In other words, the verifying portion 40 performs the matching process only when the current status of the traffic light 32 is obtained through the image data with high accuracy. In this embodiment, the specified value is set to 90%, and therefore the verifying portion 40 requires a confidence level of 90% or more to pass the confidence level test.)
However, it would not have been obvious to one of ordinary skill in the art before the effective filing date to have modified Creusot’s invention such that the identified second traffic light identification in the first detection result is disregarded as taught by Shalom, and to further include disregarding the identified second traffic light identification in response to determining a lack of an identified state of the second traffic light as taught by Rawashdeh, such that the resulting invention disregards the second traffic light only in the first image but not the second image. These modifications would require impermissible hindsight because Shalom only disregards the second traffic light identification due to a lack of relevancy (i.e., the vehicle is in a different lane than the second traffic light), not due to a lack of a determined identified state, and therefore in Shalom the second traffic light would be disregarded in each of the images. Therefore, achieving this resulting invention would require further modifying Shalom’s logic for disregarding the identification of the second traffic light. The prior art fails to disclose or suggest the entirety of the limitation of “analyzing the first image to generate the first detection result includes identifying a second traffic light and disregarding the identification of the second traffic light from the first detection result” as in claim 1 in combination with the feature “wherein execution of the instructions included in the memory further causes the at least one processor to: analyze the second image to identify the second traffic light and include in the second detection result a third state of the second traffic light” as in claim 7.
Regarding claim 79, the prior art does not disclose or render obvious the following limitation in its entirety:
“The method of claim 24, wherein the identification of the second traffic light is disregarded in response to determining a lack of an identified state of the second traffic light.”
Shalom teaches the following limitations: wherein the identification of the second traffic light is disregarded in response to determining a lack of relevancy of the second traffic light.
(Shalom – Fig. 20, [0251-0252, 0257-0258] - system 100 may distinguish between relevant and irrelevant (or less relevant) traffic lights… system 100 may identify which of a plurality of traffic lights is regulating traffic in the lane in which vehicle 200 is traveling while disregarding (or placing less emphasis on) other traffic lights that regulate other lanes of traffic)
Rawashdeh teaches the following limitations: wherein the identification of the second traffic light is disregarded in response to determining a lack of an identified state of the second traffic light.
(Rawashdeh – [0037, 0047] - The color of the traffic light 32 is recognized for each frame of the captured image. Then, the verifying portion 40 confirms whether the confidence level of the color obtained through the color image processing is equal to or greater than a specified value. If the confidence level for each frame is equal to or greater than the specified value, then the verifying portion 40 proceeds to the matching process. In other words, the verifying portion 40 performs the matching process only when the current status of the traffic light 32 is obtained through the image data with high accuracy. In this embodiment, the specified value is set to 90%, and therefore the verifying portion 40 requires a confidence level of 90% or more to pass the confidence level test.)
However, it would not have been obvious to one of ordinary skill in the art before the effective filing date to have modified Creusot’s invention such that the identified second traffic light identification in the first detection result is disregarded as taught by Shalom, and to further include disregarding the identified second traffic light identification in response to determining a lack of an identified state of the second traffic light as taught by Rawashdeh. These modifications would require impermissible hindsight because Shalom only disregards the second traffic light identification due to a lack of relevancy (i.e., the vehicle is in a different lane than the second traffic light), not due to a lack of a determined identified state, and therefore this would require further modifying Shalom’s logic for disregarding the identification of the second traffic light. The prior art fails to disclose or suggest the entirety of the limitation of “analyzing the first image to generate the first detection result includes identifying a second traffic light and disregarding the identification of the second traffic light from the first detection result” as in claim 24 in combination with the feature “wherein the identification of the second traffic light is disregarded in response to determining a lack of an identified state of the second traffic light” as in claim 79.
Response to Arguments
Applicant's arguments, see pages 12-14, filed 12/22/2025, with respect to the previous 35 U.S.C. 103 rejections have been fully considered but they are not persuasive. Applicant argues that the asserted combination of Shalom and Creusot fails to teach or suggest Applicant's amended claim 1. Applicant’s arguments are directed towards Creusot failing to disclose or suggest “analyzing the first image to generate the first detection result includes identifying a second traffic light and disregarding the identification of the second traffic light from the first detection result” because all of the identified traffic lights are included in Creusot’s merged directives. Shalom, Korjus, and Wendel do not cure these deficiencies of Creusot. The examiner respectfully disagrees. Shalom teaches these limitations in at least paragraphs [0251-0252, 0257-0258], which describe that the system may identify which of a plurality of traffic lights is regulating traffic in the lane in which the vehicle is traveling while disregarding (or placing less emphasis on) other traffic lights that regulate other lanes of traffic.
Conclusion
The prior art made of record, and not relied upon, that is considered pertinent to applicant’s disclosure or directed to the state of the art is listed on the enclosed PTO-892. The following is a brief description of relevant prior art that was cited but not applied:
Leach (US 2019/0291742 A1) is directed to an apparatus including an interface and a processor. The interface may be configured to receive area data and sensor data from a plurality of vehicle sensors. The processor may be configured to extract road characteristics for a location from the area data, predict expected sensor readings at the location for the plurality of sensors based on the road characteristics, calculate dynamic limits for the sensor data in response to the expected sensor readings and determine a plausibility of the sensor data received from the interface when the vehicle reaches the location. The sensor data may be plausible if the sensor data is within the dynamic limits. A confidence level of the sensor data may be adjusted in response to the plausibility of the sensor data.
Dean (US 2020/0356794 A1) is directed to training and using a phrase recognition model to identify phrases in images. As an example, a selected phrase list that includes a plurality of phrases may be received. Each phrase of the plurality of phrases includes text. An initial plurality of images may be received. A training image set may be selected from the initial plurality of images by identifying the phrase-containing images that include one or more phrases from the selected phrase list. Each given phrase-containing image of the training image set may be labeled with information identifying the one or more phrases from the selected phrase list included in the given phrase-containing image. The model may be trained based on the training image set such that the model is configured to, in response to receiving an input image, output data indicating whether a phrase of the plurality of phrases is included in the input image.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAITLIN MCCLEARY whose telephone number is (703)756-1674. The examiner can normally be reached Monday - Friday 10:00 am - 7:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Navid Z Mehdizadeh can be reached at (571) 272-7691. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C.R.M./Examiner, Art Unit 3669
/NAVID Z. MEHDIZADEH/Supervisory Patent Examiner, Art Unit 3669