Prosecution Insights
Last updated: April 19, 2026
Application No. 18/449,290

SIGNATURE NETWORK FOR TRAFFIC SIGN CLASSIFICATION

Final Rejection (§103)
Filed: Aug 14, 2023
Examiner: DOUGLAS, SHANE EMANUEL
Art Unit: 3665
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Mobileye Vision Technologies Ltd.
OA Round: 2 (Final)
Grant Probability: 17% (At Risk)
OA Rounds: 3-4
To Grant: 2y 4m
With Interview: 39%

Examiner Intelligence

Grants only 17% of cases.
Career Allow Rate: 17% (2 granted / 12 resolved; -35.3% vs TC avg)
Interview Lift: strong, +22.2% across resolved cases with interview
Typical timeline: 2y 4m avg prosecution; 44 currently pending
Career history: 56 total applications across all art units
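The headline figures above are internally consistent. A quick check, under the assumption (not stated by the dashboard) that "Career Allow Rate" is simply grants over resolved cases and that the "vs TC avg" delta is the allow rate minus the Tech Center average:

```python
# Hypothetical reconstruction of the dashboard arithmetic:
# assumes Career Allow Rate = granted / resolved, and the "vs TC avg"
# delta is the examiner's allow rate minus the Tech Center average.
granted, resolved = 2, 12
allow_rate = 100 * granted / resolved
print(round(allow_rate))  # 17 -- matches the dashboard's 17%

delta_vs_tc = -35.3
implied_tc_avg = round(allow_rate - delta_vs_tc, 1)
print(implied_tc_avg)  # implied Tech Center average allow rate, ~52%
```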

Statute-Specific Performance

§101: 7.8% (-32.2% vs TC avg)
§103: 59.4% (+19.4% vs TC avg)
§102: 30.3% (-9.7% vs TC avg)
§112: 2.5% (-37.5% vs TC avg)

Tech Center averages are estimates • Based on career data from 12 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This action is in response to the amendments and remarks filed on 12/10/2025. Claims 1-16 and 18-35 are considered in this Office action. Claims 1, 9, 13, 18, 26, 27, 31, 32, and 35 have been amended. Claims 1-16 and 18-35 are pending examination. The objection to the abstract is withdrawn. The § 101 rejection is withdrawn in view of the amendment of claim 1, which now discloses the execution of navigational maneuvers by the host vehicle as a result of the data exchanges. Applicant's amendment necessitated new grounds of rejection; therefore, claims 1-16 and 18-35 are rejected.

Response to Arguments

Applicant presents the following arguments regarding the previous Office action:

A. Platonov, R. Ach, and Van der Wal do not disclose the server's aggregation of feature vectors from multiple vehicles, nor the redistribution of the updated database back to the host vehicle in a manner that allows the host vehicle to navigate based on the updated database.

B. Ewert does not disclose the aggregation or averaging of multiple feature vectors received from multiple vehicles to form a representative feature vector, as recited in amended claim 9.

C. Stenneth does not teach a database that correlates a plurality of feature vectors with a plurality of traffic sign types, as recited in amended claim 13.

Arguments A-C with respect to the claims have been fully considered but are moot in light of the new grounds of rejection set forth below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-4, 6-8, 18, 20-21, and 23-26 are rejected under 35 U.S.C. 103 as being unpatentable over Platonov (US20120114178A1) in view of Aviel (CA2976344A1).

Regarding claim 1, Platonov discloses a navigation system for a host vehicle (0068, the process may be a process performed by the processor of a navigation device installed onboard a vehicle), the system comprising: at least one processor (0008, a vision system comprises a camera and a processor) comprising circuitry and a memory (0008, the processor has an interface to access annotated map data), wherein the memory includes instructions that when executed by the circuitry cause the at least one processor to: receive at least one image from a camera on a host vehicle (0008, the processor is coupled to the camera and configured to determine at least one feature descriptor for the at least one image); analyze the at least one image to identify at least one object represented in the image (0008, a vision system comprises a camera and a processor.
The camera captures at least one image, and the processor is coupled to the camera and configured to determine at least one feature descriptor for the at least one image); generate a feature vector representative of the at least one object (0058, geo-referenced feature descriptors, which may be combined to form geo-referenced feature vectors, are utilized for object recognition in an automated vision system) … (0091, a geo-referenced feature vector representing a first object, such as a church tower); compare the generated feature vector to a plurality of feature vectors stored in a database (0013, various matching procedures may be utilized, such as matching based on an overlap of feature vectors, Euclidean distances between feature vectors, or similar); and in response to a determination that the generated feature vector does not match an entry in the database, send the generated feature vector to a server (0088, a message including the image or feature descriptor(s) determined for the image may selectively be generated and transmitted to a central facility of the service provider if no matching entry has been found in the annotated map data), wherein the server is configured to generate an updated feature vector database in response to the generated feature vector sent by the host vehicle navigation system (0118, a process for adding new information to the annotated map data may be implemented in the system 60. In one embodiment, a vision system 64-66 may transmit a message including feature descriptor(s) determined for an image. The message may include position information. The control device 72 may store the received feature descriptor(s) and the associated position information in the annotated map data 74 if the annotated map data 74 does not yet include an entry matching the determined feature descriptor(s) that are received from one of the vision systems), in combination with feature vectors received from a plurality of additional vehicles (0112, FIG. 10 illustrates a system 60 that includes a plurality of vehicles 61-63 and a central facility 70 of a service provider).

Additionally, Aviel, who is in the same field of endeavor of sparse map generation for autonomous vehicle navigation, discloses transmitting the updated feature vector database to the host vehicle (0465, the navigation information transmitted from vehicle 1205 to server 1230 may be used by server 1230 to generate and/or update an autonomous vehicle road navigation model, which may be transmitted back from server 1230 to vehicle 1205 for providing autonomous navigation guidance for vehicle 1205), and causing at least one navigational maneuver based on the updated feature vector database received from the server (0468, the at least one processor 1715 may cause at least one navigational maneuver (e.g., steering such as making a turn, braking, accelerating, passing another vehicle, etc.) by vehicle 1205 based on the received autonomous vehicle road navigation model or the updated portion of the model).

It would have been prima facie obvious to one skilled in the art to combine Platonov with Aviel to have the autonomous vehicle move as a result of the updated feature vectors. This would allow users to have an up-to-date and safe navigational experience in the host vehicle. Justification for combining Platonov with Aviel comes not only from the state of the art but from Platonov itself (0122, the present invention has been illustrated and described with respect to several preferred embodiments thereof; various changes, omissions and additions to the form and detail thereof may be made).

Regarding claim 3, Platonov and Aviel disclose the system of claim 1, as discussed supra. Additionally, Platonov discloses the feature vector correlates to one or more features of the at least one object (0068, if the processor 2 identifies an object in the captured image using the annotated map data, the process 8 may utilize information on the object. Such information may include an object classification (e.g., POI, traffic sign, etc.)) … (0074, the message may include additional information, such as a textual description of the object represented by the feature descriptors).

Regarding claim 4, Platonov and Aviel disclose the system of claim 1, as discussed supra. Additionally, Platonov discloses the at least one object is a traffic sign (0068, if the processor 2 identifies an object in the captured image using the annotated map data, the process 8 may utilize information on the object. Such information may include an object classification (e.g., POI, traffic sign, etc.)).

Regarding claim 6, Platonov and Aviel disclose the system of claim 1, as discussed supra. Additionally, Platonov discloses the database contains a plurality of feature vectors and correlated traffic sign types (0005, a method of performing traffic sign recognition using a neural network trained for certain types of traffic signs) … (0068, if the processor 2 identifies an object in the captured image using the annotated map data, the process 8 may utilize information on the object. Such information may include an object classification (e.g., POI, traffic sign, etc.)).

Regarding claim 7, Platonov and Aviel disclose the system of claim 1, as discussed supra. Additionally, Platonov discloses the generated feature vector is determined to not match an entry in the database where the generated feature vector differs from each of the plurality of feature vectors stored in the database by more than a predetermined amount (0013, various matching procedures may be utilized, such as matching based on an overlap of feature vectors, Euclidean distances between feature vectors, or similar).

Regarding claim 8, Platonov and Aviel disclose the system of claim 1, as discussed supra. Additionally, Platonov discloses the generated feature vector is determined to match at least one of the plurality of feature vectors stored in the database, where a Euclidean distance between the generated feature vector and at least one of the plurality of feature vectors stored in the database is below a predetermined threshold (0013, the processor may be configured to perform the matching procedure between a feature vector, or several feature vectors, determined for the image and geo-referenced feature vectors included in the annotated map data. Various matching procedures may be utilized, such as matching based on an overlap of feature vectors, Euclidean distances between feature vectors, or similar).

Regarding claim 18, Platonov discloses a method applied to a navigation system for a host vehicle, the method comprising: receiving at least one image from a camera on a host vehicle (0008, the processor is coupled to the camera and configured to determine at least one feature descriptor for the at least one image); analyzing the at least one image to identify at least one object represented in the image (0008, a vision system comprises a camera and a processor.
The camera captures at least one image, and the processor is coupled to the camera and configured to determine at least one feature descriptor for the at least one image); generating a feature vector representative of the at least one object (0058, geo-referenced feature descriptors, which may be combined to form geo-referenced feature vectors, are utilized for object recognition in an automated vision system) … (0091, a geo-referenced feature vector representing a first object, such as a church tower); comparing the generated feature vector to a plurality of feature vectors stored in a database (0013, various matching procedures may be utilized, such as matching based on an overlap of feature vectors, Euclidean distances between feature vectors, or similar); and in response to a determination that the generated feature vector does not match an entry in the database, sending the generated feature vector to a server (0088, a message including the image or feature descriptor(s) determined for the image may selectively be generated and transmitted to a central facility of the service provider if no matching entry has been found in the annotated map data), wherein the server is configured to generate an updated feature vector database in response to the generated feature vector sent by the host vehicle navigation system (0118, a process for adding new information to the annotated map data may be implemented in the system 60. In one embodiment, a vision system 64-66 may transmit a message including feature descriptor(s) determined for an image. The message may include position information. The control device 72 may store the received feature descriptor(s) and the associated position information in the annotated map data 74 if the annotated map data 74 does not yet include an entry matching the determined feature descriptor(s) that are received from one of the vision systems), in combination with feature vectors received from a plurality of additional vehicles (0112, FIG. 10 illustrates a system 60 that includes a plurality of vehicles 61-63 and a central facility 70 of a service provider).

Additionally, Aviel, who is in the same field of endeavor of sparse map generation for autonomous vehicle navigation, discloses transmitting the updated feature vector database to the host vehicle (0465, the navigation information transmitted from vehicle 1205 to server 1230 may be used by server 1230 to generate and/or update an autonomous vehicle road navigation model, which may be transmitted back from server 1230 to vehicle 1205 for providing autonomous navigation guidance for vehicle 1205), and causing at least one navigational maneuver based on the updated feature vector database received from the server (0468, the at least one processor 1715 may cause at least one navigational maneuver (e.g., steering such as making a turn, braking, accelerating, passing another vehicle, etc.) by vehicle 1205 based on the received autonomous vehicle road navigation model or the updated portion of the model).

Regarding claim 20, Platonov and Aviel disclose the method of claim 18, as discussed supra. Additionally, Platonov discloses the feature vector correlates to one or more features of the at least one object (0068, if the processor 2 identifies an object in the captured image using the annotated map data, the process 8 may utilize information on the object. Such information may include an object classification (e.g., POI, traffic sign, etc.)) … (0074, the message may include additional information, such as a textual description of the object represented by the feature descriptors).

Regarding claim 21, Platonov and Aviel disclose the method of claim 18, as discussed supra. Additionally, Platonov discloses the at least one object is a traffic sign (0068, if the processor 2 identifies an object in the captured image using the annotated map data, the process 8 may utilize information on the object. Such information may include an object classification (e.g., POI, traffic sign, etc.)).

Regarding claim 23, Platonov and Aviel disclose the method of claim 18, as discussed supra. Additionally, Platonov discloses the database contains a plurality of feature vectors and correlated traffic sign types (0005, a method of performing traffic sign recognition using a neural network trained for certain types of traffic signs) … (0068, if the processor 2 identifies an object in the captured image using the annotated map data, the process 8 may utilize information on the object. Such information may include an object classification (e.g., POI, traffic sign, etc.)).

Regarding claim 24, Platonov and Aviel disclose the method of claim 18, as discussed supra. Additionally, Platonov discloses the generated feature vector is determined to not match an entry in the database where the generated feature vector differs from each of the plurality of feature vectors stored in the database by more than a predetermined amount (0013, various matching procedures may be utilized, such as matching based on an overlap of feature vectors, Euclidean distances between feature vectors, or similar).

Regarding claim 25, Platonov and Aviel disclose the method of claim 18, as discussed supra.
Additionally, Platonov discloses the generated feature vector is determined to match at least one of the plurality of feature vectors stored in the database, where a Euclidean distance between the generated feature vector and at least one of the plurality of feature vectors stored in the database is below a predetermined threshold (0013, the processor may be configured to perform the matching procedure between a feature vector, or several feature vectors, determined for the image and geo-referenced feature vectors included in the annotated map data. Various matching procedures may be utilized, such as matching based on an overlap of feature vectors, Euclidean distances between feature vectors, or similar).

Regarding claim 26, Platonov discloses a non-transitory computer readable medium containing instructions that, when executed by a processor in a navigation system for a host vehicle, cause the processor to perform operations comprising: receiving at least one image from a camera on a host vehicle (0008, a vision system comprises a camera and a processor) … (0008, the processor is coupled to the camera and configured to determine at least one feature descriptor for the at least one image); analyzing the at least one image to identify at least one object represented in the image (0008, a vision system comprises a camera and a processor. The camera captures at least one image, and the processor is coupled to the camera and configured to determine at least one feature descriptor for the at least one image); generating a feature vector representative of the at least one object (0058, geo-referenced feature descriptors, which may be combined to form geo-referenced feature vectors, are utilized for object recognition in an automated vision system) … (0091, a geo-referenced feature vector representing a first object, such as a church tower); comparing the generated feature vector to a plurality of feature vectors stored in a database (0013, various matching procedures may be utilized, such as matching based on an overlap of feature vectors, Euclidean distances between feature vectors, or similar); and in response to a determination that the generated feature vector does not match an entry in the database, sending the generated feature vector to a server (0088, a message including the image or feature descriptor(s) determined for the image may selectively be generated and transmitted to a central facility of the service provider if no matching entry has been found in the annotated map data), wherein the server is configured to generate an updated feature vector database in response to the generated feature vector sent by the host vehicle navigation system (0118, a process for adding new information to the annotated map data may be implemented in the system 60. In one embodiment, a vision system 64-66 may transmit a message including feature descriptor(s) determined for an image. The message may include position information. The control device 72 may store the received feature descriptor(s) and the associated position information in the annotated map data 74 if the annotated map data 74 does not yet include an entry matching the determined feature descriptor(s) that are received from one of the vision systems), in combination with feature vectors received from a plurality of additional vehicles (0112, FIG. 10 illustrates a system 60 that includes a plurality of vehicles 61-63 and a central facility 70 of a service provider).

Additionally, Aviel, who is in the same field of endeavor of sparse map generation for autonomous vehicle navigation, discloses transmitting the updated feature vector database to the host vehicle (0465, the navigation information transmitted from vehicle 1205 to server 1230 may be used by server 1230 to generate and/or update an autonomous vehicle road navigation model, which may be transmitted back from server 1230 to vehicle 1205 for providing autonomous navigation guidance for vehicle 1205), and causing at least one navigational maneuver based on the updated feature vector database received from the server (0468, the at least one processor 1715 may cause at least one navigational maneuver (e.g., steering such as making a turn, braking, accelerating, passing another vehicle, etc.) by vehicle 1205 based on the received autonomous vehicle road navigation model or the updated portion of the model).

Claims 2 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Platonov et al. (US20120114178A1) in view of Aviel (CA2976344A1), further in view of Van der Wal et al. (FPGA Acceleration for Feature Based Processing Applications).

Regarding claim 2, Platonov and Aviel disclose the system of claim 1, as discussed supra. Additionally, Van der Wal, who is in the same field of endeavor of feature based processing applications, discloses the feature vector is a 128-byte value associated with the image representation of the at least one object (2.2.
Feature description algorithms, Paragraph 3, with a normalized angle where 0 is the dominant orientation by subtracting the dominance angle, a 128 bin histogram is generated with respect to 16 cells and 8 gradient orientations. The floating point 128 bin histogram is then clipped and normalized/quantized to a 128-byte unsigned vector).

It would have been prima facie obvious to one skilled in the art to combine Platonov and Aviel with Van der Wal to obtain a more compact and efficient communication system. The 128-byte representation gives lower bandwidth and faster updates to peer-to-peer vehicle databases, thus improving data transmission. Justification for combining Platonov and Aviel with Van der Wal's disclosures comes not only from the state of the art but from Platonov itself (0122, the present invention has been illustrated and described with respect to several preferred embodiments thereof; various changes, omissions and additions to the form and detail thereof may be made).

Regarding claim 19, Platonov and Aviel disclose the method of claim 18, as discussed supra. Additionally, Van der Wal discloses the feature vector is a 128-byte value associated with the image representation of the at least one object (2.2. Feature description algorithms, Paragraph 3, with a normalized angle where 0 is the dominant orientation by subtracting the dominance angle, a 128 bin histogram is generated with respect to 16 cells and 8 gradient orientations. The floating point 128 bin histogram is then clipped and normalized/quantized to a 128-byte unsigned vector).

Claims 5 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Platonov et al. (US20120114178A1) in view of Aviel (CA2976344A1), further in view of R. Ach et al. (Classification of Traffic Signs in Real-Time on a Multi-Core Processor, Intelligent Vehicles Symposium).

Regarding claim 5, Platonov and Aviel disclose the system of claim 1, as discussed supra. Additionally, R. Ach, who is in the same field of endeavor of the classification of traffic signs, discloses the feature vector is generated based on an output of a trained neural network (III. Paragraph 2, a pure shape-based or contour-based representation of the traffic sign candidates would allow more compact feature vectors) … (III. Paragraph 9, the final reduced and normalized ROI of every traffic sign candidate is directly used as the input vector of a neural network classifier, which is presented in the following Section).

It would have been prima facie obvious to one skilled in the art to combine Platonov and Aviel with R. Ach and utilize neural networks. This would serve to improve feature extraction and clustering of features, as this is what neural networks do well. Justification for combining Platonov and Aviel with R. Ach's disclosures comes not only from the state of the art but from Platonov itself (0005, object recognition in vehicular applications has so far mainly focused on recognizing lane boundaries or traffic signs. Various methods, including neural network or discriminators, may be employed to this end).

Regarding claim 22, Platonov and Aviel disclose the method of claim 18, as discussed supra. Additionally, R. Ach discloses the feature vector is generated based on an output of a trained neural network (III. Paragraph 2, a pure shape-based or contour-based representation of the traffic sign candidates would allow more compact feature vectors) … (III. Paragraph 9, the final reduced and normalized ROI of every traffic sign candidate is directly used as the input vector of a neural network classifier, which is presented in the following Section).

Claims 9-13, 15-16, 27-32, and 34-35 are rejected under 35 U.S.C. 103 as being unpatentable over Ewert (US20190114493A1) in view of Stenneth et al. (US20160104049A1), further in view of Platonov (US20120114178A1), further in view of Mensink (US20140029839A1), further in view of Aviel (CA2976344A1).
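The matching procedure mapped to claims 7-8 and 24-25 above (Platonov, 0013) can be sketched in a few lines. This is a hypothetical illustration, not code from any cited reference: a generated feature vector "matches" a database entry only when the Euclidean distance to some stored vector falls below a predetermined threshold; otherwise, per claim 1, the vector would be sent to the server.

```python
import math

def euclidean(a, b):
    # Euclidean distance between two equal-length feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def find_match(generated, database, threshold):
    """Return the closest stored vector if it is within the threshold,
    else None (i.e., the generated vector matches no database entry)."""
    best = min(database, key=lambda entry: euclidean(generated, entry))
    return best if euclidean(generated, best) < threshold else None

# Toy 2-D database (real descriptors would be e.g. 128 bytes wide):
db = [[0.0, 0.0], [1.0, 1.0]]
print(find_match([0.1, 0.1], db, threshold=0.5))  # close to the first entry
print(find_match([5.0, 5.0], db, threshold=0.5))  # no entry in range -> None
```

The no-match branch (returning None) is the condition that, in the claimed system, would trigger transmission of the vector to the server.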
Regarding claim 9, Ewert discloses a server-based system for updating an object classification database used in vehicle navigation (0004, by plausibility checking of traffic signs with the aid of a server, an autonomously driving vehicle may independently recognize traffic signs), the system comprising: at least one processor comprising circuitry (0025, for this purpose, the device may include at least one processing unit for processing signals or data) and a memory (0009, a server may be a processing unit and/or memory unit), wherein the memory includes instructions that when executed by the circuitry cause the at least one processor to: receive drive information from a plurality of vehicles (0018, a piece of information concerning a recognized traffic sign, as a sign recognition information signal, may be provided to another vehicle in the step of providing, using the information signal and the position signal. The obtained information may be shared with other vehicles in this way); associate the representative feature vector with object type information; update the feature vector database with the object type information (0044, as soon as another vehicle has recognized the traffic sign at the same position, a comparison with server 110 takes place, the confidence level for this traffic sign being raised when there is a positive comparison. When a traffic sign stored on server 110 has a high confidence level, the traffic sign is used in the vehicle as the truth if the vehicle has not correctly recognized the sign); and distribute the updated feature vector database to at least one target vehicle (0034, accordingly, server 110 immediately transmits a confirmation to vehicle 100 that traffic sign 106 has been correctly recognized).

However, Ewert does not explicitly disclose: a plurality of feature vectors determined by respective ones of the plurality of vehicles, using their respective neural network models, not to match entries in their respective feature vector databases; generating a representative feature vector by aggregating or averaging the plurality of feature vectors received from the plurality of vehicles, the representative feature vector representing a common unrecognized object; the associated representative feature vector; or wherein the at least one target vehicle is configured to execute at least one navigational maneuver based on the updated feature vector database.

Nevertheless, Stenneth, who is in the same field of endeavor of sign determination, discloses the associated representative feature vector (0027, the data describing one or more road signs may be analyzed using a computer vision technique (e.g., edge detection, feature extraction, feature vector classification, or others) taking into consideration the predicted placement). It would have been prima facie obvious to one skilled in the art to combine Ewert's and Stenneth's disclosures to have the system use feature vectors to determine and cluster road signage. Justification for combining Ewert's and Stenneth's disclosures comes not only from the state of the art but from Stenneth itself (0099, the illustrations are not intended to serve as a complete description of all of the elements and features of apparatus and systems that utilize the structures or methods described herein. Many other embodiments may be apparent to those of skill in the art upon reviewing the disclosure).

Additionally, Mensink, who is in the same field of endeavor of class mean classifiers, discloses a plurality of feature vectors determined by respective ones of the plurality of vehicles using their respective neural network models not to match entries in their respective feature vector databases (0092, if it does not, which indicates that the classifier is not able to identify any class with sufficient certainty, none of the class labels may be assigned to the image and the image may be given a label corresponding to "unknown class.") … (0005, one method which has been adapted to large scale classification is referred to as k-nearest neighbor (k-NN) classification); generating a representative feature vector by aggregating or averaging the plurality of feature vectors received from the plurality of vehicles (0036, each centroid can be the mean of the feature vectors of the images assigned to that cluster) … (0030, the class representation 36 may be a function, such as the average (e.g., mean), of the set of D dimensional vectors 38 of the images 16 currently in the database 18 that are labeled with the corresponding class label (or at least a representative sample thereof)), the representative feature vector representing a common unrecognized object (0092, the image may be given a label corresponding to "unknown class.").

Additionally, Aviel discloses the at least one target vehicle is configured to execute at least one navigational maneuver based on the updated feature vector database (0468, the at least one processor 1715 may cause at least one navigational maneuver (e.g., steering such as making a turn, braking, accelerating, passing another vehicle, etc.) by vehicle 1205 based on the received autonomous vehicle road navigation model or the updated portion of the model).
Finally, Platonov discloses, a server-based system for updating an object classification database used in vehicle navigation (0074, if the feature descriptors determined for the image do not have a match in the local database 4. Thereby, a database maintained by a central facility may be updated). Regarding claim 10, Ewert, Stenneth, Mensink, Aviel, and Platonov disclose the system of claim 9, as discussed supra. Additionally, Ewert discloses the object type is a traffic sign type (0044, as soon as another vehicle has recognized the traffic sign at the same position, a comparison with server 110 takes place). Regarding claim 11, Ewert, Stenneth, Mensink, Aviel, and Platonov disclose the system of claim 9 as discussed supra. Additionally, Ewert discloses, the traffic sign type is associated with an indication of at least one of a speed limit, a stop, a yield, a merge, a lane shift, or a railroad crossing (0004, the autonomous vehicle may be controlled via further traffic signs such as yield signs or stop signs). Regarding claim 12, Ewert, Stenneth, Mensink, Aviel, and Platonov disclose the system of claim 9 as discussed supra. Additionally, Stenneth discloses, the representative feature vector is within a predetermined threshold in Euclidean space of the plurality of feature vectors (0072, the degree of error may be a predetermined angle from the point of collected, a Euclidean distance, or a pixel distance between the predicted placement and the detected placement). Regarding claim 13 Ewert, Stenneth, Mensink, Aviel, and Platonov disclose the system of claim 9 as discussed supra. 
Additionally, Ewert discloses, a navigation system for a host vehicle (0004, by plausibility checking of traffic signs with the aid of a server, an autonomously driving vehicle may independently recognize traffic signs), the system comprising: at least one processor comprising circuitry (0025, for this purpose, the device may include at least one processing unit for processing signals or data), and a memory (0009, a server may be a processing unit and/or memory unit), wherein the memory includes instructions that when executed by the circuitry cause the at least one processor to: receive at least one image from a camera; analyze the at least one image to identify an object represented in the at least one image (0034, optical sensor 104 is a camera or a surroundings sensor. Vehicle device 102 is designed for reading in a recognition signal 112 via an interface to optical sensor 104 of vehicle 100. Recognition signal 112 represents recognized traffic sign 106 in road traffic. Vehicle device 102 is designed for determining an information signal 114 and a position signal 116, using recognition signal 112, and providing them at an interface to server 110); and cause at least one navigational action to be taken by the host vehicle based on the identified traffic sign type (0034, traffic sign 106 is subsequently displayed to the vehicle driver, and only then does the autonomous vehicle respond to recognized traffic sign 106, confirmation signal 118 being used for controlling a driving maneuver via vehicle device 102). However, Ewert does not explicitly disclose, generating a feature vector representative of the object; and identify a traffic sign type from a traffic sign database based on the generated feature vector, wherein the traffic sign database correlates a plurality of feature vectors with a plurality of traffic sign types. 
Nevertheless, Stenneth discloses generating a feature vector representative of the object (0027, the data describing one or more road signs may be analyzed using a computer vision technique (e.g., edge detection, feature extraction, feature vector classification, or others) taking into consideration the predicted placement), and identify a traffic sign type from a traffic sign database based on the generated feature vector (0004, the device generates a model that associates values for the detected placement of the road signs with values for the at least one characteristic. The model may be later accessed to interpret subsequent sets of data describing one or more road signs). Additionally, Platonov discloses, the traffic sign database correlates a plurality of feature vectors with a plurality of traffic sign types (0005, a method of performing traffic sign recognition using a neural network trained for certain types of traffic signs). Regarding claim 16, Ewert, Stenneth, Mensink, Aviel, and Platonov disclose the system of claim 13 as discussed supra. Additionally, Stenneth discloses, the traffic sign type indicates a speed limit (0033, the signs are illustrated as speed limit signs but other types of signs are possible). However, Stenneth does not explicitly disclose that at least one navigation action includes adjusting a speed of the host vehicle. Nevertheless, Ewert discloses, at least one navigation action includes adjusting a speed of the host vehicle (0004, it is thus also possible to control the autonomously driving vehicle with regard to a maximum speed with the aid of such a system). Regarding claim 17, Ewert and Stenneth disclose the system of claim 13 as discussed supra. 
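The claim-13 arrangement discussed above (a database correlating feature vectors with traffic sign types, nearest-match identification, and a navigational action such as the claim-16 speed adjustment) might look like the following sketch. The database entries, feature vectors, and action names are hypothetical, invented purely for illustration:

```python
import math

# Hypothetical traffic-sign database correlating stored feature vectors
# with traffic sign types (cf. claim 13). Real descriptors would be much
# longer (e.g., 128 bytes per Van der Wal); 3-D vectors keep the sketch small.
SIGN_DB = [
    ([0.9, 0.1, 0.0], "speed_limit_50"),
    ([0.0, 0.8, 0.2], "stop"),
    ([0.1, 0.1, 0.9], "yield"),
]

def identify_sign_type(feature):
    """Return the sign type whose stored feature vector is nearest in
    Euclidean space to the generated feature vector."""
    return min(SIGN_DB, key=lambda entry: math.dist(feature, entry[0]))[1]

def navigational_action(sign_type):
    """Map the identified sign type to a navigational action; a speed-limit
    sign adjusts the host vehicle's speed (cf. claim 16)."""
    if sign_type.startswith("speed_limit_"):
        return "set_speed:" + sign_type.split("_")[-1]
    return {"stop": "brake_to_stop", "yield": "slow_and_yield"}.get(sign_type, "no_op")

sign = identify_sign_type([0.85, 0.15, 0.05])
print(sign, navigational_action(sign))  # speed_limit_50 set_speed:50
```

The lookup step stands in for whatever trained model actually produces the descriptor; the point is only the claimed correlation of feature vectors with sign types and the downstream action.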
Additionally, Stenneth discloses, the traffic sign database correlates feature vectors with traffic sign type (0027, the data describing one or more road signs may be analyzed using a computer vision technique (e.g., edge detection, feature extraction, feature vector classification, or others) taking into consideration the predicted placement). Regarding claim 27, Ewert discloses, a method applied to a server-based system for updating an object classification database used in vehicle navigation (0004, by plausibility checking of traffic signs with the aid of a server, an autonomously driving vehicle may independently recognize traffic signs), the method comprising: receiving drive information from a plurality of vehicles (0018, a piece of information concerning a recognized traffic sign, as a sign recognition information signal, may be provided to another vehicle in the step of providing, using the information signal and the position signal. The obtained information may be shared with other vehicles in this way), in response to a determination that the plurality of feature vectors correspond to a common unrecognized object associated with a representative feature vector (0044, as soon as another vehicle has recognized the traffic sign at the same position, a comparison with server 110 takes place, the confidence level for this traffic sign being raised when there is a positive comparison. When a traffic sign stored on server 110 has a high confidence level, the traffic sign is used in the vehicle as the truth if the vehicle has not correctly recognized the sign), updating the feature vector database with the object type information (0044, the confidence level for this traffic sign being raised when there is a positive comparison), and distributing the updated feature vector database to at least one target vehicle (0034, accordingly, server 110 immediately transmits a confirmation to vehicle 100 that traffic sign 106 has been correctly recognized). 
However, Ewert does not explicitly disclose, a plurality of feature vectors determined by respective ones of the plurality of vehicles using their respective neural network models not to match entries in their respective feature vector databases; generating a representative feature vector by aggregating or averaging the plurality of feature vectors received from the plurality of vehicles, the representative feature vector representing a common unrecognized object; associating the representative feature vector with object type information; and the associated representative feature vector; wherein the at least one target vehicle is configured to execute at least one navigational maneuver based on the updated feature vector database. Nevertheless, Stenneth discloses associating the representative feature vector with object type information (0027, the data describing one or more road signs may be analyzed using a computer vision technique (e.g., edge detection, feature extraction, feature vector classification, or others) taking into consideration the predicted placement), and the associated representative feature vector; (0027, the data describing one or more road signs may be analyzed using a computer vision technique (e.g., edge detection, feature extraction, feature vector classification, or others) taking into consideration the predicted placement). 
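The server-side flow recited in claim 27 (aggregate unmatched feature vectors from multiple vehicles into a representative vector, associate it with object type information, update the feature vector database, and distribute the update to target vehicles) can be reduced to a minimal sketch. All data structures and names below are hypothetical:

```python
def server_update_cycle(reports, object_type, database):
    """Sketch of the claim-27 server flow: average the unmatched feature
    vectors reported by multiple vehicles into a representative vector,
    associate it with the given object type, update the database, and
    return the updated snapshot for distribution to target vehicles.
    Illustrative only; not from the cited references."""
    n = len(reports)
    representative = [sum(v[i] for v in reports) / n for i in range(len(reports[0]))]
    database[object_type] = representative  # associate vector with object type
    return dict(database)                   # snapshot to distribute

db = {}
updated = server_update_cycle([[0.5, 1.0], [1.5, 2.0]], "railroad_crossing", db)
print(updated)  # {'railroad_crossing': [1.0, 1.5]}
```

A target vehicle receiving the distributed snapshot could then classify the previously unrecognized object against the new entry, which is the basis for the claimed navigational maneuver.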
Additionally, Mensink, who is in the same field of endeavor of class mean classifiers, discloses a plurality of feature vectors determined by respective ones of the plurality of vehicles using their respective neural network models not to match entries in their respective feature vector databases (0092, if it does not, which indicates that the classifier is not able to identify any class with sufficient certainty, none of the class labels may be assigned to the image and the image may be given a label corresponding to “unknown class.”) … (0005, one method which has been adapted to large scale classification is referred to as k-nearest neighbor (k-NN) classification); generate a representative feature vector by aggregating or averaging the plurality of feature vectors received from the plurality of vehicles (0036, each centroid can be the mean of the feature vectors of the images assigned to that cluster) … (0030, the class representation 36 may be a function, such as the average (e.g., mean), of the set of D dimensional vectors 38 of the images 16 currently in the database 18 that are labeled with the corresponding class label (or at least a representative sample thereof)), the representative feature vector representing a common unrecognized object (0092, the image and the image may be given a label corresponding to “unknown class.”). Additionally, Aviel discloses, the at least one target vehicle is configured to execute at least one navigational maneuver based on the updated feature vector database (0468, the at least one processor 1715 may cause at least one navigational maneuver (e.g., steering such as making a turn, braking, accelerating, passing another vehicle, etc.) by vehicle 1205 based on the received autonomous vehicle road navigation model or the updated portion of the model). 
Finally, Platonov discloses, a server-based system for updating an object classification database used in vehicle navigation (0074, if the feature descriptors determined for the image do not have a match in the local database 4. Thereby, a database maintained by a central facility may be updated). Regarding claim 28, Ewert, Stenneth, Mensink, Aviel, and Platonov disclose the method of claim 27, as discussed supra. Additionally, Ewert discloses the object type is a traffic sign type (0044, as soon as another vehicle has recognized the traffic sign at the same position, a comparison with server 110 takes place). Regarding claim 29, Ewert, Stenneth, Mensink, Aviel, and Platonov disclose the method of claim 28 as discussed supra. Additionally, Ewert discloses, the traffic sign type is associated with an indication of at least one of a speed limit, a stop, a yield, a merge, a lane shift, or a railroad crossing (0004, the autonomous vehicle may be controlled via further traffic signs such as yield signs or stop signs). Regarding claim 30, Ewert, Stenneth, Mensink, Aviel, and Platonov disclose the method of claim 27 as discussed supra. Additionally, Stenneth discloses the representative feature vector is within a predetermined threshold in Euclidean space of the plurality of feature vectors (0072, the degree of error may be a predetermined angle from the point of collected, a Euclidean distance, or a pixel distance between the predicted placement and the detected placement). 
Regarding claim 31, Ewert discloses, a non-transitory computer readable medium containing instructions that, when executed by a processor in a server-based system for updating an object classification database used in vehicle navigation, cause the processor to perform operations (0025, for this purpose, the device may include at least one processing unit for processing signals or data) … (0044, as soon as another vehicle has recognized the traffic sign at the same position, a comparison with server 110 takes place, the confidence level for this traffic sign being raised when there is a positive comparison. When a traffic sign stored on server 110 has a high confidence level, the traffic sign is used in the vehicle as the truth if the vehicle has not correctly recognized the sign), comprising: receiving drive information from a plurality of vehicles (0018, a piece of information concerning a recognized traffic sign, as a sign recognition information signal, may be provided to another vehicle in the step of providing, using the information signal and the position signal. The obtained information may be shared with other vehicles in this way), in response to a determination that the plurality of feature vectors correspond to a common unrecognized object associated with a representative feature vector (0044, as soon as another vehicle has recognized the traffic sign at the same position, a comparison with server 110 takes place, the confidence level for this traffic sign being raised when there is a positive comparison. 
When a traffic sign stored on server 110 has a high confidence level, the traffic sign is used in the vehicle as the truth if the vehicle has not correctly recognized the sign), updating the feature vector database with the object type information (0044, the confidence level for this traffic sign being raised when there is a positive comparison), and distributing the updated feature vector database to at least one target vehicle (0034, accordingly, server 110 immediately transmits a confirmation to vehicle 100 that traffic sign 106 has been correctly recognized). However, Ewert does not explicitly disclose, a plurality of feature vectors determined by respective ones of the plurality of vehicles using their respective neural network models not to match entries in their respective feature vector databases; generating a representative feature vector by aggregating or averaging the plurality of feature vectors received from the plurality of vehicles, the representative feature vector representing a common unrecognized object; associating the representative feature vector with object type information; and the associated representative feature vector; wherein the at least one target vehicle is configured to execute at least one navigational maneuver based on the updated feature vector database. Nevertheless, Stenneth discloses associating the representative feature vector with object type information (0027, the data describing one or more road signs may be analyzed using a computer vision technique (e.g., edge detection, feature extraction, feature vector classification, or others) taking into consideration the predicted placement), and the associated representative feature vector; (0027, the data describing one or more road signs may be analyzed using a computer vision technique (e.g., edge detection, feature extraction, feature vector classification, or others) taking into consideration the predicted placement). 
Additionally, Mensink, who is in the same field of endeavor of class mean classifiers, discloses a plurality of feature vectors determined by respective ones of the plurality of vehicles using their respective neural network models not to match entries in their respective feature vector databases (0092, if it does not, which indicates that the classifier is not able to identify any class with sufficient certainty, none of the class labels may be assigned to the image and the image may be given a label corresponding to “unknown class.”) … (0005, one method which has been adapted to large scale classification is referred to as k-nearest neighbor (k-NN) classification); generate a representative feature vector by aggregating or averaging the plurality of feature vectors received from the plurality of vehicles (0036, each centroid can be the mean of the feature vectors of the images assigned to that cluster) … (0030, the class representation 36 may be a function, such as the average (e.g., mean), of the set of D dimensional vectors 38 of the images 16 currently in the database 18 that are labeled with the corresponding class label (or at least a representative sample thereof)), the representative feature vector representing a common unrecognized object (0092, the image and the image may be given a label corresponding to “unknown class.”). Additionally, Aviel discloses, the at least one target vehicle is configured to execute at least one navigational maneuver based on the updated feature vector database (0468, the at least one processor 1715 may cause at least one navigational maneuver (e.g., steering such as making a turn, braking, accelerating, passing another vehicle, etc.) by vehicle 1205 based on the received autonomous vehicle road navigation model or the updated portion of the model). 
Finally, Platonov discloses, a server-based system for updating an object classification database used in vehicle navigation (0074, if the feature descriptors determined for the image do not have a match in the local database 4. Thereby, a database maintained by a central facility may be updated). Claims 32 and 34-35 are rejected under 35 U.S.C. 103 as being unpatentable over Ewert (US20190114493A1) in view of Stenneth et al. (US20160104049A1), further in view of Platonov (US20120114178A1). Regarding claim 32, Ewert discloses a method applied to a navigation system for a host vehicle, the method comprising: receiving at least one image from a camera (0034, optical sensor 104 is a camera or a surroundings sensor. Vehicle device 102 is designed for reading in a recognition signal 112 via an interface to optical sensor 104 of vehicle 100), analyzing the at least one image to identify an object represented in the at least one image (0034, Recognition signal 112 represents recognized traffic sign 106 in road traffic. Vehicle device 102 is designed for determining an information signal 114 and a position signal 116, using recognition signal 112, and providing them at an interface to server 110); and causing at least one navigational action to be taken by the host vehicle based on the identified traffic sign type (0034, traffic sign 106 is subsequently displayed to the vehicle driver, and only then does the autonomous vehicle respond to recognized traffic sign 106, confirmation signal 118 being used for controlling a driving maneuver via vehicle device 102). However, Ewert does not explicitly disclose, generating a feature vector representative of the object; and identifying a traffic sign type from a traffic sign database based on the generated feature vector. 
Nevertheless, Stenneth discloses generating a feature vector representative of the object (0027, the data describing one or more road signs may be analyzed using a computer vision technique (e.g., edge detection, feature extraction, feature vector classification, or others) taking into consideration the predicted placement), identifying a traffic sign type from a traffic sign database based on the generated feature vector (0004, the device generates a model that associates values for the detected placement of the road signs with values for the at least one characteristic. The model may be later accessed to interpret subsequent sets of data describing one or more road signs). Additionally, Platonov discloses, the traffic sign database correlates a plurality of feature vectors with a plurality of traffic sign types (0005, a method of performing traffic sign recognition using a neural network trained for certain types of traffic signs). Regarding claim 34, Ewert, Stenneth and Platonov disclose the method of claim 32 as discussed supra. Additionally, Stenneth discloses, the feature vector is generated by a trained neural network (0026, the machine learning model may include a Bayesian model, a neural network, a decision tree, a random forest, or another model for determining sign placement as a function of one or more of the characteristics). Regarding claim 35, Ewert discloses, a non-transitory computer readable medium containing instructions that, when executed by a processor in a navigation system for a host vehicle, cause the processor to perform operations comprising: receiving at least one image from a camera; analyzing the at least one image to identify an object represented in the at least one image (0034, optical sensor 104 is a camera or a surroundings sensor. Vehicle device 102 is designed for reading in a recognition signal 112 via an interface to optical sensor 104 of vehicle 100. 
Recognition signal 112 represents recognized traffic sign 106 in road traffic. Vehicle device 102 is designed for determining an information signal 114 and a position signal 116, using recognition signal 112, and providing them at an interface to server 110); and causing at least one navigational action to be taken by the host vehicle based on the identified traffic sign type (0034, traffic sign 106 is subsequently displayed to the vehicle driver, and only then does the autonomous vehicle respond to recognized traffic sign 106, confirmation signal 118 being used for controlling a driving maneuver via vehicle device 102). However, Ewert does not explicitly disclose, generating a feature vector representative of the object; and identifying a traffic sign type from a traffic sign database based on the generated feature vector. Nevertheless, Stenneth discloses, generating a feature vector representative of the object (0027, the data describing one or more road signs may be analyzed using a computer vision technique (e.g., edge detection, feature extraction, feature vector classification, or others) taking into consideration the predicted placement) and identifying a traffic sign type from a traffic sign database based on the generated feature vector (0004, the device generates a model that associates values for the detected placement of the road signs with values for the at least one characteristic. The model may be later accessed to interpret subsequent sets of data describing one or more road signs). Additionally, Platonov discloses, the traffic sign database correlates a plurality of feature vectors with a plurality of traffic sign types (0005, a method of performing traffic sign recognition using a neural network trained for certain types of traffic signs). Claims 14 and 33 are rejected under 35 U.S.C. 103 as being unpatentable over Ewert (US20190114493A1) in view of Stenneth et al. 
(US20160104049A1), further in view of Platonov (US20120114178A1), further in view of Van der Wal (FPGA Acceleration for Feature Based Processing Applications). Regarding claim 14, Ewert and Stenneth disclose the system of claim 13 as discussed supra. Additionally, Van der Wal, who is in the same field of endeavor of feature based processing applications, discloses the feature vector is a 128-byte value associated with the image representation of the at least one object (2.2. Feature description algorithms, Paragraph 3, with a normalized angle where 0 is the dominant orientation by subtracting the dominance angle, a 128 bin histogram is generated with respect to 16 cells, and 8 gradient orientations. The floating point 128 bin histogram is then clipped and normalized/quantized to a 128-byte unsigned vector). It would have been prima facie obvious to one skilled in the art to combine Ewert and Stenneth’s disclosures to incorporate Van der Wal’s teachings to have a more compact and efficient communication system. The 128-byte descriptor gives lower bandwidth and faster updates to peer-to-peer vehicle databases, thus improving data transmission. Justification for combining Ewert and Stenneth’s disclosures to incorporate Van der Wal’s teachings not only comes from the state of the art but from Stenneth (0099, the illustrations are not intended to serve as a complete description of all of the elements and features of apparatus and systems that utilize the structures or methods described herein. Many other embodiments may be apparent to those of skill in the art upon reviewing the disclosure). Regarding claim 33, Ewert and Stenneth disclose the method of claim 32 as discussed supra. Additionally, Van der Wal discloses, the feature vector is a 128-byte value associated with the image representation of the at least one object (2.2. 
Feature description algorithms, Paragraph 3, with a normalized angle where 0 is the dominant orientation by subtracting the dominance angle, a 128 bin histogram is generated with respect to 16 cells, and 8 gradient orientations. The floating point 128 bin histogram is then clipped and normalized/quantized to a 128-byte unsigned vector).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHANE E DOUGLAS whose telephone number is (703)756-1417. The examiner can normally be reached Monday - Friday 7:30AM - 5:00PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christian Chace can be reached on (571) 272-4190. 
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /S.E.D./Examiner, Art Unit 3665 /CHRISTIAN CHACE/Supervisory Patent Examiner, Art Unit 3665

Prosecution Timeline

Aug 14, 2023
Application Filed
Jun 25, 2025
Non-Final Rejection — §103
Oct 09, 2025
Response Filed
Jan 12, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592101
INFORMATION COMMUNICATION DEVICE OF VEHICLE, INFORMATION MANAGEMENT SERVER, AND INFORMATION COMMUNICATION SYSTEM
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on the most recent grant.


Prosecution Projections

3-4
Expected OA Rounds
17%
Grant Probability
39%
With Interview (+22.2%)
2y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
