DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The Examiner notes that the rejections are based on the broadest reasonable interpretation of the claim language. Applicant is kindly invited to consider each reference as a whole. References are to be interpreted as they would be by one of ordinary skill in the art rather than by a novice. See MPEP 2141. Therefore, the relevant inquiry when interpreting a reference is not what the reference expressly discloses on its face but what the reference would teach or suggest to one of ordinary skill in the art.
Status of the Claims
This is a Final Office Action in response to Applicant’s amendment of 07 January 2026. Claims 1-20 are pending and have been considered as follows.
Response to Amendment and/or Argument
Applicant’s amendments and/or arguments with respect to the Specification Objections to paragraphs [0036], [0048], and [0102], as set forth in the Office Action of 07 November 2025, have been considered and are persuasive. Therefore, those Specification Objections have been withdrawn.
Applicant’s amendments and/or arguments with respect to the Objections to Claims 1 and 12, as set forth in the Office Action of 07 November 2025, have been considered and are persuasive. Therefore, those Claim Objections have been withdrawn.
Applicant’s amendments and/or arguments with respect to the Rejections of Claims 1-20 under 35 U.S.C. 112(b), as set forth in the Office Action of 07 November 2025, have been considered and are persuasive. Therefore, those rejections have been withdrawn.
Applicant’s amendments and/or arguments with respect to the Rejections of Claims 1-20 under 35 U.S.C. 101, as set forth in the Office Action of 07 November 2025, have been considered and are persuasive. Therefore, those rejections have been withdrawn.
Applicant’s amendments and/or arguments with respect to claim(s) 1 and 12 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Objections
Claim 1 is objected to because of the following informalities:
Claim 1, Line 7: “a region of a vehicle” should read --a region of the vehicle--
Claim 1, Line 8: “the road vehicle” should read --the vehicle--
Claim 1, Line 21: “acceleration of a vehicle” should read --acceleration of the vehicle--
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-4, 6, 9, 12-15, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Efland et al. (US 2021/0406559 A1, hereinafter Efland) in view of Ganjineh et al. (US 2020/0098135 A1, hereinafter Ganjineh).
Regarding Claim 1 (Similarly claim 12), Efland discloses A method that is computer implemented and is for data layer augmentation (see at least Abstract), the method comprising:
obtaining, by a processor associated with a vehicle, a data layer associated with road elements of a specified type; (see at least Fig. 1A-7 [0041-0196]: Vehicle may use current map data available to derive a planned trajectory for the vehicle to follow. Map data for a given area of the real world may differ from the actual physical characteristics. The map corresponds to a given area of the real world and is made up of several different layers that each contain different information about the real-world environment. The map may include a base map layer that provides information about the basic road network, such as the locations of road segments, how they interconnect, and the number and directions of lanes in each road segment.)
obtaining, by the processor, localization information regarding a location of the vehicle, wherein the road element information is obtained based on aerial image information within a region of a vehicle and on environmental information sensed by the vehicle; (see at least Fig. 1A-7 [0041-0196]: The vehicle may also capture GPS sensor data that may provide an approximation of the location within the given area of the real-world environment. As the vehicle collects sensor data within the real-world environment, the localization operation of the vehicle may correlate the collected sensor data to available map data in order to localize the vehicle within the map. For example, the captured LiDAR data can be processed using SLAM and then correlated to a 3D representation of the real-world environment embodied in the map using one or more matching algorithms. The derived representation of the surrounding environment perceived by the vehicle may be embodied in the form of a rasterized image that represents the surrounding environment in the form of colored pixels. The rasterized image may represent the surrounding environment perceived by the vehicle from various different perspectives, examples of which may include a “top down” view and a “bird’s eye” view of the surrounding environment, among other possibilities.) and
augmenting the data layer using the localization information, wherein the augmenting of the data layer comprises populating a database with road elements information representing updated locations for a group of road elements of the specified type within the region of the vehicle. (see at least Fig. 1A-7 [0041-0196]: The vehicle equipped with a sensor that captures image data, such as a monocular camera that captures 2D image data. The 2D image data may then be evaluated by comparing it to a set of 2D reference images for the given area that corresponds to the current map data. For example, a computing platform may compare the two image sets using a change-detection model and then identify the detected changes as the traffic control elements. The computing platform may determine one or more map layers to update based on the detected changes and the derived information about the changes. The computing platform may effect updates to the real-time layer by adding information for new semantic elements as depicted in the top-down view showing a visualization of the updated map. Turning to FIG. 4C, another example of effecting map updates based on collected sensor data is illustrated. In FIG. 4C, a top down view of a vehicle 141 operating in a given area of a real-world environment 140 is shown. Within the given area, various permanent changes exist that are not reflected in the map data for the given area, including a new dedicated turn lane 142 , along with a new traffic signal 143 that controls traffic movements in the new lane. Further, the existing traffic signal 144 has been expanded from three lights (i.e., green, yellow, red) to five lights (i.e., green arrow, yellow arrow, green, yellow, red) to incorporate a new, protected left turn cycle for traffic in the opposite direction.)
executing an autonomous driving operation based on the localization information, the autonomous driving operation comprises using the data layer, following the augmenting, to determine at least one driving related parameter, wherein the executing further comprises autonomously controlling at least one of a speed of the vehicle, a direction of propagation of the vehicle or an acceleration of a vehicle. (see at least Fig. 6 [0031, 0138-0165]: one possible use case for the updated maps is to facilitate autonomous operation of a vehicle. The autonomy system may perform a “control” operation, which may generally involve transforming the derived behavior plan for the ego vehicle into one or more control signals (e.g., a set of one or more command messages) for causing the ego vehicle to execute the derived behavior plan, such as control signals for causing the ego vehicle to adjust its steering in a specified manner, accelerate in a specified manner, and/or brake in a specified manner, among other possibilities.)
It may be alleged that Efland does not explicitly teach that the localization information is generated by fusing, by a machine learning process, (a) a movement estimate of the road vehicle indicative of a change in a location of the vehicle between an acquisition of different images acquired by a sensor of the vehicle and (b) the probabilistic location information that is indicative of probabilities of having the vehicle located at different locations within the region, and is generated based on the aerial image information.
Ganjineh is directed to a method and system for determining a geographical location and orientation of a vehicle traveling along a road network to determine a safe course of action. Ganjineh teaches obtaining, by the processor, localization information regarding a location of the vehicle, wherein the road element information is obtained based on aerial image information within a region of a vehicle and on environmental information sensed by the vehicle; the localization information is generated by fusing, by a machine learning process, (a) a movement estimate of the road vehicle indicative of a change in a location of the vehicle between an acquisition of different images acquired by a sensor of the vehicle and (b) the probabilistic location information that is indicative of probabilities of having the vehicle located at different locations within the region, and is generated based on the aerial image information. (see at least Fig. 1, 5-23 [0095-0098, 0117-0280]: obtaining a sequence of images of an environment of the road from a camera associated with a vehicle traveling on a road, each image being associated with a location where that image was obtained; generating a local map representation of an area of the road using images from the sequence of images and the locations associated therewith; processing the images to detect an object in the environment of the road by performing a pixel-wise segmentation on the image using a machine learning algorithm, the pixel-wise segmentation resulting in each pixel being allocated an object class or object class vector indicating a probability of each object class for that pixel; and processing the image to detect the object based at least in part on the object classes or object class vectors; determining at least one transformation for mapping the object between the at least some of the images, the determining including determining a change in position and/or rotation for the object between sequential images based on a 
respective location of the at least one camera where each of the images was captured; and based on the at least one transformation and the locations associated with the at least some of the images, generating a two- and/or three-dimensional representation for the object relative to the area of the road; comparing the local map representation with some or all of a reference map to identify a corresponding section of the reference map; and selectively updating the corresponding section of the reference map based on the local map representation. The local map representation may comprise a top-down two-dimensional image showing the environment of the road network in that area.)
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Efland’s systems and methods for effecting map layer updates based on collected sensor data to incorporate the technique of fusing, by a machine learning process, (a) a movement estimate of the road vehicle indicative of a change in a location of the vehicle between an acquisition of different images acquired by a sensor of the vehicle and (b) the probabilistic location information that is indicative of probabilities of having the vehicle located at different locations within the region, and is generated based on the aerial image information, as taught by Ganjineh, with a reasonable expectation of success, to ensure safe navigation and motion planning (Ganjineh [0003]).
Regarding Claim 2 (similarly claim 13), the combination of Efland in view of Ganjineh teaches The method of claim 1 (similarly claim 12),
Efland further teaches wherein the data layer is a narrow data layer created in association with the specified type; wherein the narrow data layer is selected out of multiple narrow data layers of the database; wherein the multiple narrow data layers comprises a narrow data layer associated with road lanes and one or more other narrow data layers associated with one or more movable road elements. (see at least Fig. 1A-7 [0002, 0041-0196]: The map comprises a plurality of layers, wherein each layer of the map is encoded with a different type of map data. The priors layer 204 may provide semantic information regarding dynamic and behavioral aspects of the real-world environment. The map may include a priors layer that includes previously observed information regarding fixed semantic elements within the real-world environment. For instance, semantic elements identified from captured sensor data during operation may be analyzed for their correlation to the semantic map layer as a reference. This type of snapping may attempt to align lane boundaries, painted road markings, traffic signals, and the like with corresponding elements in the semantic map layer. For example, the priors layer 204 may include priors for a given traffic signal that indicate the order of the light sequence and the dwell time within each state that have been observed in the past. Further, different light sequences may be observed at different times of the day/week, which may also be embodied within the priors layer. Other examples of such fixed semantic elements within the real-world environment are also possible.)
Regarding Claim 3 (similarly claim 14), the combination of Efland in view of Ganjineh teaches The method of claim 1 (similarly claim 12),
Efland further teaches wherein the using of the data layer comprises preparing the vehicle to drive through at least one of a curve, a right turn, or a highway exit. (see at least Fig. 1A [0041-0044]: Vehicle 101 may be an ego vehicle as discussed above, and may have used the map data to derive a planned trajectory, represented by an arrow 102, that involves making a right turn at the intersection. For instance, vehicle 101 or the transportation-matching platform may have determined an alternative route for vehicle 101 to follow that involved turning right at the previous intersection.)
Regarding Claim 4 (similarly claim 15), the combination of Efland in view of Ganjineh teaches The method of claim 1 (similarly claim 12),
Efland further teaches wherein the augmenting involves updating data layers signatures; wherein the obtaining of the localization information comprises generating the localization information. (see at least Fig. 1A-7 [0041-0196]: The vehicle may also capture GPS sensor data that may provide an approximation of the location within the given area of real-world environment. As vehicle collects sensor data within real-world environment, the localization operation of vehicle may correlate the collected sensor data to available map data in order to localize vehicle within the map. The vehicle equipped with a sensor that captures image data, such as a monocular camera that captures 2D image data. The 2D image data may then be evaluated by comparing it to a set of 2D reference images for the given area that corresponds to the current map data. For example, a computing platform may compare the two image sets using a change-detection model and then identify the detected changes as the traffic control elements. The computing platform may determine one or more map layers to update based on the detected changes and the derived information about the changes. The computing platform may effect updates to the real-time layer by adding information for new semantic elements as depicted in the top-down view showing a visualization of the updated map. Turning to FIG. 4C, another example of effecting map updates based on collected sensor data is illustrated. In FIG. 4C, a top down view of a vehicle 141 operating in a given area of a real-world environment 140 is shown. Within the given area, various permanent changes exist that are not reflected in the map data for the given area, including a new dedicated turn lane 142 , along with a new traffic signal 143 that controls traffic movements in the new lane. 
Further, the existing traffic signal 144 has been expanded from three lights (i.e., green, yellow, red) to five lights (i.e., green arrow, yellow arrow, green, yellow, red) to incorporate a new, protected left turn cycle for traffic in the opposite direction.)
Regarding Claim 6 (similarly claim 17), the combination of Efland in view of Ganjineh teaches The method of claim 1 (similarly claim 12),
Efland further teaches wherein the road element information is based on a sub-lane resolution determination of the location of the vehicle; and wherein the augmenting comprises adding static road elements information regarding at least one static road element that was absent from the data layer. (see at least Fig. 1A-7 [0004, 0041-0196]: In a similar way, the collected sensor data may include an indication of how well the captured sensor data in a given area compares to the semantic map layer. For instance, semantic elements identified from captured sensor data during operation may be analyzed for their correlation to the semantic map layer as a reference. This type of snapping may attempt to align lane boundaries, painted road markings, traffic signals, and the like with corresponding elements in the semantic map layer. As with the localization errors, semantic snapping errors may occur when detected semantic features do not sufficiently align with the reference data in the semantic map layer, and a corresponding alert may be generated. The type of change to the given area may include the addition of a new semantic feature to the given area of the real-world environment, and the one or more layers of the map data that is impacted by the detected change may include a semantic map layer that is encoded with semantic map data.)
Regarding Claim 9, the combination of Efland in view of Ganjineh teaches The method of claim 1,
It may be alleged that Efland does not explicitly teach wherein the aerial image information comprises segments of an aerial image that is segmented to a plurality of aerial image segments; wherein the determining of the probabilistic location information comprises matching a selected aerial image segment signature of a plurality of aerial image signatures to a selected sensed image signature of a plurality of sensed image signatures of the different images.
Ganjineh is directed to a method and system for determining a geographical location and orientation of a vehicle traveling along a road network to determine a safe course of action. Ganjineh teaches wherein the aerial image information comprises segments of an aerial image that is segmented to a plurality of aerial image segments; wherein the determining of the probabilistic location information comprises matching a selected aerial image segment signature of a plurality of aerial image signatures to a selected sensed image signature of a plurality of sensed image signatures of the different images. (see at least [0026, 0065-0072]: the local map representation so obtained by processing the plurality of images is generally indicative of the environment of the road network of the area around the vehicle. The local map representation that is generated can then be compared (e.g. matched) with a reference map section covering (at least) the approximate area within which the vehicle is travelling in order to determine the geographical location and orientation of the vehicle within the road network, e.g. by determining the vehicle's ego motion relative to the reference map. At least some of the images are processed in order to detect (and extract) one or more landmark object features for inclusion into the local map representation. In general, a landmark object feature may comprise any feature that is indicative or characteristic of the environment of the road network and that may be suitably and desirably incorporated into the local map representation, e.g. to facilitate the matching and/or aligning of the local map representation with a reference map section. For instance, any pixels, or groups of pixels, in an image that have been allocated an object class corresponding to a landmark may be identified on that basis as being regions of interest, i.e. regions that may (potentially) contain a landmark. 
In this way, the semantic segmentation may be used directly to detect and identify various landmark objects within the image(s). That is, the system extracts features or regions of interest from images and matches them to corresponding portions of a reference map to determine alignment or a possible location.)
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Efland’s systems and methods for effecting map layer updates based on collected sensor data to incorporate the technique of determining the probabilistic location information by matching a selected image to a reference map, as taught by Ganjineh, with a reasonable expectation of success, to ensure safe navigation and motion planning (Ganjineh [0003]).
Claim(s) 5 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Efland in view of Ganjineh and Shen et al. (US 2024/0395027 A1, hereinafter Shen).
Regarding Claim 5 (similarly claim 16), the combination of Efland in view of Ganjineh teaches The method of claim 1 (similarly claim 12),
the combination of Efland in view of Ganjineh does not explicitly teach wherein the group of road elements of the specified type are static road elements; wherein the method comprises generating static road elements information by applying zero-shot learning; and wherein the augmenting is based on a confidence level associated with the static road elements, wherein the confidence level is based on a success rate of the zero-shot learning, wherein the zero-shot learning comprises training a model to recognize classes not provided to the model during the training.
Shen is directed to object classification using multiple labels for autonomous systems and applications. Shen teaches wherein the group of road elements of the specified type are static road elements; wherein the method comprises generating static road elements information by applying zero-shot learning; and wherein the augmenting is based on a confidence level associated with the static road elements, wherein the confidence level is based on a success rate of the zero-shot learning, wherein the zero-shot learning comprises training a model to recognize classes not provided to the model during the training. (see at least [0001-0005, 0130, 0178-0180]: the current systems are able to classify an object, such as a traffic sign, even when there was little or no data for training the neural network(s) for the object. The systems are able to classify such an object—e.g., using few-shot or zero-shot learning—since the neural network(s) is trained to classify other objects that share one or more of the same attribute classifications as the classified object, which may mean that the other objects are at least partially related to the classified object. The DLA may be used to run any type of network to enhance control and driving safety, including, for example, a neural network that outputs a measure of confidence for each object detection. Such a confidence value may be interpreted as a probability, or as providing a relative “weight” of each detection compared to other detections. This confidence value enables the system to make further decisions regarding which detections should be considered as true positive detections rather than false positive detections.)
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Efland and Ganjineh to incorporate the technique of generating static road elements information by applying zero-shot learning, wherein the confidence level is based on a success rate of the zero-shot learning, as taught by Shen, with a reasonable expectation of success, to allow machine learning models to identify and classify objects without prior exposure.
Claim(s) 7 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Efland in view of Ganjineh and Douglas (US 2024/0175715 A1).
Regarding Claim 7 (similarly claim 18), the combination of Efland in view of Ganjineh teaches The method of claim 1 (similarly claim 12), comprising
the combination of Efland in view of Ganjineh does not explicitly teach applying a hysteresis that imposes a minimum time between consecutive updates of the aerial images information.
Douglas is directed to systems and methods of updating maps. Douglas teaches applying a hysteresis that imposes a minimum time between consecutive updates of the aerial images information. (see at least [0017-0019, 0088-0091]: A high-priority update refers to updates that need to be applied to the base map in real time from when they are available. An intermediate priority refers to updates that need to be applied to the base map but do not need to be applied in real time and can have a delay prior to applying them to the base map. A low-priority update refers to updates that need to be applied to the base map but can be delayed for a time period longer than the intermediate-priority updates.)
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Efland and Ganjineh to incorporate the technique of imposing a minimum time between consecutive updates of the map information, as taught by Douglas, with a reasonable expectation of success, to ensure higher-quality map data updates by obtaining consistent map state information.
Claim(s) 8, 10, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Efland in view of Ganjineh and Liu et al. (US 2022/0155098 A1, hereinafter Liu).
Regarding Claim 8 (similarly claim 19), the combination of Efland in view of Ganjineh teaches The method of claim 1 (similarly claim 12),
Efland further discloses wherein the group of road elements are relevant to a driving path of the vehicle, wherein the group of road elements of the specified type are static road elements; (see at least Fig. 1A-7 [0041-0196]: As another example, deriving information about the detected change may involve deriving a level of permanence associated with the change, which may be closely related to the type of change. Deriving a level of permanence associated with the change may inform which layer(s) of the map should be updated in response to the change, as discussed further below. For instance, a construction zone (i.e., a new semantic feature) and observed vehicle trajectories in the area of the construction zone (i.e., a newly observed behavior pattern) may represent temporary changes, whereas a widened roadway and a new traffic lane (i.e., changes to the physical geometry of the given area) may represent more permanent changes. In this regard, certain aspects of information related to the permanence of a given change might be inherently derived at the time that the type of change is classified.)
It may be alleged that the combination of Efland in view of Ganjineh does not explicitly teach wherein the augmenting comprises assigning weights to the static road elements, wherein the weights are based on confidence levels associated with static road elements, wherein different weights are assigned to static road elements located within different distance ranges from the driving path of the vehicle.
Liu is directed to a map updating method and apparatus. Liu teaches wherein the augmenting comprises assigning weights to the static road elements, wherein the weights are based on confidence levels associated with static road elements, wherein different weights are assigned to static road elements located within different distance ranges from the driving path of the vehicle. (see at least [0014-0032, 0087-0111]: The map data includes a category of the map element and attribute information of the map element. The map data at the current position of the first vehicle is compared with the prestored map data at a corresponding position, to determine whether the map data at the current position of the first vehicle matches the prestored map data at the corresponding position. The map element in this embodiment includes but is not limited to a traffic signal light, a traffic sign, a road element, a toll station, an inspection station, and the like. The road element includes but is not limited to a lane line, a lane sideline, a stop line, a pedestrian crosswalk, a ramp, and the like. The map update information includes at least one of the following: a coordinate position of the to-be-updated map element on the map, a category of a to-be-updated map element, a variation of the to-be-updated map element on the map, an impact level that is of vehicle traveling and that corresponds to the to-be-updated map element, and a data source of the to-be-updated map element. The confidence of the map update information may be determined based on confidence and a weight value of the data collection apparatus that provides the to-be-updated map element. For example, the to-be-updated map element includes only a lane width change, and the change is determined based on environmental data collected by the laser radar sensor and the camera. 
Therefore, the confidence of the map update information including the lane width change is determined by first confidence and a first weight value of the laser radar sensor, and second confidence and a second weight value of the camera.)
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Efland and Ganjineh to incorporate the technique of assigning weights to the static road elements, wherein the weights are based on confidence levels associated with static road elements, wherein different weights are assigned to static road elements located within different distance ranges from the driving path of the vehicle, as taught by Liu, with a reasonable expectation of success, as doing so would improve traveling safety of the automatic driving vehicle (Liu [0005]).
Regarding Claim 10, the combination of Efland in view of Ganjineh teaches The method of claim 1 (similarly claim 12),
It may be alleged that the combination of Efland in view of Ganjineh does not explicitly teach wherein the group of road elements; wherein the augmenting comprises assigning weights to the static road elements, wherein the weights are based on confidence levels associated to the static road elements, the confidence level is based on signal to noise of the information sensed by the vehicle.
Liu is directed to a map updating method and apparatus. Liu teaches wherein the group of road elements; wherein the augmenting comprises assigning weights to the static road elements, wherein the weights are based on confidence levels associated to the static road elements, the confidence level is based on signal to noise of the information sensed by the vehicle. (see at least [0014-0032, 0087-0111]: The map data includes a category of the map element and attribute information of the map element. The map data at the current position of the first vehicle is compared with the prestored map data at a corresponding position, to determine whether the map data at the current position of the first vehicle matches the prestored map data at the corresponding position. The map element in this embodiment includes but is not limited to a traffic signal light, a traffic sign, a road element, a toll station, an inspection station, and the like. The road element includes but is not limited to a lane line, a lane sideline, a stop line, a pedestrian crosswalk, a ramp, and the like. The map update information includes at least one of the following: a coordinate position of the to-be-updated map element on the map, a category of a to-be-updated map element, a variation of the to-be-updated map element on the map, an impact level that is of vehicle traveling and that corresponds to the to-be-updated map element, and a data source of the to-be-updated map element. The confidence of the map update information may be determined based on confidence and a weight value of the data collection apparatus that provides the to-be-updated map element. For example, the to-be-updated map element includes only a lane width change, and the change is determined based on environmental data collected by the laser radar sensor and the camera.
Therefore, the confidence of the map update information including the lane width change is determined by first confidence and a first weight value of the laser radar sensor, and second confidence and a second weight value of the camera.)
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Efland and Ganjineh to incorporate the technique of assigning weights to the static road elements, wherein the weights are based on confidence levels associated to the static road elements, the confidence level is based on signal to noise of the information sensed by the vehicle, as taught by Liu, with a reasonable expectation of success; doing so would improve the traveling safety of the automatic driving vehicle (Liu [0005]).
Claim(s) 11 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Efland in view of Ganjineh and McGavran et al. (US 2020/0166363 A1, hereinafter McGavran).
Regarding Claim 11 (similarly claim 20), the combination of Efland in view of Ganjineh teaches The method of claim 1 (similarly claim 12),
Efland further teaches wherein the database is stored within a memory unit of the vehicle. (see at least Fig. 1A-7 [0041-0196]: The collected sensor data may be evaluated and, based on the evaluation, a change to a given area of the real-world environment may be detected. Depending on the nature of the detected change, these operations may be performed on-vehicle, off-vehicle by a back-end computing platform that collects captured sensor data from a plurality of vehicles, or some combination of these.)
The combination of Efland in view of Ganjineh does not explicitly teach wherein the database is access controlled and wherein the method further comprises granting access to the database to define entities and delivering the populated database as downloadable software to a recipient.
McGavran is directed to a system and method for accessing and processing data associated with a vehicle map service. McGavran teaches wherein the database is access controlled and wherein the method further comprises granting access to the database to define entities and delivering the populated database as downloadable software to a recipient. (see at least Figs. 1-8 [0057-0096, 0110-0286]: The map data for the current region can be downloaded before use from a map provider, so it can be available and up to date. The vehicle map service system can provide, to each of the plurality of client systems, access to the plurality of layers to which each of the plurality of client systems is subscribed. Access to the plurality of layers can include authorization to send or receive one or more portions of the vehicle map service data associated with a corresponding layer of the plurality of layers. For example, the vehicle map service system can provide access data to a client system that can be used to send and receive vehicle map service data to other client systems.)
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Efland's systems and methods for effecting map layer updates based on collected sensor data to incorporate the technique of ensuring the database is access controlled, granting access to the database to define entities, and delivering the populated database as downloadable software to a recipient, as taught by McGavran, with a reasonable expectation of success; doing so would ensure that map data can be available and up to date (McGavran [0069]).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANA F ARTIMEZ whose telephone number is (571)272-3410. The examiner can normally be reached M-F: 9:00 am-3:30 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Faris S. Almatrahi can be reached at (313) 446-4821. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANA F ARTIMEZ/Examiner, Art Unit 3667
/FARIS S ALMATRAHI/Supervisory Patent Examiner, Art Unit 3667