DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/19/2025 has been entered.
Claims 46-47 and 49-65 are currently pending and examined below. Claims 46 and 64-65 have been amended.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Applicant's arguments, see pages 8-12, filed 12/19/2025, have been fully considered but they are not persuasive.
With respect to claims 46, 55, 64 and 65,
See below for the detailed claim mapping; the Examiner maintains the 35 U.S.C. 103 rejections for the following reasons:
Gupta determines the location at which the camera image was captured (which is the vehicle location relative to the identified road structure) and the location at which the map image was captured. Gupta then aligns the road features in the camera image and the retrieved map image to find a displacement between the two locations. The displacement is added to the location of the map image, yielding the refined vehicle location on the map image. For this reason, the Examiner maintains the rejection of claim 55.
Gupta discloses in [0004] “Previous road and lane departure systems rely on the use of road and lane markings, which are not always visible (for instance, during poor-visibility weather conditions)” and in [0057] “the road boundary model module 240 uses satellite imagery to identify road edges”, meaning that Gupta’s invention addresses this problem by incorporating road and lane markings retrieved from satellite images, which are not visible in the vehicle images, into the road boundary model, in addition to the road features used for alignment. The road boundary model is essentially the claimed merged road structure, and its location is relative to the refined vehicle location on the map image. For this reason, the Examiner maintains the rejections of claims 46, 64 and 65.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 46-47, 49-55 and 59-65 are rejected under 35 U.S.C. 103 as being unpatentable over Gupta et al. (US 20120050489 A1; hereinafter Gupta) in view of Zang et al. (US 20190130182 A1; hereinafter Zang).
Regarding claim 46, Gupta discloses:
A vehicle localization method implemented in a computer system (Fig. 3), the method comprising:
receiving a predetermined road map (map image module 130 in vehicle 100 retrieves map image; Fig. 3, [0024]-[0027]);
receiving at least one road image (camera 110 in vehicle 100 retrieves camera image of the road; Fig. 3, [0022]) for determining a vehicle location of a vehicle (initial location estimate; Fig. 3, [0047]-[0048])(initial location module 340 determines initial location estimate based on the camera image; Fig. 3, [0047]-[0048]);
processing, by a road detection component (processor 200; Fig. 2), the at least one road image, to identify therein road structure (road feature of the camera image; Fig. 3, [0049]-[0050]) for matching with corresponding structure (road feature of the map image; Fig. 3, [0049]-[0050]) of the predetermined road map ([0050] “The 3D feature alignment module 350 aligns the 3D features of the retrieved vehicle image with the 3D features of the retrieved map images”), and determine the vehicle location relative to the identified road structure ([0053] “the location at which the retrieved vehicle image was captured”);
using the determined vehicle location relative to the identified road structure ([0053] “the location at which the retrieved vehicle image was captured”) to determine a vehicle location on the predetermined road map ([0053] “The displacement module 360 determines the displacement (for instance, the distance and angle, or the delta in latitude/longitude coordinates) between the location at which the retrieved vehicle image was captured and the location represented by the selected map image's location information”; [0055] “The refined vehicle location 370 is determined by adding the determined displacement to the location represented by the selected map image's location information”), by matching the road structure identified in the at least one road image with the corresponding road structure of the predetermined road map ([0054] “the displacement between locations based on the 3D features of the selected map image and the retrieved vehicle image”);
wherein the method further comprises:
using the determined vehicle location on the predetermined road map to determine a location, relative to the vehicle location, of an expected road structure indicated by the predetermined road map, wherein the expected road structure is not identifiable in the at least one road image ([0004] “Previous road and lane departure systems rely on the use of road and lane markings, which are not always visible (for instance, during poor-visibility weather conditions)”; [0057] “the road boundary model module 240 uses satellite imagery to identify road edges”, meaning that Gupta’s invention addresses this problem by incorporating road and lane markings retrieved from satellite imagery, which are not visible in the vehicle images, into the road boundary model); and
merging the road structure identified in the at least one road image with the expected road structure indicated by the predetermined road map, to determine merged road structure ([0060] “images retrieved from the map imagery module 130, and/or images from the camera 110 to create a road boundary model”) and a location of the merged road structure relative to the vehicle location on the predetermined road map ([0055] “The refined vehicle location 370 is determined by adding the determined displacement to the location represented by the selected map image's location information”).
Gupta does not specifically disclose:
controlling an operation of the vehicle based on the determined vehicle location on the predetermined road map;
wherein the road structure identified in the at least one road image comprises a centre line that is matched with a corresponding centre line of the predetermined road map, wherein determining the vehicle location relative thereto comprises determining a lateral separation between the vehicle and the centre line in a direction perpendicular to the centre line.
However, Zang discloses:
controlling an operation of the vehicle based on the determined vehicle location on the predetermined road map (using HD map to assist the vehicle in executing controlled maneuvers beyond its sensing range; [0032]);
wherein the road structure identified in the at least one road image comprises a centre line that is matched with a corresponding centre line of the predetermined road map, wherein determining the vehicle location relative thereto comprises determining a lateral separation between the vehicle and the centre line in a direction perpendicular to the centre line (overhead imagery is analyzed to define lane markings/centerlines/road chunks in the HD maps, which are accessible to the vehicles via database 123 in the server 125. Probe 101 captures the lane markings/centerlines/road chunks in images, which are matched with the lane markings/centerlines/road chunks of the HD maps to deliver a driving command instructing the vehicle to perform certain maneuvers in response to the position of the lane markings/centerlines/road chunks in the lateral/perpendicular or longitudinal direction. In other words, the vehicle maneuver depends on the vehicle position relative to the position of the lane markings/centerlines/road chunks in the lateral/perpendicular or longitudinal direction; [0040]-[0049], [0071], [0077]-[0078]).
Gupta and Zang are considered to be analogous because they are in the same field of vehicle localization. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gupta’s vehicle localization to further incorporate Zang’s vehicle localization for the advantage of determining a certain distance between the vehicle and the lane markings/road segments, which results in improved safety, automated driving, and the like (Zang’s [0003], [0049]).
Regarding claim 47, Gupta does not specifically disclose:
wherein the road structure identified in the at least one road image comprises a junction region for matching with a corresponding junction region of the predetermined road map;
wherein determining the vehicle location relative thereto comprises determining a longitudinal separation between the vehicle and the junction region in a direction along a road being travelled by the vehicle.
However, Zang discloses:
wherein the road structure identified in the at least one road image comprises a junction region (chunks; [0078]) for matching with a corresponding junction region of the predetermined road map (the length of roadway under analysis is divided into sections in the longitudinal direction, or direction of travel, called chunks; [0078]);
wherein determining the vehicle location relative thereto comprises determining a longitudinal separation between the vehicle and the junction region in a direction along a road being travelled by the vehicle (the point cloud may be measured in distances and angles between the object described by the points in the point cloud to the collection device, the object may be chunk; [0045], [0078]).
Gupta and Zang are considered to be analogous because they are in the same field of vehicle localization. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gupta’s vehicle localization to further incorporate Zang’s vehicle localization for the advantage of determining a certain distance between the vehicle and the lane markings/road segments, which results in improved safety, automated driving, and the like (Zang’s [0003], [0049]).
Regarding claim 49, Gupta discloses:
wherein the road detection component identifies the road structure in the at least one road image and the vehicle location relative to the identified road structure by assigning, to each of a plurality of spatial points (road pixels in the camera image; [0037]-[0041]) within the image, at least one road structure classification value (road pixels in the camera image are classified; [0037]-[0041])(classifier module 310 of the visual road classification 220 classifies the road pixels of the road structure of the camera image; [0037]-[0041], [0049]), and determining a location of those spatial points in a vehicle frame of reference (classifier module 310 of the visual road classification 220 classifies the road pixels of the road structure of the camera image of the camera 110 in the vehicle 100, thus the road pixels in the camera image are in the vehicle 100 frame of reference; [0037]-[0041], [0049]).
Regarding claim 50, Gupta discloses:
wherein the merging comprises merging the road structure classification value assigned to each of those spatial points with a corresponding road structure value determined from the predetermined road map for a corresponding spatial point on the predetermined road map (output of classifier module 310 is merged with output of the 3D feature alignment module 350 that detects the road feature of the map image; Fig. 3, [0037]-[0041], [0049]).
Regarding claim 51, Gupta discloses:
comprising:
determining an approximate vehicle location on the predetermined road map (initial location module 340 determines initial location estimate; Fig. 3, [0047]-[0048]) and using the approximate vehicle location to determine a target area of the map (3D feature alignment module 350 retrieves the map image within a pre-determined radius of the initial location estimate; [0048]) containing the corresponding road structure for matching with the road structure identified in the at least one road image (3D feature alignment module 350 detects and aligns the road feature of the map image with the road feature of the camera image; Fig. 3, [0049]-[0050]), wherein the vehicle location on the predetermined road map that is determined by matching those structures has a greater accuracy than the approximate vehicle location (displacement module 360 determines the refined vehicle location 370 by adding the displacement based on the road feature of the map image and the road feature of the camera image, the refined vehicle location 370 has sub-meter accuracy, thus is more accurate than the initial vehicle location; [0054]-[0055]).
Regarding claim 52, Gupta discloses:
wherein the road image comprises 3D image data and the vehicle location relative to the identified road structure is determined using depth information of the 3D image data (the camera image is a 3D image including the road feature of the camera image; the vehicle location relative to the identified road structure is determined using the depth information of the 3D image; [0022]).
Regarding claim 53, Gupta discloses:
wherein the predetermined road map is a two-dimensional road map (2D map image of the road; [0049], [0051]) and the method comprises a step of using the depth information to geometrically project the identified road structure onto a plane (ground plane; [0051]) of the two dimensional road map for matching with the corresponding road structure of the two dimensional road map (3D feature alignment module 350 projects 3D road feature onto the ground plane of 2D map image; [0051]).
Regarding claim 54, Gupta discloses:
wherein the road map is a three-dimensional road map (3D features of the 2D map image indicate a 3D map image; [0051]), the vehicle location on the predetermined road map being a three dimensional location in a frame of reference of the predetermined road map (3D feature alignment module 350 retrieves a 3D map image based on the initial location estimate; [0048]).
Regarding claim 55, Gupta discloses:
comprising:
determining an error estimate for the determined vehicle location on the predetermined road map ([0055] “The refined vehicle location 370 is determined by adding the determined displacement to the location represented by the selected map image's location information”), based on the matching of the road structure with the corresponding road structure of the predetermined road map ([0054] “the displacement between locations based on the 3D features of the selected map image and the retrieved vehicle image”).
Regarding claim 59, Gupta does not specifically disclose:
wherein the road detection component comprises a convolutional neural network, the road structure being identified by applying the convolutional neural network to the at least one road image.
However, Zang discloses:
wherein the road detection component comprises a convolutional neural network, the road structure being identified by applying the convolutional neural network to the at least one road image (classifier 142 may execute a neural network such as a convolutional neural network for the image analysis; [0062]).
Gupta and Zang are considered to be analogous because they are in the same field of vehicle localization. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gupta’s vehicle localization to further incorporate Zang’s vehicle localization for the advantage of summing a probability value of lane markings and determining lane markings even when they are occluded, which results in improved safety, automated driving, and the like (Zang’s [0036], [0064]).
Regarding claim 60, Gupta does not specifically disclose:
wherein the road structure identified in the at least one road image comprises a road shape identified by applying a first convolutional neural network to the at least one road image, and a junction region for matching with a corresponding junction region of the predetermined road map, the junction region identified by applying a second convolutional neural network to the at least one road image.
However, Zang discloses:
wherein the road structure identified in the at least one road image comprises a road shape identified by applying a first convolutional neural network to the at least one road image (classifier 142 determines shape of lane lines; [0058]), and a junction region for matching with a corresponding junction region of the predetermined road map, the junction region identified by applying a second convolutional neural network to the at least one road image (The neural network may provide the probability value for each pixel of the target region or each group of pixels within the roadway boundary; [0058]).
Gupta and Zang are considered to be analogous because they are in the same field of vehicle localization. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gupta’s vehicle localization to further incorporate Zang’s vehicle localization for the advantage of summing a probability value of a target region and determining the target region even when it is occluded, which results in improved safety, automated driving, and the like (Zang’s [0036], [0064]).
Regarding claim 61, Gupta discloses:
wherein the matching is performed by determining an approximate vehicle location on the predetermined road map (initial location module 340 determines initial location estimate; Fig. 3, [0047]-[0048]), determining a region of the predetermined road map corresponding to the at least one road image based on the approximate location (3D feature alignment module 350 retrieves the map image within a pre-determined radius of the initial location estimate; [0048]), computing an error between the at least one road image and the corresponding region of the predetermined road map (3D feature alignment module 350 detects and aligns the road feature of the map image with the road feature of the camera image; Fig. 3, [0049]-[0050]), and adapting the approximate location using an optimization algorithm to minimize the computed error, and thereby determining the said vehicle location on the predetermined road map (displacement module 360 determines the refined vehicle location 370 by adding the displacement based on the road feature of the map image and the road feature of the camera image; adding the displacement minimizes the error and is therefore considered an optimization algorithm; [0054]-[0055]).
Regarding claim 62, Gupta discloses:
comprising determining an error estimate for the determined vehicle location on the predetermined road map, based on the matching of the road structure with the corresponding road structure of the predetermined road map (3D feature alignment module 350 detects and aligns the road feature of the map image with the road feature of the camera image; Fig. 3, [0049]-[0050]), wherein the determined error estimate comprises or is derived from the error between the road image and a corresponding region of the predetermined road map as computed upon completion of an optimization algorithm (displacement module 360 determines the displacement between locations of the road feature of the camera image and the road feature of the map image and determines the refined vehicle location 370 of the map image; Fig. 3, [0047]-[0050], [0054]-[0055]).
Regarding claim 63, Gupta discloses:
wherein the road structure identified in the at least one road image is matched with the corresponding road structure of the predetermined road map by matching a shape of the identified road structure with a shape of the corresponding road structure (3D feature alignment module 350 detects and aligns a shape of the road feature of the camera image with a shape of the road feature of the map image; Fig. 3, [0049]-[0050]).
Regarding claim 64, Gupta discloses:
A computer system (Fig. 3), comprising:
one or more hardware processors (processor 200; Fig. 2) configured to:
process at least one road image, to identify therein road structure (road feature of the camera image; Fig. 3, [0049]-[0050]) for matching with corresponding structure (road feature of the map image; Fig. 3, [0049]-[0050]) of a predetermined road map ([0050] “The 3D feature alignment module 350 aligns the 3D features of the retrieved vehicle image with the 3D features of the retrieved map images”), and determine a vehicle location of a vehicle relative to an identified road structure ([0053] “the location at which the retrieved vehicle image was captured”);
use a determined vehicle location relative to the identified road structure ([0053] “the location at which the retrieved vehicle image was captured”) to determine a vehicle location on the predetermined road map ([0053] “The displacement module 360 determines the displacement (for instance, the distance and angle, or the delta in latitude/longitude coordinates) between the location at which the retrieved vehicle image was captured and the location represented by the selected map image's location information”; [0055] “The refined vehicle location 370 is determined by adding the determined displacement to the location represented by the selected map image's location information”), by matching the road structure identified in the at least one road image with the corresponding road structure of the predetermined road map ([0054] “the displacement between locations based on the 3D features of the selected map image and the retrieved vehicle image”);
wherein the one or more hardware processors are further configured to:
use the determined vehicle location on the predetermined road map to determine a location, relative to the vehicle location, of an expected road structure indicated by the predetermined road map, wherein the expected road structure is not identifiable in the at least one road image ([0004] “Previous road and lane departure systems rely on the use of road and lane markings, which are not always visible (for instance, during poor-visibility weather conditions)”; [0057] “the road boundary model module 240 uses satellite imagery to identify road edges”, meaning that Gupta’s invention addresses this problem by incorporating road and lane markings retrieved from satellite imagery, which are not visible in the vehicle images, into the road boundary model); and
merge the road structure identified in the at least one road image with the expected road structure indicated by the predetermined road map, to determine merged road structure ([0060] “images retrieved from the map imagery module 130, and/or images from the camera 110 to create a road boundary model”) and a location of the merged road structure relative to the vehicle location on the predetermined road map ([0055] “The refined vehicle location 370 is determined by adding the determined displacement to the location represented by the selected map image's location information”).
Gupta does not specifically disclose:
control an operation of the vehicle based on the determined vehicle location on the predetermined road map;
wherein the road structure identified in the at least one road image comprises a junction region that is matched with a corresponding region of the predetermined road map, wherein determining the vehicle location relative thereto comprises determining a separation between the vehicle and the junction region in a direction along a road being travelled by the vehicle;
wherein the expected road structure is not identifiable in the at least one road image.
However, Zang discloses:
control an operation of the vehicle based on the determined vehicle location on the predetermined road map (using HD map to assist the vehicle in executing controlled maneuvers beyond its sensing range; [0032]);
wherein the road structure identified in the at least one road image comprises a junction region that is matched with a corresponding region of the predetermined road map, wherein determining the vehicle location relative thereto comprises determining a separation between the vehicle and the junction region in a direction along a road being travelled by the vehicle (overhead imagery is analyzed to define lane markings/centerlines/road chunks in the HD maps, which are accessible to the vehicles via database 123 in the server 125. Probe 101 captures the lane markings/centerlines/road chunks in images, which are matched with the lane markings/centerlines/road chunks of the HD maps to deliver a driving command instructing the vehicle to perform certain maneuvers in response to the position of the lane markings/centerlines/road chunks in the lateral/perpendicular or longitudinal direction. In other words, the vehicle maneuver depends on the vehicle position relative to the position of the lane markings/centerlines/road chunks in the lateral/perpendicular or longitudinal direction; [0040]-[0049], [0071], [0077]-[0078]).
Gupta and Zang are considered to be analogous because they are in the same field of vehicle localization. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gupta’s vehicle localization to further incorporate Zang’s vehicle localization for the advantage of determining a certain distance between the vehicle and the lane markings/road segments, which results in improved safety, automated driving, and the like (Zang’s [0003], [0049]).
Regarding claim 65, Gupta discloses:
A computer program comprising executable instructions stored on a non-transitory computer-readable storage medium (memory 210; Fig. 2) and configured, when executed on one or more processors (processor 200; Fig. 2), to implement operations comprising:
receiving a predetermined road map (map image module 130 in vehicle 100 retrieves map image; Fig. 3, [0024]-[0027]);
receiving at least one road image (camera 110 in vehicle 100 retrieves camera image of the road; Fig. 3, [0022]) for determining a vehicle location (initial location estimate; Fig. 3, [0047]-[0048])(initial location module 340 determines initial location estimate based on the camera image; Fig. 3, [0047]-[0048]);
processing, by a road detection component (processor 200; Fig. 2), the at least one road image, to identify therein road structure (road feature of the camera image; Fig. 3, [0049]-[0050]) for matching with corresponding structure (road feature of the map image; Fig. 3, [0049]-[0050]) of the predetermined road map ([0050] “The 3D feature alignment module 350 aligns the 3D features of the retrieved vehicle image with the 3D features of the retrieved map images”), and determine the vehicle location relative to the identified road structure ([0053] “the location at which the retrieved vehicle image was captured”);
using the determined vehicle location relative to the identified road structure ([0053] “the location at which the retrieved vehicle image was captured”) to determine a vehicle location on the predetermined road map ([0053] “The displacement module 360 determines the displacement (for instance, the distance and angle, or the delta in latitude/longitude coordinates) between the location at which the retrieved vehicle image was captured and the location represented by the selected map image's location information”; [0055] “The refined vehicle location 370 is determined by adding the determined displacement to the location represented by the selected map image's location information”), by matching the road structure identified in the at least one road image with the corresponding road structure of the predetermined road map ([0054] “the displacement between locations based on the 3D features of the selected map image and the retrieved vehicle image”);
wherein the operations further comprise:
using the determined vehicle location on the predetermined road map to determine a location, relative to the vehicle location, of an expected road structure indicated by the predetermined road map, wherein the expected road structure is not identifiable in the at least one road image ([0004] “Previous road and lane departure systems rely on the use of road and lane markings, which are not always visible (for instance, during poor-visibility weather conditions)”; [0057] “the road boundary model module 240 uses satellite imagery to identify road edges”, meaning that Gupta’s invention addresses this problem by incorporating road and lane markings retrieved from satellite imagery, which are not visible in the vehicle images, into the road boundary model); and
merging the road structure identified in the at least one road image with the expected road structure indicated by the predetermined road map, to determine merged road structure ([0060] “images retrieved from the map imagery module 130, and/or images from the camera 110 to create a road boundary model”) and a location of the merged road structure relative to the vehicle location on the predetermined road map ([0055] “The refined vehicle location 370 is determined by adding the determined displacement to the location represented by the selected map image's location information”).
Gupta does not specifically disclose:
controlling an operation of the vehicle based on the determined vehicle location on the predetermined road map;
wherein the road structure identified in the at least one road image comprises a centre line that is matched with a corresponding centre line of the predetermined road map, wherein determining the vehicle location relative thereto comprises determining a lateral separation between the vehicle and the centre line in a direction perpendicular to the centre line.
However, Zang discloses:
controlling an operation of the vehicle based on the determined vehicle location on the predetermined road map (using HD map to assist the vehicle in executing controlled maneuvers beyond its sensing range; [0032]);
wherein the road structure identified in the at least one road image comprises a centre line that is matched with a corresponding centre line of the predetermined road map, wherein determining the vehicle location relative thereto comprises determining a lateral separation between the vehicle and the centre line in a direction perpendicular to the centre line (overhead imagery is analyzed to define lane markings/centerlines/road chunks in the HD maps, which are accessible to the vehicles via database 123 in the server 125. Probe 101 captures the lane markings/centerlines/road chunks in images, which are matched with the lane markings/centerlines/road chunks of the HD maps to deliver a driving command instructing the vehicle to perform certain maneuvers in response to the position of the lane markings/centerlines/road chunks in the lateral/perpendicular or longitudinal direction. In other words, the vehicle maneuver depends on the vehicle position relative to the position of the lane markings/centerlines/road chunks in the lateral/perpendicular or longitudinal direction; [0040]-[0049], [0071], [0077]-[0078]).
Gupta and Zang are considered to be analogous because they are in the same field of vehicle localization. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gupta's vehicle localization to further incorporate Zang's vehicle localization for the advantage of determining a certain distance between the vehicle and the lane markings/road segments, which results in improved safety, automated driving, and the like (Zang's [0003], [0049]).
Claim 56 is rejected under 35 U.S.C. 103 as being unpatentable over Gupta, in view of Zang, and further in view of Turgay et al., "A framework for global vehicle localization using stereo images and satellite and road maps", 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), IEEE, 6 November 2011, pages 2034-2041.
Regarding claim 56, Gupta discloses:
comprising:
receiving one or more further vehicle location estimates on the predetermined road map, each with an associated indication(s) of error (initial location module 340 determines an initial location estimate based on the camera image; displacement module 360 determines the displacement between the locations of the road feature of the camera image and the road feature of the map image and determines the refined vehicle location 370 on the map image; Fig. 3, [0047]-[0050], [0054]-[0055]).
Gupta and Zang do not specifically disclose:
applying a filter to: (i) the vehicle location on the predetermined road map as determined from the structure matching and the error estimate determined therefor, and (ii) the one or more further vehicle location estimates and the indication(s) of error received therewith, in order to determine an overall vehicle location estimate on the predetermined road map.
However, Turgay discloses:
applying a filter to: (i) the vehicle location on the predetermined road map as determined from the structure matching and the error estimate determined therefor (performs particle filtering, where each particle holds a hypothesis of the vehicle location; 3.5), and (ii) the one or more further vehicle location estimates and the indication(s) of error received therewith, in order to determine an overall vehicle location estimate on the predetermined road map (the most probable location; 3.5).
Gupta, Zang and Turgay are considered to be analogous because they are in the same field of vehicle localization. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gupta and Zang's vehicle localization to further incorporate Turgay's particle filter for the advantage of determining the most probable particle/location, which results in a final location/pose of the vehicle (Turgay's 3.5).
Allowable Subject Matter
Claims 57-58 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAYSUN WU whose telephone number is (571)272-1528. The examiner can normally be reached Monday-Friday 8AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hunter Lonsberry can be reached on (571)272-7298. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PAYSUN WU/Examiner, Art Unit 3665
/DONALD J WALLACE/Primary Examiner, Art Unit 3665