DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office action is in reply to Applicant's filing of 11/03/2025.
Claims 1, 12, and 13 were amended by Applicant.
Claims 2 – 11 and 14 – 20 remain as originally filed.
Claims 1 – 20 are currently pending and have been examined.
The prior 35 USC 101 claim rejections set forth in the Non-Final rejection of 08/01/2025 as to all claims are withdrawn in view of Applicant's arguments and amendments.
The prior 35 USC 103 claim rejections set forth in the Non-Final rejection of 08/01/2025 as to claims 1 – 3, 5, 10 – 14, 16, and 20 are maintained notwithstanding Applicant's arguments and amendments.
THIS ACTION IS MADE FINAL.
Response to Arguments
There are no new grounds of rejection herein as to any of the claims.
Applicant has now substantially amended the initial claims of 12/08/2023, resulting in the claims of 11/03/2025. The independent claims initially recited an autonomous vehicle determining a map via GPS signals, correlating that map to local objects via vehicle sensors so as to enhance location information, and then driving on. The claims as amended (11/03/2025) recite detecting an express, distance-related error in GPS location data by comparing that GPS location with locally known object location data on a map, and then controlling autonomous driving of the vehicle within the detected error. This substantial change has necessitated, in part, new art that better represents the claims as amended, and the claims have therefore been remapped in part with that art.
Given the above, and given that the 35 USC 101 rejection has been withdrawn, Applicant’s arguments herein pertaining to 35 USC 101 and 103 are moot.
Generally as to obviousness, examiner submits that it is determined on the basis of the evidence as a whole and the relative persuasiveness of the arguments. See In re Oetiker, 977 F.2d 1443, 1445, 24 USPQ2d 1443, 1444 (Fed. Cir. 1992); In re Hedges, 783 F.2d 1038, 1039, 228 USPQ 685, 686 (Fed. Cir. 1986); In re Piasecki, 745 F.2d 1468, 1472, 223 USPQ 785, 788 (Fed. Cir. 1984); and In re Rinehart, 531 F.2d 1048, 1052, 189 USPQ 143, 147 (CCPA 1976). Using this standard, examiner submits that the burden of presenting a prima facie case of obviousness was successfully established in the prior Office Action of 08/01/2025, and also respecting the pending amended claim set of 11/03/2025, as seen below.
Examiner recognizes that references cannot be arbitrarily altered or modified, and that there must be some reason why a person having ordinary skill in the relevant art would be motivated to make the proposed modifications. Although the motivation or suggestion to make modifications must be articulated, it is respectfully submitted that there is no requirement that the motivation to make modifications must be expressly articulated within the references themselves. References are evaluated by what they suggest to one versed in the art, rather than by their specific disclosures. In re Bozek, 163 USPQ 545 (CCPA 1969).
Examiner also notes that the motivation to combine the applied references is, where appropriate in the below detailed analysis pursuant to 35 USC 103, additionally accompanied by select passages from the respective references which specifically support that particular motivation. It is also respectfully submitted that motivation based on the logic and scientific reasoning of one ordinarily skilled in the art at the time of the invention, which evidence can also support a finding of obviousness, is otherwise provided in the detailed 35 USC 103 analysis of the claim set below. In re Nilssen, 851 F.2d 1401, 1403, 7 USPQ2d 1500, 1502 (Fed. Cir. 1988) (references do not have to explicitly suggest combining teachings); Ex parte Clapp, 227 USPQ 972 (Bd. Pat. App. & Inter. 1985) (examiner must present convincing line of reasoning supporting rejection); and Ex parte Levengood, 28 USPQ2d 1300 (Bd. Pat. App. & Inter. 1993) (reliance on logic and sound scientific reasoning).
Examiner recognizes that obviousness can only be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to a person of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988) and In re Jones, 958 F.2d 347 (Fed. Cir. 1992).
Claim Rejections – 35 USC 103
In the event the determination of the status of the application as subject to AIA 35 USC 102 and 103 is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 USC 103 which forms the basis for all obviousness rejections set forth in this Office Action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 USC 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 – 3, 5, 10 – 14, 16, and 20 are rejected pursuant to 35 USC 103 as being unpatentable over Giorgio (US20210131823A1) in view of Goldman (US20220215603A1).
Regarding claims 1, 11, and 12:
Giorgio discloses:
A method performed by an apparatus of a vehicle, the method comprising:
receiving, by a processor of the vehicle, map information associated with an initial global navigation satellite system (GNSS) location; (“In one or more embodiments, one or more of those stages may be integrated in the form of a multi-functional stage and/or circuit, e.g. in a single processor or DSP.”, [0245]) and (“Alternatively, the i-th sensor/source may also provide as reading directly a grid map Gi, such as it may be the case when acquisition of a GIS/GPS map from a remote sensing source is involved.”, [0057]);
generating, based on the map information associated with the initial GNSS location, a first grid map; (“Multi-sensor data fusion may facilitate combining information from different sources in order to form a unified picture, for instance by providing fused occupancy grid maps representing an environment based on data from multiple sources.”, [004]) and (“A grid map may facilitate providing to the drive assistance system information about the presence of obstacles and their space occupancy in the surrounding mapped environment.”, [012]) and (“Occupancy grid maps are well suited for path planning and obstacle avoidance tasks. Conversely, feature-based maps may be well suited for localization purposes, where relative pose of the objects may be required for accurate estimation of vehicle self-position and orientation.”, [009]), a grid map based on received GPS location data may be generated;
generating, based on sensing information from a sensor, a second grid map; (“Multiple grid maps may be generated as a function of individual sensors data, providing correlated information about the environment.”, [015]) and (“Fusing the multiple occupancy grid maps into a unified coherent map combining (e.g. by data fusion) the information of all sensors with high precision and accuracy is an object of the present description, especially with respect to, e.g., drive assistance systems and methods for vehicles or automated vehicles.”, [016]);
Giorgio does not expressly disclose, but Goldman teaches:
detecting an error of the initial GNSS location based on a comparison of the first grid map and the second grid map; and (“The localization of vehicle may be corrected or adjusted by image observations of landmarks. For example, when vehicle detects a landmark within an image captured by the camera, the landmark may be compared to a known landmark stored within the road model or sparse map 800. The known landmark may have a known location (e.g., GPS data) along a target trajectory stored in the road model and/or sparse map 800. Based on the current speed and images of the landmark, the distance from the vehicle to the landmark may be estimated.”, [0331]) and (“FIG. 14 illustrates raw location data 1410 (e.g., GPS data) received from five separate drives. … To account for errors in the location data 1410 (e.g., the GPS data) and for differing locations of vehicles within the same lane … server 1230 may generate a map skeleton 1420 using one or more statistical techniques to determine whether variations in the raw location data 1410 represent actual divergences or statistical errors.”, [0299]), GPS location errors may be detected and may also be compared to local sensor obtained data (e.g., landmarks) from the vehicle;
controlling, based on an initial location of the vehicle derived based on the detected error, autonomous driving of the vehicle. (“Points in the map may be referenced relative to an initial map origin.”, [0010]) and (“A processor (e.g., processing unit 110) provided on vehicle 200 may receive data included in sparse map 800 from the remote server and may execute the data for guiding the autonomous driving of vehicle 200.”, [0215]) and (“FIG. 2F is a diagrammatic representation of exemplary vehicle control systems, consistent with the disclosed embodiments. As indicated in FIG. 2F, vehicle 200 may include throttling system 220, braking system 230, and steering system 240. System 100 may provide inputs (e.g., control signals) to one or more of throttling system 220, braking system 230, and steering system 240 over one or more data links (e.g., any wired and/or wireless link or links for transmitting data). For example, based on analysis of images acquired by image capture devices 122, 124, and/or 126, system 100 may provide control signals to one or more of throttling system 220, braking system 230, and steering system 240 to navigate vehicle 200 (e.g., by causing an acceleration, a turn, a lane shift, etc.).”, [0139]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Giorgio to incorporate the teachings of Goldman, because Giorgio would be more complete and efficient if it also compared errors between GPS-obtained data and local sensor-derived vehicle data in its efforts to precisely control the vehicle, as done in Goldman (“At a later time, during navigation, a navigating vehicle may capture an image that includes a representation of the landmark, process the image (e.g., using a classifier), and compare the result landmark in order to confirm detection of the mapped landmark and to use the mapped landmark in localizing the navigating vehicle relative to the sparse map.”, [0276]). Note that GPS data in Goldman may be compared to local sensor-derived vehicle data, and that Goldman may also control the vehicle (see [0331], expressly mapped above).
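For illustration only, and forming no part of the grounds of rejection, the comparison technique mapped above (detecting a GNSS location error by comparing a map-derived grid with a sensor-derived grid) might be sketched as follows. All identifiers, the dictionary grid representation, and the brute-force shift search are assumptions, not the disclosure of either reference.

```python
# Illustrative sketch: detect a GNSS position error by finding the shift
# that best aligns a sensor-derived grid map with a map-derived grid map.
# Grids are dicts mapping (x, y) cells to occupancy (True/False).

def best_shift(map_grid, sensor_grid, max_shift=2):
    """Return the (dx, dy) shift of sensor_grid that best matches map_grid.

    The returned shift represents the detected GNSS error, in grid cells.
    """
    best = (0, 0)
    best_score = -1
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            # Count cells that agree after shifting the sensor grid.
            score = sum(
                1
                for (x, y), occ in sensor_grid.items()
                if map_grid.get((x + dx, y + dy)) == occ
            )
            if score > best_score:
                best_score, best = score, (dx, dy)
    return best

# The map places an obstacle at cells (3,3)-(3,4); the sensors see the
# same obstacle one cell to the left, implying a +1 cell error in x.
map_grid = {(3, 3): True, (3, 4): True}
sensor_grid = {(2, 3): True, (2, 4): True}
error = best_shift(map_grid, sensor_grid)
print(error)  # (1, 0): the GNSS fix is off by one cell in x
```

Once such an error is detected, the corrected location (and hence the control of autonomous driving "within the detected error") follows from applying the shift to the initial GNSS fix.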
Regarding claims 2 and 13:
The combination of Giorgio and Goldman discloses all limitations of claims 1 and 12, respectively:
Goldman further discloses:
generating the first grid map comprises: setting a heading search range of angles for the vehicle in the map information; (“At step 576, processing unit 110 may determine a heading error and yaw rate command based on the look-ahead point determined at step 574. Processing unit 110 may determine the heading error by calculating the arctangent of the look-ahead point, e.g., arctan (xi/zi). Processing unit 110 may determine the yaw rate command as the product of the heading error and a high-level control gain. The high-level control gain may be equal to: (2/look-ahead time), if the look-ahead distance is not at the lower bound. Otherwise, the high-level control gain may be equal to: (2*speed of vehicle 200/look-ahead distance).”, [0180]) and (“FIG. 21 illustrates a block diagram of memory 2015, which may store computer code or instructions for performing one or more operations for generating a road navigation model for use in autonomous vehicle navigation.”, [0319]), broadly interpreted, a heading search range of angles vis-à-vis generating a grid map (which map contains, as above, at least a look-ahead point and a heading) can be set;
detecting a lane within the heading search range of angles; (“FIG. 5C is a flowchart showing an exemplary process for detecting road marks and/or lane geometry information in a set of images consistent with the disclosed embodiments.”, [0029], and see Fig. 5C) and (“FIGS. 24A, 24B, 24C, and 24D illustrate exemplary lane marks that may be detected consistent with the disclosed embodiments.”, [0054]);
determining, based on the detected lane, heading information associated with the detected lane; and (“Such landmarks may be used, for example, to assist an autonomous vehicle in determining its current location relative to any of the shown target trajectories, such that the vehicle may adjust its heading to match a direction of the target trajectory at the determined location.”, [0253]);
generating a heading candidate group for the vehicle based on the heading information of the detected lane. (“To detect segments of lane markings, lane geometry information, and other pertinent road marks, processing unit 110 may filter the set of objects to exclude those determined to be irrelevant (e.g., minor potholes, small rocks, etc.). At step 552, processing unit 110 may group together the segments detected in step 550 belonging to the same road mark or lane mark. Based on the grouping, processing unit 110 may develop a model to represent the detected segments, such as a mathematical model.”, [0170]).
The motivation to combine Giorgio and Goldman is the same as that set forth above regarding claims 1, 11, and 12.
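For illustration only, and forming no part of the grounds of rejection, the heading-candidate generation recited by claims 2 and 13 (and further limited by claims 3 and 14 to units of a first angle within the search range) might be sketched as follows. The function name, the angular values, and the fixed-step enumeration are assumptions, not the disclosure of either reference.

```python
# Illustrative sketch: generate a heading candidate group for the vehicle
# around a detected lane's heading, in units of a fixed step angle within
# a heading search range.

def heading_candidates(lane_heading_deg, search_range_deg=10.0, step_deg=2.0):
    """Return candidate vehicle headings (degrees) around the lane heading."""
    candidates = []
    offset = -search_range_deg
    while offset <= search_range_deg:
        # Wrap into [0, 360) so candidates remain valid compass headings.
        candidates.append((lane_heading_deg + offset) % 360.0)
        offset += step_deg
    return candidates

# A lane detected at 90 degrees, with a +/-10 degree search range in
# 2-degree units, yields 11 candidates from 80 to 100 degrees.
group = heading_candidates(90.0)
print(len(group), group[0], group[-1])  # 11 80.0 100.0
```

Each candidate could then be scored against the map-derived lane heading to select the vehicle heading used in generating the first grid map.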
Regarding claims 3 and 14:
The combination of Giorgio and Goldman discloses all limitations of claims 2 and 13, respectively:
Goldman further discloses:
the heading candidate group for the vehicle comprises at least one heading candidate for the vehicle; and (“At step 552, processing unit 110 may group together the segments detected in step 550 belonging to the same road mark or lane mark. Based on the grouping, processing unit 110 may develop a model to represent the detected segments, such as a mathematical model.”, [0170]);
the at least one heading candidate for the vehicle is generated in units of a first angle within the heading search range of angles. (“For example, if analysis shows an image location of sign 2566 that is displaced in the image by a distance 2572 to the left of the expected image space location on line 2567, then the navigation processor may cause a heading change by the host vehicle (e.g., change the steering angle of the wheels) to move the host vehicle leftward by a distance 2573. In this way, each captured image can be used as part of a feedback loop process such that a difference between an observed image position of sign 2566 and expected image trajectory 2567 may be minimized to ensure that the host vehicle continues along target trajectory 2565 with little to no deviation.”, [0361]).
The motivation to combine Giorgio and Goldman is the same as that set forth above regarding claims 1, 11, and 12.
Regarding claims 5 and 16:
The combination of Giorgio and Goldman discloses all limitations of claims 2 and 13, respectively:
Goldman further discloses:
the heading candidate group for the vehicle comprises at least one heading candidate for the vehicle; and (“At step 552, processing unit 110 may group together the segments detected in step 550 belonging to the same road mark or lane mark. Based on the grouping, processing unit 110 may develop a model to represent the detected segments, such as a mathematical model.”, [0170])
the at least one heading candidate for the vehicle is generated in units of a second angle within a range of a heading rotation angle of the vehicle set based on the initial GNSS location. Broadly interpreted, this limitation includes steering the AV with a second angle (“For example, if analysis shows an image location of sign 2566 that is displaced in the image by a distance 2572 to the left of the expected image space location on line 2567, then the navigation processor may cause a heading change by the host vehicle (e.g., change the steering angle of the wheels) to move the host vehicle leftward by a distance 2573. In this way, each captured image can be used as part of a feedback loop process such that a difference between an observed image position of sign 2566 and expected image trajectory 2567 may be minimized to ensure that the host vehicle continues along target trajectory 2565 with little to no deviation.”, [0361]).
The motivation to combine Giorgio and Goldman is the same as that set forth above regarding claims 1, 11, and 12.
Regarding claims 10 and 20:
The combination of Giorgio and Goldman discloses all limitations of claims 1 and 12, respectively:
Goldman further discloses:
wherein generating the second grid map comprises: accumulating the sensing information at least once under control of the processor; and generating the second grid map based on the accumulated sensing information. Broadly interpreted, this again involves generating a map based on sensor information (“Generating the mapped lane marks in the sparse map may also include detecting and/or mitigating errors based on anomalies in the images or in the actual lane marks themselves. FIG. 24F shows an exemplary anomaly 2495 associated with detecting a lane mark 2490. Anomaly 2495 may appear in the image captured by vehicle 200, for example, from an object obstructing the camera's view of the lane mark, debris on the lens, etc.”, [0352]) and (“The at least one vehicle sensor may include one or more cameras configured to capture images of an environment of the host vehicle. The received output may include at least one image captured by the one or more cameras. The instructions may also cause the at least one electronic horizon processor to localize the host vehicle relative to the map based on analysis of the at least one image captured by the one or more cameras.”, [0005]), i.e., a “second” (or “local”) map may be generated with the vehicle's local sensor information.
The motivation to combine Giorgio and Goldman is the same as that set forth above regarding claims 1, 11, and 12.
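For illustration only, and forming no part of the grounds of rejection, the accumulation of sensing information recited by claims 10 and 20 (accumulating sensor readings before generating the second grid map) might be sketched as follows. The hit-count threshold used for noise rejection, and all identifiers, are assumptions, not the disclosure of either reference.

```python
# Illustrative sketch: accumulate sensing information over several scans,
# then generate a "second" (local) grid map from the accumulated data.
from collections import Counter

def accumulate(scans):
    """Count how many scans reported each cell as occupied."""
    counts = Counter()
    for scan in scans:
        counts.update(scan)
    return counts

def to_grid(counts, min_hits=2):
    """Keep only cells seen in at least min_hits scans (rejects transients)."""
    return {cell for cell, n in counts.items() if n >= min_hits}

scans = [
    {(1, 1), (2, 2)},  # scan 1
    {(1, 1), (5, 5)},  # scan 2: (5, 5) is transient noise
    {(1, 1), (2, 2)},  # scan 3
]
grid = to_grid(accumulate(scans))
print(sorted(grid))  # [(1, 1), (2, 2)]
```

Accumulating over multiple scans in this way is one conventional reason a skilled artisan would gather sensing information "at least once" before map generation: it suppresses single-scan artifacts such as the lens debris and occlusions discussed in Goldman at [0352].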
Allowable Subject Matter
Claims 4, 6 – 9, 15, and 17 – 19 would be allowable if rewritten in independent form, including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: while the limitations of these claims, as most recently set forth, may individually be disclosed by the prior art, the claims as a whole are not obvious because the examiner would have to improperly use the claims themselves as a road map to combine their separate limitations.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The following prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please see attached form 892.
Anirudh (US20200174487A1) – An approach is provided for localizing a vehicle pose on a map. The approach involves, receiving an input specifying the vehicle pose with respect to a road lane of the map. The approach also involves searching over a set of candidate lateral offsets to select a lateral offset that minimizes a lateral error between the vehicle position with the lateral offset applied and a lateral location of the road lane, wherein the lateral location and the travel direction of the lane are determined from the map. The approach further involves searching over a set of candidate vehicle headings at the selected lateral offset to select a vehicle heading that minimizes a heading error. The approach further involves determining a local optimum of the vehicle pose based on the selected lateral offset and vehicle heading, wherein the vehicle pose is localized to the map based on the local optimum.
Dae-Sung (US20220147744A1) – An apparatus for recognizing a driving lane based on multiple sensors is provided. The apparatus includes a first sensor configured to calculate road information, a second sensor configured to calculate moving obstacle information, a third sensor configured to calculate movement information of a vehicle, and a controller configured to remove the moving obstacle information from the road information to extract only road boundary data, accumulate the road boundary data to calculate a plurality of candidate location information on the vehicle based on the movement information, and select final candidate location information from the plurality of candidate location information.
Moskowitz (US20210063162A1) – Systems and methods are provided for vehicle navigation. In one implementation, a navigation system for a vehicle may comprise at least one processor. The at least one processor may be programmed to receive, from at least one sensor of the vehicle, information captured from an environment of the vehicle and determine, based on the information, a first position of the vehicle relative to a road navigation model. The at least one processor may further determine, based on at least one signal received from a satellite, a second position of the vehicle and determine, based on a comparison of the first position and the second position, error information associated with the second position. The at least one processor may cause a transmission of the error information to a server.
Taliwal (US20050278095A1) – A method for determining a lane change by a vehicle includes determining a vehicle heading using, for example, GPS information, determining a road heading at a location of the vehicle, and, when the vehicle is on a multilane road, determining a lane change as a function of a heading difference between the vehicle heading and the road heading. A vehicle with a lane change determination device is also provided.
Yang (US20180189578A1) – An HD map system represents landmarks on a high definition map for autonomous vehicle navigation, including describing spatial location of lanes of a road and semantic information about each lane, and along with traffic signs and landmarks. The system generates lane lines designating lanes of roads based on, for example, mapping of camera image pixels with high probability of being on lane lines into a three-dimensional space, and locating/connecting center lines of the lane lines. The system builds a large connected network of lane elements and their connections as a lane element graph. The system also represents traffic signs based on camera images and detection and ranging sensor depth maps. These landmarks are used in building a high definition map that allows autonomous vehicles to safely navigate through their environments.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW COBB whose telephone number is (571) 272-3850. The examiner can normally be reached 9 - 5, M - F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to call examiner Cobb as above, or to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Peter Nolan, can be reached at (571) 270-7016. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call (800) 786-9199 (IN USA OR CANADA) or (571) 272-1000.
/MATTHEW COBB/Examiner, Art Unit 3661
/PETER D NOLAN/Supervisory Patent Examiner, Art Unit 3661