Prosecution Insights
Last updated: April 19, 2026
Application No. 17/542,164

AUTOMATIC BOOTSTRAP FOR AUTONOMOUS VEHICLE LOCALIZATION

Non-Final OA §103, §112
Filed
Dec 03, 2021
Examiner
KHUU, IRENE C
Art Unit
3664
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Ford Global Technologies LLC
OA Round
3 (Non-Final)
47%
Grant Probability
Moderate
3-4
OA Rounds
3y 2m
To Grant
99%
With Interview

Examiner Intelligence

Grants 47% of resolved cases
47%
Career Allow Rate
7 granted / 15 resolved
-5.3% vs TC avg
Strong +89% interview lift
+88.9%
Interview Lift
with vs. without interview, among resolved cases
Typical timeline
3y 2m
Avg Prosecution
23 currently pending
Career history
38
Total Applications
across all art units
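The headline figures above can be reproduced from the career counts shown, assuming the Career Allow Rate is simply granted divided by resolved cases and the "vs TC avg" delta is an absolute percentage-point difference (both are assumptions about this page's formulas, not documented definitions):

    granted, resolved = 7, 15
    allow_rate = granted / resolved         # 0.4667, displayed as 47%
    implied_tc_avg = allow_rate + 0.053     # the 47% is shown as 5.3 points below the TC average
    print(f"Career allow rate:   {allow_rate:.1%}")     # 46.7%
    print(f"Implied TC 3600 avg: {implied_tc_avg:.1%}") # ~52.0%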

Statute-Specific Performance

§101: 15.7% (-24.3% vs TC avg)
§103: 44.9% (+4.9% vs TC avg)
§102: 10.8% (-29.2% vs TC avg)
§112: 24.8% (-15.2% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 15 resolved cases

Office Action

§103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This non-final rejection is in response to Applicant's amendment of 10 September 2025. Claims 1-20 are currently pending, as discussed below. Examiner notes that the rejections are fundamentally based on the broadest reasonable interpretation of the claim language. Applicant is kindly invited to consider the reference as a whole. References are to be interpreted as by one of ordinary skill in the art rather than as by a novice. See MPEP 2141. Therefore, the relevant inquiry when interpreting a reference is not what the reference expressly discloses on its face but what the reference would teach or suggest to one of ordinary skill in the art.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 19 February 2025 has been entered.

Response to Arguments

Applicant's arguments filed 10 September 2025 have been fully considered and are persuasive in part. Amendments to claims 1, 8 and 15 have been fully considered and are persuasive, so the rejections of claims 1, 8 and 15 under 35 U.S.C. § 112(b) have been withdrawn. Arguments regarding the interpretation of "a Light Detection and Ranging (lidar) apparatus" under 35 U.S.C. 112(f) have been fully considered and are persuasive; the 35 U.S.C. 112(f) interpretation of "a Light Detection and Ranging (lidar) apparatus" in claim 8 is withdrawn. Regarding interpretation under 35 U.S.C. 112(f), "a computing device" includes a generic placeholder, is followed by the functional language "generate an initial pose estimate of an autonomous vehicle (AV)", and the word "computing" does not provide sufficient structure under Prong C of MPEP § 2181; therefore, the interpretation of "a computing device" under 35 U.S.C. 112(f) is sustained.

Examiner's Response: Examiner has carefully considered Applicant's arguments and respectfully disagrees. Regarding the limitation "wherein the reference map includes at least one ground height indicative of a z coordinate, and at least one ground plane normal indicative of a roll angle and a pitch angle", Examiner cites ¶28 and ¶29 of Wang, which teach a ground height indicative of a z coordinate: the Lidar intensity map may include a height map containing the height of each point in the intensity image with respect to a coordinate frame described by Cartesian coordinates. Wang also teaches a ground plane normal, i.e., the dimensions used to describe the attitude of the AV system in terms of yaw, pitch and roll. Furthermore, ¶29 of Wang teaches the limitation "generating position candidates of the AV based on the reference map, wherein the position candidates include x coordinates and y coordinates." Examiner has carefully considered Applicant's arguments and agrees that neither Wang nor Sarkar teaches the amended limitation "bootstrapping the AV by transitioning an operating state of the AV from a running state in which autonomous driving is not allowed to a localized state in which autonomous driving is allowed based on the initial pose when the AV is stationary".
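For orientation only, here is a minimal sketch of the amended bootstrapping limitation quoted above: the transition from a running state (autonomous driving not allowed) to a localized state (autonomous driving allowed) is gated on an initial pose obtained while the AV is stationary. All names, types and the stationarity threshold below are illustrative assumptions; this is not the Applicant's, Wang's or Frick's implementation.

    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Optional

    class OperatingState(Enum):
        RUNNING = auto()    # localization not established; autonomous driving not allowed
        LOCALIZED = auto()  # initial pose accepted; autonomous driving allowed

    @dataclass
    class Pose:
        x: float
        y: float
        z: float      # ground height taken from the reference map
        roll: float   # derived from the ground plane normal
        pitch: float  # derived from the ground plane normal
        yaw: float

    def bootstrap(state: OperatingState, initial_pose: Optional[Pose],
                  speed_mps: float, stationary_eps: float = 0.05) -> OperatingState:
        """Allow RUNNING -> LOCALIZED only when the AV is stationary and an
        initial pose has been determined (hypothetical gating logic)."""
        stationary = abs(speed_mps) < stationary_eps
        if state is OperatingState.RUNNING and stationary and initial_pose is not None:
            return OperatingState.LOCALIZED
        return state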
The 35 U.S.C. 103 rejection of claims 1-20 set forth in the Office action of 09 June 2025 is withdrawn as moot in view of the new obviousness rejection necessitated by the amendments. Applicant recites that Examiner has interpreted "initial pose estimate including a reference map" incorrectly and that the above feature should be interpreted by the plain and ordinary meaning of the recitation. Applicant has not further explained or amended the language to clarify the record. It is unclear to the Examiner what is meant by "initial pose estimate including a reference map" in the context of the claims. For example, a pose estimate is typically overlaid on a reference map, but it is unclear how a pose, which consists of positional and orientation data, can "include" a reference map. It is unclear whether the plain meaning of "include" is (i) "comprise or contain as part of a whole" (synonyms: incorporate, comprise, encompass, cover, contain) or (ii) "make part of a whole or set" (synonyms: add, insert, put in, append).

Examiner has fully considered the argument regarding claim 5 that there is no motivation to combine Wang and Sarkar with Lawlor and Chen because Lawlor and Chen have nothing to do with "validating an initial pose". Examiner respectfully disagrees because Lawlor is directed to predicting a pose error applied to driving automated vehicles (abstract and ¶29, Lawlor) and Chen is directed to alignment of three-dimensional representations of data captured by autonomous vehicles (¶2, Chen). These references teach validating an initial pose and are applied to the navigation of autonomous vehicles, and the combination is appropriate.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and (C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: a computing device in claim 8.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Upon review of the specification, the following appears to be the corresponding structure for a computing device: "The vehicle on-board computing device 220 may be implemented using the computer system of FIG. 9", [¶ 37, Fig. 9].

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding claims 1, 8 and 15, it is unclear to the Examiner what is meant by "initial pose estimate including a reference map" in the context of the claims. For example, a pose estimate is typically overlaid on a reference map, but it is unclear how a pose, which consists of positional and orientation data, can "include" a reference map. It is unclear whether the plain meaning of "include" is (i) "comprise or contain as part of a whole" (synonyms: incorporate, comprise, encompass, cover, contain) or (ii) "make part of a whole or set" (synonyms: add, insert, put in, append).

Claims 2-7, 9-14, and 16-20 are rejected as being dependent on a rejected claim. Claims depending from the claims expressly noted above are also rejected under 35 U.S.C. 112 by reason of their dependency from a claim rejected under 35 U.S.C. 112, for the reasons given.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3, 8, 10, 15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (US 20190383945 A1) in view of Frick et al. (US 20230020033 A1).
Regarding claim 1, Wang teaches a computer-implemented method comprising: generating, by one or more computing devices of an autonomous vehicle (AV) (the vehicle computing system 102, see at least [¶ 18, Fig. 1, Wang]), an initial pose estimate of the AV from Global Positioning System (GPS) data (the positioning system 120 can determine position by using a satellite positioning system, see at least [¶ 24, Wang]) and map data (the localization system 106 receives the map data 124, see at least [¶ 26-31, Wang]), the initial pose estimate including a reference map (the localization system 106 includes a Lidar localizer 122 that is configured to generate pose estimates based on a comparison of Lidar intensity maps, which is interpreted as a reference map, see at least [¶ 29-30, Wang]), wherein the reference map includes at least one ground height indicative of a z coordinate (the Lidar intensity map may include a height map containing the height of each point in the intensity image with respect to Cartesian coordinates, which must contain a z coordinate, see at least [¶ 28-29, Wang]), and at least one ground plane normal indicative of a roll angle and a pitch angle (the attitude describes roll, pitch and yaw angles, see at least [¶ 29, Wang]); generating, by the one or more computing devices, an initial pose of the AV from the initial pose estimate (Fig. 4: operations 425-445 describe the process of selecting a vehicle pose in block 445 from a plurality of vehicle poses generated in block 425, see at least [¶ 64-69, Fig. 4, Wang]), the generating the initial pose of the AV comprising: performing a Light Detection and Ranging (lidar) sweep to generate lidar data (Lidar system 118 generates a point cloud from a single rotation or "sweep" of the array channels, see at least [¶ 39, Fig. 1, Wang]), generating yaw angle candidates of the AV based on a correlation between the lidar data and the reference map, generating position candidates of the AV based on the reference map, wherein the position candidates include x coordinates and y coordinates (the Lidar localizer 122 performs high-precision localization against pre-constructed Lidar intensity maps (reference map), where pose candidates consist of a 2D translation (x, y coordinates) and a heading angle, see at least [¶ 42, Fig. 3, Wang]), combining the position candidates and the yaw angle candidates to generate a list of raw candidates (pose candidates include both position and yaw combined, see at least [¶ 29, Wang]), and performing a search operation on the raw candidates to determine the initial pose of the AV (block 445 of Fig. 4 depicts that the Lidar localizer 122 determines a vehicle pose (initial pose) based on the localization score array, which contains the raw pose candidates, see at least [¶ 65-69, Fig. 4, Wang]); and responsive to the operating state in the localized state, perform, by the one or more computing devices, autonomous driving operations to the AV (Fig. 4 depicts a method of localizing or bootstrapping an AV; once the vehicle is localized in block 445, the method performs autonomous driving operations on the AV based on the determined vehicle pose or localized state, see at least [¶ 70, Fig. 4, Wang]). Wang does not explicitly teach bootstrapping the AV by transitioning an operating state of the AV from a running state in which autonomous driving is not allowed to a localized state in which autonomous driving is allowed based on the initial pose when the AV is stationary.
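As a reading aid for the claim 1 mapping above, the following is a minimal sketch of the claimed candidate-generation-and-search flow as characterized in this rejection: yaw angle candidates and (x, y) position candidates are generated against a reference map, combined into raw candidates, and searched to pick the initial pose. The scoring rule, grid spacing and uniform yaw sampling are placeholder assumptions; this is neither Wang's algorithm nor the Applicant's disclosed implementation.

    import itertools
    import math

    def generate_yaw_candidates(n: int = 8) -> list:
        # Placeholder: a real system would derive these from a correlation
        # between the lidar sweep and the reference map; here we just sample
        # headings uniformly over [-pi, pi).
        return [-math.pi + 2.0 * math.pi * k / n for k in range(n)]

    def generate_position_candidates(gps_x: float, gps_y: float,
                                     radius_m: float = 10.0, step_m: float = 2.0) -> list:
        # Grid of (x, y) candidates around the GPS estimate, sized by an
        # assumed ~10 m horizontal GPS accuracy.
        steps = int(radius_m / step_m)
        offsets = [i * step_m for i in range(-steps, steps + 1)]
        return [(gps_x + dx, gps_y + dy) for dx in offsets for dy in offsets]

    def search_initial_pose(gps_x, gps_y, lidar_data, reference_map):
        yaw_candidates = generate_yaw_candidates()
        position_candidates = generate_position_candidates(gps_x, gps_y)
        # Combine position and yaw candidates into a list of raw candidates.
        raw_candidates = list(itertools.product(position_candidates, yaw_candidates))

        def score(candidate):
            # Placeholder match score; a real localizer would score how well
            # the lidar sweep matches reference_map at this candidate pose.
            (x, y), _yaw = candidate
            return -math.hypot(x - gps_x, y - gps_y)

        # Search the raw candidates for the best-scoring initial pose.
        (x, y), yaw = max(raw_candidates, key=score)
        return x, y, yaw

    # Hypothetical usage:
    # x, y, yaw = search_initial_pose(100.0, 250.0, lidar_data=None, reference_map=None)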
Frick, directed to autonomous machine navigation, teaches bootstrapping the AV by transitioning an operating state of the AV from a running state in which autonomous driving is not allowed to a localized state in which autonomous driving is allowed based on the initial pose when the AV is stationary (the running state is a state in which the machine determines that the current position and/or orientation is not localized; the machine stops autonomous movement (stationary) and re-localizes (bootstrapping) based on the initial pose while stopped, and resumes autonomous movement when it is in a localized state, see at least [¶ 117, 123, 135, Frick]). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified Wang's method of localizing a vehicle, which teaches a running state, to incorporate the teachings of Frick, which teaches bootstrapping the AV by transitioning an operating state of the AV from a running state in which autonomous driving is not allowed to a localized state in which autonomous driving is allowed based on the initial pose when the AV is stationary, since both Wang and Frick relate to localizing an AV, and incorporation of the teachings of Frick would facilitate autonomous machine operation in a wider range of times of day and in different lighting conditions.

Regarding claim 8, Wang teaches a system comprising: a Light Detection and Ranging (lidar) apparatus configured to perform a lidar sweep to generate lidar data (Lidar system 118 contains an array of channels 201 which may sweep to create a point cloud that corresponds to a 3D representation of the surrounding environment, see at least [¶ 39, Fig. 1, Wang]); and a computing device configured to (Fig. 1 depicts the vehicle computing system 102, see at least [¶ 18, Fig. 1, Wang]): generate an initial pose estimate of an autonomous vehicle (AV) from Global Positioning System (GPS) data (the positioning system 120 can determine position by using a satellite positioning system, see at least [¶ 24, Wang]), the initial pose estimate including a reference map (the localization system 106 includes a Lidar localizer 122 that is configured to generate pose estimates based on a comparison of Lidar intensity maps, which is interpreted as a reference map, see at least [¶ 29-30, Wang]), wherein the reference map includes at least one ground height indicative of a z coordinate (the Lidar intensity map (reference map) may include a height map containing the height of each point in the intensity image with respect to Cartesian coordinates, which must contain a z coordinate, see at least [¶ 28-29, Wang]); generate an initial pose of the AV from the initial pose estimate (block 445 of Fig. 4 depicts that the Lidar localizer 122 determines a vehicle pose (initial pose) based on the localization score array, which contains raw pose candidates (initial pose estimate), see at least [¶ 65-69, Fig. 4, Wang]); generate yaw angle candidates of the AV based on a correlation between the lidar data and the reference map; generate position candidates of the AV based on the reference map, wherein the position candidates include x coordinates and y coordinates (the Lidar localizer 122 performs high-precision localization against pre-constructed Lidar intensity maps (reference map), where pose candidates consist of a 2D translation (x, y coordinates) and a heading angle, see at least [¶ 42, Fig. 3, Wang]); combine the position candidates and the yaw angle candidates to generate a list of raw candidates (pose candidates include both position and yaw combined, see at least [¶ 29, Wang]); perform a search operation on the raw candidates to determine the initial pose of the AV (block 445 of Fig. 4 depicts that the Lidar localizer 122 determines a vehicle pose (initial pose) based on the localization score array, which contains raw pose candidates, see at least [¶ 65-69, Fig. 4, Wang]); bootstrap the AV based on the determined initial pose when the AV is stationary; and autonomously operate the AV by performing one or more driving operations (Fig. 4 depicts a method of localizing or bootstrapping an AV; once the vehicle is localized in block 445, the method performs autonomous driving operations on the AV based on the determined vehicle pose or localized state, see at least [¶ 70, Fig. 4, Wang]). Wang does not explicitly teach bootstrapping the AV based on the determined initial pose when the AV is stationary.

Frick, directed to autonomous machine navigation, teaches bootstrapping the AV based on the determined initial pose when the AV is stationary (the machine determines that the current position and/or orientation is not localized, stops autonomous movement (stationary) and re-localizes (bootstrapping) based on the initial pose while stopped; the machine resumes autonomous movement when it is in a localized state, see at least [¶ 117, 123, 135, Frick]). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified Wang's method of localizing a vehicle, which teaches a running state, to incorporate the teachings of Frick, which teaches bootstrapping the AV based on the determined initial pose when the AV is stationary, since both Wang and Frick relate to localizing an AV, and incorporation of the teachings of Frick would facilitate autonomous machine operation in a wider range of times of day and in different lighting conditions.
Regarding claim 15, Wang teaches a non-transitory computer-readable medium having instructions stored thereon (the memory 540 and instructions 516, see at least [¶ 73, Wang]) that, when executed by at least one computing device, cause the at least one computing device to perform operations comprising: generating an initial pose estimate of an autonomous vehicle (AV) from Global Positioning System (GPS) data (the positioning system 120 can determine position by using a satellite positioning system, see at least [¶ 24, Wang]), the initial pose estimate including a reference map (the localization system 106 includes a Lidar localizer 122 that is configured to generate pose estimates based on a comparison of Lidar intensity maps, which is interpreted as a reference map, see at least [¶ 29-30, Wang]), wherein the reference map includes at least one ground plane normal indicative of a roll angle and a pitch angle (the map data includes a Lidar intensity map (reference map), and vehicle poses generated for the AV system describe the position and attitude of the vehicle, where the attitude is a yaw about the vertical axis, a pitch about a first horizontal axis, and a roll about a second horizontal axis; the two horizontal axes describe a ground plane normal, see at least [¶ 28-29, Wang]); generating an initial pose of the AV from the initial pose estimate (Fig. 4: operations 425-445 describe the process of selecting a vehicle pose in block 445 from a plurality of vehicle poses generated in block 425, see at least [¶ 64-69, Fig. 4, Wang]), the generating the initial pose of the AV comprising: performing a Light Detection and Ranging (lidar) sweep to generate lidar data (Lidar system 118 generates a point cloud from a single rotation or "sweep" of the array channels, see at least [¶ 39, Fig. 1, Wang]), generating yaw angle candidates of the AV based on a correlation between the lidar data and the reference map, generating position candidates of the AV based on the reference map (the Lidar localizer 122 performs high-precision localization against pre-constructed Lidar intensity maps (reference map), where pose candidates consist of a 2D translation (x, y coordinates) and a heading angle, see at least [¶ 42, Fig. 3, Wang]), combining the position candidates and the yaw angle candidates to generate a list of raw candidates (pose candidates include both position and yaw combined, see at least [¶ 29, Wang]), and performing a search operation on the raw candidates to determine the initial pose of the AV (block 445 of Fig. 4 depicts that the Lidar localizer 122 determines a vehicle pose (initial pose) based on the localization score array, which contains raw pose candidates, see at least [¶ 65-69, Fig. 4, Wang]); bootstrapping the AV based on the determined initial pose, when the AV is stationary; and autonomously operating the AV by performing one or more driving operations (Fig. 4 depicts a method of localizing or bootstrapping an AV; once the vehicle is localized in block 445, the method performs autonomous driving operations on the AV based on the determined vehicle pose or localized state, see at least [¶ 70, Fig. 4, Wang]). Wang does not explicitly teach bootstrapping the AV based on the determined initial pose, when the AV is stationary.
Frick, directed to autonomous machine navigation, teaches bootstrapping the AV based on the determined initial pose, when the AV is stationary (the machine determines that the current position and/or orientation is not localized, stops autonomous movement (stationary) and re-localizes (bootstrapping) based on the initial pose while stopped; the machine resumes autonomous movement when it is in a localized state, see at least [¶ 117, 123, 135, Frick]). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified Wang's method of localizing a vehicle, which teaches a running state, to incorporate the teachings of Frick, which teaches bootstrapping the AV based on the determined initial pose when the AV is stationary, since both Wang and Frick relate to localizing an AV, and incorporation of the teachings of Frick would facilitate autonomous machine operation in a wider range of times of day and in different lighting conditions.

Regarding claims 3, 10 and 17, Wang in view of Frick teach the method of claim 1, further comprising (re-claim 3), the system of claim 8, wherein the computing device is further configured to (re-claim 10), and the non-transitory computer-readable medium of claim 15, wherein the operations further comprise (re-claim 17): Frick, directed to autonomous machine navigation, teaches issuing instructions to navigate the AV to a new location in response to a detected failure of the bootstrapping; and in response to the AV being moved to a different location, transitioning an operating state of the AV from a failed state to a running state (when the machine determines that the current position and/or orientation is not localized (failed state), it stops autonomous movement and takes various actions while stopped, such as making movements that do not significantly change location but may help in re-acquisition of position and/or orientation, or the machine may travel a short distance away to get a location update (running state), see at least [¶ 115, Frick]). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified Wang's method of localizing a vehicle, which teaches a running state, to incorporate the teachings of Frick, which teaches bootstrapping the AV based on the determined initial pose when the AV is stationary, since both Wang and Frick relate to localizing an AV, and incorporation of the teachings of Frick would facilitate autonomous machine operation in a wider range of times of day and in different lighting conditions.

Claims 2, 9 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (US 20190383945 A1) in view of Frick et al. (US 20230020033 A1) as applied to claims 1, 3, 8, 10, 15, and 17 and further in view of Duan et al. (US 20210331703 A1). Regarding claims 2, 9 and 16, Wang in view of Frick teach the method of claim 1, further comprising (re-claim 2), the system of claim 8, wherein the computing device is further configured to (re-claim 9), and the non-transitory computer-readable medium of claim 15, wherein the operations further comprise (re-claim 16): Wang in view of Frick does not explicitly teach transitioning an operating state of the AV from a not-ready state indicative of AV motion to a running state when a linear speed of the AV is below a predetermined threshold.
Duan, directed to techniques relating to monitoring map consistency, teaches transitioning an operating state of the AV from a not-ready state indicative of AV motion to a running state when a linear speed of the AV is below a predetermined threshold (the consistency output is monitored and the vehicle uses the consistency output to determine how to control the vehicle; if the consistency output indicates an inconsistency with the map data (not-ready state), the computing device causes the vehicle to decelerate and travel at a velocity below a threshold until the inconsistency is resolved (running state), see at least [¶ 29, Duan]). Additionally, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to have further modified the invention of Wang and Frick to incorporate the teachings of Duan, which teaches transitioning an operating state of the AV from a not-ready state indicative of AV motion to a running state when a linear speed of the AV is below a predetermined threshold, since they are both related to localizing an AV, and incorporation of the teachings of Duan would increase the accuracy of the overall localization operation by ensuring that the vehicle speed remains below a threshold, moving slowly enough to reduce localization confidence error.

Claims 4, 11 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (US 20190383945 A1) in view of Frick et al. (US 20230020033 A1) as applied to claims 1, 3, 8, 10, 15, and 17 and further in view of Majithia (US 20210001891 A1). Regarding claims 4, 11 and 18, Wang in view of Frick teach the method of claim 1, wherein generating the yaw angle candidates further comprises (re-claim 4), the system of claim 8, wherein the computing device is further configured to (re-claim 11), and the non-transitory computer-readable medium of claim 15, wherein the instructions cause the at least one computing device to perform further operations comprising (re-claim 18): Wang in view of Frick do not explicitly teach aligning the reference map with the lidar data, using an iterative closest point (ICP) algorithm to improve accuracy of the map. Majithia, directed to training data generation for dynamic objects using high definition (HD) map data, teaches aligning the reference map with the lidar data, using an iterative closest point (ICP) algorithm to improve accuracy of the map (localization is done using the OMap, which is a reference map, and uses ICP in relation to raw point cloud scans or lidar data, see at least [¶ 139, Majithia]: "A localizer may use OMap points to perform localization. The localizer may use raw point cloud scans to perform ICP vis-à-vis (in relation to) OMap to compute a vehicle pose"). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified Wang in view of Frick's method of localizing a vehicle to incorporate the teachings of Majithia, which teaches aligning the reference map with the lidar data, using an iterative closest point (ICP) algorithm to improve accuracy of the map, since they are all related to localizing an AV, and incorporation of the teachings of Majithia would increase accuracy of the overall localization of the vehicle, [¶ 166, Majithia]: "The system may work well to generate locations of the cars in the OMap.
Using a prior car model and performing ICP on vertices of a merged-convex hull, the system may determine an accurate car-orientation".

Claims 5, 13 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (US 20190383945 A1) in view of Frick et al. (US 20230020033 A1) as applied to claims 1, 3, 8, 10, 15, and 17 and further in view of Lawlor (US 20210089572 A1) and Chen et al. (US 20180188039 A1). Regarding claims 5, 13 and 19, Wang in view of Frick teach the method of claim 1, further comprising (re-claim 5), the system of claim 8, wherein the computing device is further configured to (re-claim 13), and the non-transitory computer-readable medium of claim 15, wherein the instructions cause the at least one computing device to perform further operations comprising (re-claim 19): Wang in view of Frick does not explicitly teach validating the initial pose of the AV, by a machine learning binary classifier operating on the one or more computing devices, the validating including: determining a position and orientation of the AV; and comparing the position and orientation to the initial pose determined from the lidar data. Lawlor, directed to predicting a pose error for a sensor system based on a trained machine learning model, teaches validating the initial pose of the AV, by a machine learning classifier operating on the one or more computing devices (see at least [¶ 29, Fig. 1, Lawlor]: "the system 100 of FIG. 1 introduces a capability for predicting a pose error for a sensor system based on a trained machine learning model"), the validating including: determining a position and orientation of the AV (see at least [¶ 50, Fig. 4A, Lawlor]: "The model module 205 uses the sensor system pose data associated with the image 401 to determine the respective capture location 407 with respect to a common or global coordinate system 409"); and comparing the position and orientation to the initial pose determined from the lidar data (see at least [¶ 55, Fig. 3, Lawlor]: "In step 307, the model module 205 uses the above-discussed meta-data associated with the sensor system (e.g., GPS, IMU, camera, LiDAR, Radar, etc.) to determine the deviation between 3D coordinates of a survey point and the ray connecting the capture location with the pixel location, and calculates an error between the ray generated for the image and the known physical location"). Chen, directed to maps for autonomous vehicles, teaches a machine learning binary classifier (see at least [¶ 132, Chen]: "The HD map system first builds a set of human verified ground truth ICP results dataset, and computes a feature vector for each ICP result. This collection of feature vectors allows the HD map system to train a binary classifier, e.g., an SVM classifier, which can predict the probability of current ICP result being correct according to the corresponding feature vector. The trained machine learning model (e.g., SVM model), reports a probability of each ICP result being correct, which allows the HD map system to filter out bad ICP results, and inform human labelers of bad ICP results that may need manual adjustment.").
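To make the two preceding grounds easier to follow, here is a minimal sketch of the general pattern those references describe: refining a pose by aligning lidar points to the reference map with ICP, then accepting or rejecting the result with a binary classifier over simple alignment features. Everything below (the translation-only ICP step, the feature choice, the hand-set decision rule) is an illustrative assumption, not the cited references' implementations.

    import math

    def icp_translation_only(scan_pts, map_pts, iterations=10):
        # Toy, translation-only ICP: repeatedly match each shifted scan point
        # to its nearest map point, then move by the mean offset.
        tx, ty = 0.0, 0.0
        residual = float("inf")
        for _ in range(iterations):
            dxs, dys, dists = [], [], []
            for sx, sy in scan_pts:
                px, py = sx + tx, sy + ty
                nx, ny = min(map_pts, key=lambda m: (m[0] - px) ** 2 + (m[1] - py) ** 2)
                dxs.append(nx - px)
                dys.append(ny - py)
                dists.append(math.hypot(nx - px, ny - py))
            residual = sum(dists) / len(dists)  # mean residual before this update
            tx += sum(dxs) / len(dxs)
            ty += sum(dys) / len(dys)
        return (tx, ty), residual

    def validate_pose(residual_m, shift_m, weights=(-4.0, -1.0), bias=2.0):
        # Toy binary classifier: a hand-set linear decision rule standing in
        # for a trained model (e.g., an SVM over alignment features). Accept
        # the pose only when the ICP residual and the correction are both small.
        score = bias + weights[0] * residual_m + weights[1] * shift_m
        return score > 0.0

    # Hypothetical usage with made-up 2D points:
    # (tx, ty), residual = icp_translation_only([(0.1, 0.0), (1.2, 0.9)],
    #                                           [(0.0, 0.0), (1.0, 1.0)])
    # pose_ok = validate_pose(residual, math.hypot(tx, ty))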
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified Wang in view of Frick's method of localizing a vehicle to incorporate the teachings of Lawlor, which teaches validating the initial pose of the AV, by a machine learning classifier operating on the one or more computing devices, the validating including: determining a position and orientation of the AV; and comparing the position and orientation to the initial pose determined from the lidar data, since they are all related to localizing an AV, and incorporation of the teachings of Lawlor would increase accuracy of the overall localization of the vehicle by validating vehicle poses against a machine learning classifier trained in the operating environment. Additionally, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to have further modified the method of localizing a vehicle of Wang, Frick and Lawlor to incorporate the teachings of Chen, which teaches a machine learning binary classifier, since they are all related to localizing an AV, and incorporation of the teachings of Chen would increase accuracy of the overall localization of the vehicle by utilizing a binary classifier distinguishing between acceptable and unacceptable pose predictions, informing the AV whether the pose is trustworthy or not.

Claims 6 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (US 20190383945 A1) in view of Frick et al. (US 20230020033 A1), Lawlor (US 20210089572 A1) and Chen et al. (US 20180188039 A1) as applied to claims 5, 13 and 19 and further in view of Tay et al. (US 20190196481 A1). Regarding claims 6 and 20, Wang in view of Frick, Lawlor and Chen teach the method of claim 5 (re-claim 6) and the non-transitory computer-readable medium of claim 19 (re-claim 20). Wang in view of Frick, Lawlor and Chen do not explicitly teach wherein the validating further comprises using camera images from one or more ring cameras to visually validate the initial pose. Tay, directed to autonomous navigation, teaches wherein the validating further comprises using camera images from one or more ring cameras to visually validate the initial pose (redundant cameras positioned around an AV overlap with LIDAR fields of view, validating the pose in the environment, see at least [¶ 13, Tay]: "fields of view of adjacent LIDAR sensors and/or color cameras on the autonomous vehicle may exhibit known nominal spatial overlap (e.g., redundancy in three-dimensional space)….The autonomous vehicle can thus compare features (e.g., points, surfaces, objects) detected in overlapping regions of two concurrent images output by these sensors in order to confirm that these overlapping regions depict the same constellation of surfaces").
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified the method of localizing a vehicle of Wang in view of Frick, Lawlor and Chen to incorporate the teachings of Tay, which teaches wherein the validating further comprises using camera images from one or more ring cameras to visually validate the initial pose, since they are all related to localizing an AV, and incorporation of the teachings of Tay would increase accuracy of the overall localization of the vehicle by validating vehicle poses against a camera overlapping the LIDAR field of view.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Wang (US 20190383945 A1) in view of Frick et al. (US 20230020033 A1) as applied to claims 1, 3, 8, 10, 15, and 17 and further in view of Tay et al. (US 20190196481 A1). Regarding claim 12, Wang in view of Frick teach the system of claim 8. Wang in view of Frick do not explicitly teach further comprising ring cameras configured to provide camera images to visually validate the initial pose during a visual validation procedure. Tay, directed to autonomous navigation, teaches further comprising ring cameras configured to provide camera images to visually validate the initial pose during a visual validation procedure (redundant cameras positioned around an AV overlap with LIDAR fields of view, validating the pose in the environment, see at least [¶ 13, Tay]: "fields of view of adjacent LIDAR sensors and/or color cameras on the autonomous vehicle may exhibit known nominal spatial overlap (e.g., redundancy in three-dimensional space)….The autonomous vehicle can thus compare features (e.g., points, surfaces, objects) detected in overlapping regions of two concurrent images output by these sensors in order to confirm that these overlapping regions depict the same constellation of surfaces"). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified the method of localizing a vehicle of Wang in view of Frick to incorporate the teachings of Tay, which teaches further comprising ring cameras configured to provide camera images to visually validate the initial pose during a visual validation procedure, since they are all related to localizing an AV, and incorporation of the teachings of Tay would increase accuracy of the overall localization of the vehicle by validating vehicle poses against a camera overlapping the LIDAR field of view.

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (US 20190383945 A1) in view of Frick et al. (US 20230020033 A1) as applied to claims 1, 3, 8, 10, 15, and 17 and further in view of Longin (US 20170285179 A1). Regarding claims 7 and 14, Wang in view of Frick teach the method of claim 1 (re-claim 7) and the system of claim 8 (re-claim 14). Wang in view of Frick does not explicitly teach wherein generating the position candidates is based on a reference map derived from multiple GPS satellites, the reference map having a horizontal accuracy of at least ten meters.
Longin, directed to determining the accuracy of a satellite-based navigation system, teaches wherein generating the position candidates is based on a reference map derived from multiple GPS satellites, the reference map having a horizontal accuracy of at least ten meters (see at least [¶ 3, Longin]: "known satellite-based navigation systems, such as, for example, GPS (Global Positioning System) or GLONASS (GLObal NAvigation Satellite System), a horizontal position determination having an accuracy of approximately 10 meters can be achieved in the civilian sector"). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified Wang in view of Frick's method of localizing a vehicle to incorporate the teachings of Longin, which teaches wherein generating the position candidates is based on a reference map derived from multiple GPS satellites, the reference map having a horizontal accuracy of at least ten meters, since they are all related to localizing an AV, and incorporation of the teachings of Longin would increase accuracy of the overall localization of the vehicle by achieving an initial location pose within 10 meters of accuracy, which is standard in the art.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IRENE C KHUU whose telephone number is (703)756-1703. The examiner can normally be reached Monday - Friday 0900-1730. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Rachid Bendidi, can be reached on (571)272-4896. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IRENE C KHUU/
Examiner, Art Unit 3664

/RACHID BENDIDI/
Supervisory Patent Examiner, Art Unit 3664

Prosecution Timeline

Dec 03, 2021
Application Filed
Mar 07, 2025
Non-Final Rejection — §103, §112
Jun 09, 2025
Response Filed
Jun 24, 2025
Final Rejection — §103, §112
Sep 10, 2025
Request for Continued Examination
Oct 02, 2025
Response after Non-Final Action
Feb 02, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12487098
OPTICAL MAP DATA AGGREGATION AND FEEDBACK IN A CONSTRUCTION ENVIRONMENT
2y 5m to grant Granted Dec 02, 2025
Patent 12473773
Access Door To A Vehicle
2y 5m to grant Granted Nov 18, 2025
Patent 12455564
VEHICLE DATA SHARING FOR COORDINATED ROBOT ACTIONS
2y 5m to grant Granted Oct 28, 2025
Patent 12441371
DRIVER AND ENVIRONMENT MONITORING TO PREDICT HUMAN DRIVING MANEUVERS AND REDUCE HUMAN DRIVING ERRORS
2y 5m to grant Granted Oct 14, 2025
Study what changed to get past this examiner. Based on 4 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
47%
Grant Probability
99%
With Interview (+88.9%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 15 resolved cases by this examiner. Grant probability derived from career allow rate.
