DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This is a Final Office Action on the merits. Claims 1-14 are currently pending and are addressed below.
Response to Amendment
The drawings were objected to due to minor informalities. Applicant amended the drawings accordingly; as such, the objection has been withdrawn.
The specification was objected to due to minor informalities. Applicant amended the abstract accordingly; as such, the objection has been withdrawn.
Claims 1 and 7 were objected to due to minor informalities. Applicant amended the claims accordingly; as such, the objections have been withdrawn.
Claims 6 and 12 were rejected under 35 U.S.C. 112 for being indefinite. Applicant amended the claims accordingly; as such, the rejections have been withdrawn.
Response to Arguments
Applicant’s arguments on pg. 18 of the response, with respect to the rejection(s) of claim(s) 1-12 under 35 U.S.C. 103, have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Lu.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 13-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 13-14 recite “…a point cloud…”. It is unclear whether this point cloud is the same as that already recited in parent claims 1 and 7: “…a point cloud captured by the LIDAR system…”.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wheeler (US 20190204092 A1, filed 12/03/2018), hereinafter “Wheeler”, in view of Schroeter (US 20200386555 A1, filed 06/10/2020), hereinafter “Schroeter”, and further in view of Lu (US 20210365712 A1, filed 01/30/2019), hereinafter “Lu”.
Regarding claim 1, Wheeler teaches:
A method of controlling operation of a Self-Driving Car (SDC), (See at least Abstract: “A vehicle, for example, an autonomous vehicle performs localization to determine the current location of the vehicle using different localization techniques as the vehicle drives…”)
the SDC being communicatively coupled to a Light Detection and Ranging (LIDAR) system and (See at least [0038]: “…The vehicle sensors 105 comprise a camera, a light detection and ranging sensor (LIDAR)…”)
an electronic device configured to acquire data from a plurality of localization sources for locating the SDC on a map representation of a geographical region, (See at least [0042-0043]: “…The vehicle computing system 120 comprises a perception module 210…The perception module 210 receives sensor data 230 from the sensors 105 of the vehicle 150. This includes data collected by cameras of the car, LIDAR, IMU, GPS navigation system, and so on…”. See also [0080-0083] regarding the different localization techniques.)
the method comprising: at a given timestamp during operation of the SDC when the SDC is located at a current location in the geographical region: (See at least [0049]: “The localization APIs 250 determine the current location of the vehicle, for example, when the vehicle starts and as the vehicle moves along a route. The localization APIs 250 include a localize API that determines an accurate location of the vehicle within the HD Map…”)
generating, using a localization algorithm, a candidate location of the SDC using a point cloud captured by the LIDAR system, the map representation, (See at least [0083]: “Another localization technique is feature-based localization that detects features using sensor data such as camera images and lidar scans and compares the features with features in the HD map to determine the location of the vehicle…” & [0008]: “…Each localization variant represents a localization technique for determining location of an autonomous vehicle…”.)
generating, using a Machine Learning Algorithm (MLA), a parameter indicative of whether the localization algorithm is likely to generate divergent candidate locations for the SDC in a candidate portion of the map representation under a variety of conditions, the candidate portion including the current location, (See at least [0113]: “…a neural network such as a multilayered perceptron configured to receive an encoding of a geographical region as input and determine a score for a localization variant. The score indicates a measure of performance, for example, a high score may indicate that the localization variant performs well and a low score indicates that the localization variant performs poorly…” & [0088]: “The localization index generation module 930 evaluates different localization variants for each driving context and identifies one or more localization variants to be used in the geographical region. The driving context comprises information describing a current track of the autonomous vehicle, i.e., an instance during which the autonomous vehicle is driving along a portion of a route. A driving context may be represented as a tuple that has various elements such as geographical region, time of day, weather condition…”.)
wherein an input to the MLA comprises (See at least [0113-0114]: “…the localization module 290 the trained deep learning based model receives an encoding of a geographical region as input and predicts a localization variant that performs well in that geographical region. The encoding of the geographical region may comprise HD map data for the geographical region…The localization module 290 tests the performance of the deep learning based model to see if the accuracy of the results predicted is at least above a threshold value. The localization module 290 tests the performance by taking a map of one or more geographical regions, performing a brute force analysis of localization variants by measuring the performance of various localization variants, and various sensor configurations for each localization variant…”)
wherein an output of the MLA is the parameter; (See at least [0114-0116]: “…The localization module 290 executes the deep learning based model to determine the best performing localization variants or to determine a score for a particular localization variant. The localization module 290 compares the results of the brute force execution with the predictions of the deep learning based model and determine error statistics. The localization module 290 measures the net loss in performance to determine whether the deep learning based model is usable in particular geographical regions. If the localization module 290 determines that the deep learning based model has poor performance and is unable to predict the best localization variant, the localization module 290 identifies the geographical regions where the model is inaccurate…”)
determining, using the parameter, that the localization algorithm is an unreliable localization source in the candidate portion of the map representation; (See at least [0113]: “…The score indicates a measure of performance, for example, a high score may indicate that the localization variant performs well and a low score indicates that the localization variant performs poorly…” & [0122]: “…the localization module 290 prunes localization variants that are very likely to perform poorly in a given driving context…”)
determining a multi-sourced location of the SDC using data acquired from a reduced set of localization sources, the reduced set of localization sources excluding the localization algorithm; and (See at least [0122]: “…the localization module 290 prunes localization variants that are very likely to perform poorly in a given driving context. The localization module 290 may mark these localization variants for the geographical regions. Accordingly, the localization module 290 is able to eliminate these localization variants immediately from any analysis…” & [0095]: “…The localization index stores a mapping from each driving context to one or more localization variants based on a measure of performance of each localization variant in the driving context. An autonomous vehicle uses the localization index to determine the location of the autonomous vehicle as the autonomous vehicle is driving…”)
controlling operation of the SDC using the multi-sourced location as the current location of the SDC. (See at least [0095]: “…The localization index stores a mapping from each driving context to one or more localization variants based on a measure of performance of each localization variant in the driving context. An autonomous vehicle uses the localization index to determine the location of the autonomous vehicle as the autonomous vehicle is driving. The system navigates by determining control signals for the autonomous vehicle based on the determined location and sending 1060 control signals to the controls of the autonomous vehicle.”)
However, Wheeler does not explicitly teach using an initial approximation of the current location of the SDC on the map representation to generate the candidate location.
Schroeter, however, teaches estimating the location of the vehicle by performing a search within a map, which includes determining particles that represent an estimated pose (i.e., position and orientation) of a vehicle using an iterative method (See at least [0117] & [0119]: “…the particles of the set after each iteration representing better estimated poses of the vehicle 150a compared to the set of particles in each previous iteration…”). Therefore, one having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to combine Wheeler’s technique of generating the candidate location using a LIDAR point cloud and map with Schroeter’s technique of generating the candidate location using the initial approximation of the current vehicle location on the map. Doing so would be obvious “to determine a more accurate geographic location of the vehicle” which “may be significantly faster and more efficient” (See [0035] of Schroeter).
However, Wheeler and Schroeter in combination do not explicitly teach that an input to the MLA is a point cloud.
Lu, however, teaches a learning-based LiDAR localization system that receives “an online LiDAR point cloud, a pre-built 3D point cloud map, and a predicted pose of an ADV as inputs” for estimating an optimal pose of a vehicle (See at least [0027]).
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to combine Wheeler and Schroeter’s method with Lu’s technique of an input to the MLA comprising a point cloud. Doing so would be obvious to “accurately estimate the ADV's position and orientation” (See [0077] of Lu).
Regarding claim 2, Wheeler, Schroeter, and Lu in combination teach all the limitations of claim 1 as discussed above.
Wheeler additionally teaches:
wherein the candidate location is a candidate position of the SDC. (See at least [0083]: “Another localization technique is feature-based localization that detects features using sensor data such as camera images and lidar scans and compares the features with features in the HD map to determine the location of the vehicle…”)
Regarding claim 3, Wheeler, Schroeter, and Lu in combination teach all the limitations of claim 1 as discussed above.
Schroeter additionally teaches:
wherein the candidate location is a candidate position and a candidate orientation of the SDC. (See at least [0138]: “The vehicle computing system may perform localization, or determining the pose of a vehicle (its position and orientation) with respect to a given reference map…”)
Regarding claim 4, Wheeler, Schroeter, and Lu in combination teach all the limitations of claim 1 as discussed above.
Wheeler, Schroeter, and Lu in combination do not explicitly teach:
wherein the variety of conditions include at least one of rain, snow, and dirt occluding the LIDAR system.
However, Wheeler does teach a driving context that can include a weather condition (See [0088]). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include among the conditions at least one of rain, snow, and dirt occluding the LIDAR system as an obvious design choice. Doing so would be obvious since “localization techniques have parameters that need to be tuned for different driving contexts” (See [0006] of Wheeler).
Regarding claim 5, Wheeler, Schroeter, and Lu in combination teach all the limitations of claim 1 as discussed above.
Wheeler additionally teaches:
wherein the map representation is a High Definition (HD) map. (See at least [0031]: “An autonomous vehicle that uses the HD map needs to localize, i.e., determine the current location of the autonomous vehicle with high accuracy to be able to navigate…”)
Regarding claim 6, Wheeler, Schroeter, and Lu in combination teach all the limitations of claim 1 as discussed above.
Wheeler additionally teaches:
wherein the plurality of localization sources comprise an odometry based localization source, a LIDAR based localization source, an image-based localization source, and a Global Navigation Satellite System (GNSS) source. (See at least [0043]: “The perception module 210 receives sensor data 230 from the sensors 105 of the vehicle 150. This includes data collected by cameras of the car, LIDAR, IMU, GPS navigation system, and so on…”. See also [0080-0083] regarding the different localization techniques that use data from the IMU, LIDAR, cameras, and GPS.)
Regarding claim 7, Wheeler teaches:
An electronic device for controlling operation of a Self-Driving Car (SDC), (See at least Fig. 2 & Abstract: “A vehicle, for example, an autonomous vehicle performs localization to determine the current location of the vehicle using different localization techniques as the vehicle drives…”)
the SDC being communicatively coupled to a Light Detection and Ranging (LIDAR) system and (See at least [0038]: “…The vehicle sensors 105 comprise a camera, a light detection and ranging sensor (LIDAR)…”)
the electronic device configured to acquire data from a plurality of localization sources for locating the SDC on a map representation of a geographical region, (See at least [0042-0043]: “…The vehicle computing system 120 comprises a perception module 210…The perception module 210 receives sensor data 230 from the sensors 105 of the vehicle 150. This includes data collected by cameras of the car, LIDAR, IMU, GPS navigation system, and so on…”. See also [0080-0083] regarding the different localization techniques.)
the electronic device being configured to: at a given timestamp during operation of the SDC when the SDC is located at a current location in the geographical region: (See at least [0049]: “The localization APIs 250 determine the current location of the vehicle, for example, when the vehicle starts and as the vehicle moves along a route. The localization APIs 250 include a localize API that determines an accurate location of the vehicle within the HD Map…”)
generate, using a localization algorithm, a candidate location of the SDC using a point cloud captured by the LIDAR system, the map representation, (See at least [0083]: “Another localization technique is feature-based localization that detects features using sensor data such as camera images and lidar scans and compares the features with features in the HD map to determine the location of the vehicle…” & [0008]: “…Each localization variant represents a localization technique for determining location of an autonomous vehicle…”.)
generate, using a Machine Learning Algorithm (MLA), a parameter indicative of whether the localization algorithm is likely to generate divergent candidate locations for the SDC in a candidate portion of the map representation under a variety of conditions, the candidate portion including the current location, (See at least [0113]: “…a neural network such as a multilayered perceptron configured to receive an encoding of a geographical region as input and determine a score for a localization variant. The score indicates a measure of performance, for example, a high score may indicate that the localization variant performs well and a low score indicates that the localization variant performs poorly…” & [0088]: “The localization index generation module 930 evaluates different localization variants for each driving context and identifies one or more localization variants to be used in the geographical region. The driving context comprises information describing a current track of the autonomous vehicle, i.e., an instance during which the autonomous vehicle is driving along a portion of a route. A driving context may be represented as a tuple that has various elements such as geographical region, time of day, weather condition…”.)
wherein an input to the MLA comprises (See at least [0113-0114]: “…the localization module 290 the trained deep learning based model receives an encoding of a geographical region as input and predicts a localization variant that performs well in that geographical region. The encoding of the geographical region may comprise HD map data for the geographical region…The localization module 290 tests the performance of the deep learning based model to see if the accuracy of the results predicted is at least above a threshold value. The localization module 290 tests the performance by taking a map of one or more geographical regions, performing a brute force analysis of localization variants by measuring the performance of various localization variants, and various sensor configurations for each localization variant…”)
wherein an output of the MLA is the parameter; (See at least [0114-0116]: “…The localization module 290 executes the deep learning based model to determine the best performing localization variants or to determine a score for a particular localization variant. The localization module 290 compares the results of the brute force execution with the predictions of the deep learning based model and determine error statistics. The localization module 290 measures the net loss in performance to determine whether the deep learning based model is usable in particular geographical regions. If the localization module 290 determines that the deep learning based model has poor performance and is unable to predict the best localization variant, the localization module 290 identifies the geographical regions where the model is inaccurate…”)
determine, using the parameter, that the localization algorithm is an unreliable localization source in the candidate portion of the map representation; (See at least [0113]: “…The score indicates a measure of performance, for example, a high score may indicate that the localization variant performs well and a low score indicates that the localization variant performs poorly…” & [0122]: “…the localization module 290 prunes localization variants that are very likely to perform poorly in a given driving context…”)
determine a multi-sourced location of the SDC using data acquired from a reduced set of localization sources, the reduced set of localization sources excluding the localization algorithm; and (See at least [0122]: “…the localization module 290 prunes localization variants that are very likely to perform poorly in a given driving context. The localization module 290 may mark these localization variants for the geographical regions. Accordingly, the localization module 290 is able to eliminate these localization variants immediately from any analysis…” & [0095]: “…The localization index stores a mapping from each driving context to one or more localization variants based on a measure of performance of each localization variant in the driving context. An autonomous vehicle uses the localization index to determine the location of the autonomous vehicle as the autonomous vehicle is driving…”. See also [0011] regarding the measure of performance.)
control operation of the SDC using the multi-sourced location as the current location of the SDC. (See at least [0095]: “…The localization index stores a mapping from each driving context to one or more localization variants based on a measure of performance of each localization variant in the driving context. An autonomous vehicle uses the localization index to determine the location of the autonomous vehicle as the autonomous vehicle is driving. The system navigates by determining control signals for the autonomous vehicle based on the determined location and sending 1060 control signals to the controls of the autonomous vehicle.”)
However, Wheeler does not explicitly teach using an initial approximation of the current location of the SDC on the map representation to generate the candidate location.
Schroeter, however, teaches estimating the location of the vehicle by performing a search within a map, which includes determining particles that represent an estimated pose (i.e., position and orientation) of a vehicle using an iterative method (See at least [0117] & [0119]: “…the particles of the set after each iteration representing better estimated poses of the vehicle 150a compared to the set of particles in each previous iteration…”). Therefore, one having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to combine Wheeler’s technique of generating the candidate location using a LIDAR point cloud and map with Schroeter’s technique of generating the candidate location using the initial approximation of the current vehicle location on the map. Doing so would be obvious “to determine a more accurate geographic location of the vehicle” which “may be significantly faster and more efficient” (See [0035] of Schroeter).
However, Wheeler and Schroeter in combination do not explicitly teach that an input to the MLA is a point cloud.
Lu, however, teaches a learning-based LiDAR localization system that receives “an online LiDAR point cloud, a pre-built 3D point cloud map, and a predicted pose of an ADV as inputs” for estimating an optimal pose of a vehicle (See at least [0027]).
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to combine Wheeler and Schroeter’s method with Lu’s technique of an input to the MLA comprising a point cloud. Doing so would be obvious to “accurately estimate the ADV's position and orientation” (See [0077] of Lu).
Regarding claim 8, Wheeler, Schroeter, and Lu in combination teach all the limitations of claim 7 as discussed above.
Wheeler additionally teaches:
wherein the candidate location is a candidate position of the SDC. (See at least [0083]: “Another localization technique is feature-based localization that detects features using sensor data such as camera images and lidar scans and compares the features with features in the HD map to determine the location of the vehicle…”)
Regarding claim 9, Wheeler, Schroeter, and Lu in combination teach all the limitations of claim 7 as discussed above.
Schroeter additionally teaches:
wherein the candidate location is a candidate position and a candidate orientation of the SDC. (See at least [0138]: “The vehicle computing system may perform localization, or determining the pose of a vehicle (its position and orientation) with respect to a given reference map…”)
Regarding claim 10, Wheeler, Schroeter, and Lu in combination teach all the limitations of claim 7 as discussed above.
Wheeler and Schroeter in combination do not explicitly teach:
wherein the variety of conditions include at least one of rain, snow, and dirt occluding the LIDAR system.
However, Wheeler does teach a driving context that can include a weather condition (See [0088]). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include among the conditions at least one of rain, snow, and dirt occluding the LIDAR system as an obvious design choice. Doing so would be obvious since “localization techniques have parameters that need to be tuned for different driving contexts” (See [0006] of Wheeler).
Regarding claim 11, Wheeler, Schroeter, and Lu in combination teach all the limitations of claim 7 as discussed above.
Wheeler additionally teaches:
wherein the map representation is a High Definition (HD) map. (See at least [0031]: “An autonomous vehicle that uses the HD map needs to localize, i.e., determine the current location of the autonomous vehicle with high accuracy to be able to navigate…”)
Regarding claim 12, Wheeler, Schroeter, and Lu in combination teach all the limitations of claim 7 as discussed above.
Wheeler additionally teaches:
wherein the plurality of localization sources comprises an odometry based localization source, a LIDAR based localization source, an image-based localization source, and a Global Navigation Satellite System (GNSS) source. (See at least [0043]: “The perception module 210 receives sensor data 230 from the sensors 105 of the vehicle 150. This includes data collected by cameras of the car, LIDAR, IMU, GPS navigation system, and so on…”. See also [0080-0083] regarding the different localization techniques that use data from the IMU, LIDAR, cameras, and GPS.)
Regarding claim 13, Wheeler, Schroeter, and Lu in combination teach all the limitations of claim 1 as discussed above.
Wheeler additionally teaches:
wherein the MLA was trained using a plurality of training data points, and (See at least [0113]: “…The localization module 290 uses a training data set comprising the samples based on tracks representing past instances of autonomous vehicles driving through various geographical regions…”)
wherein each training data point comprises (See at least [0113-0114]: “…the localization module 290 the trained deep learning based model receives an encoding of a geographical region as input and predicts a localization variant that performs well in that geographical region. The encoding of the geographical region may comprise HD map data for the geographical region…The localization module 290 tests the performance of the deep learning based model to see if the accuracy of the results predicted is at least above a threshold value. The localization module 290 tests the performance by taking a map of one or more geographical regions, performing a brute force analysis of localization variants by measuring the performance of various localization variants, and various sensor configurations for each localization variant…” & [0011]: “The measure of performance of a localization variant in a particular driving context may be determined based on one or more factors including…a time of execution of the localization variant in the driving context…”)
a label indicating whether candidate locations for the timestamp converge within a pre-determined radius. (See at least [0119]: “…the localization module 290 collects statistics based on analysis of localization variants. Examples of statistics collected includes convergence radius…From the localization statistics the localization module 290 builds a map of a measure of confidence in the localization variant at each point in the map…The localization module 290 creates a visualization that shows a color-coded representation of the map, for example, a map with red indicating high error and green indicating low error. Red areas would indicate locations that need further investigation, for example, analysis of other localization variants. The map of confidence values also acts as a measure of a level of trust in localization results at specific locations…”)
Lu additionally teaches:
…a point cloud… (See at least [0029-0030]: “in one embodiment, the learning based LiDAR localization system is driven by data that can be automatically or semi-automatically collected in large volumes using offline methods. The large volumes of data include ground truth trajectories, and may be used to train the localization system for localization tasks…In one embodiment, the predicted pose can be generated by an inertial measurement unit (IMU) of the ADV or a vehicle dynamics model of the ADV, and can measure incremental motions between consecutive LiDAR frames. The predicted pose may diverge from the ground truth pose of the ADV, resulting in an offset. As such, recovering the offset is equivalent to estimating the vehicle location. The learning-based LiDAR localization system can generate an optimal offset between the predicted pose and the ground truth pose by minimizing a matching cost between the online point cloud and the pre-built 3D point cloud map…”)
Regarding claim 14, Wheeler, Schroeter, and Lu in combination teach all the limitations of claim 7 as discussed above.
Wheeler additionally teaches:
wherein the MLA was trained using a plurality of training data points, and (See at least [0113]: “…The localization module 290 uses a training data set comprising the samples based on tracks representing past instances of autonomous vehicles driving through various geographical regions…”)
wherein each training data point comprises (See at least [0113-0114]: “…the localization module 290 the trained deep learning based model receives an encoding of a geographical region as input and predicts a localization variant that performs well in that geographical region. The encoding of the geographical region may comprise HD map data for the geographical region…The localization module 290 tests the performance of the deep learning based model to see if the accuracy of the results predicted is at least above a threshold value. The localization module 290 tests the performance by taking a map of one or more geographical regions, performing a brute force analysis of localization variants by measuring the performance of various localization variants, and various sensor configurations for each localization variant…” & [0011]: “The measure of performance of a localization variant in a particular driving context may be determined based on one or more factors including…a time of execution of the localization variant in the driving context…”)
a label indicating whether candidate locations for the timestamp converge within a pre-determined radius. (See at least [0119]: “…the localization module 290 collects statistics based on analysis of localization variants. Examples of statistics collected includes convergence radius…From the localization statistics the localization module 290 builds a map of a measure of confidence in the localization variant at each point in the map…The localization module 290 creates a visualization that shows a color-coded representation of the map, for example, a map with red indicating high error and green indicating low error. Red areas would indicate locations that need further investigation, for example, analysis of other localization variants. The map of confidence values also acts as a measure of a level of trust in localization results at specific locations…”)
Lu additionally teaches:
…a point cloud… (See at least [0029-0030]: “in one embodiment, the learning based LiDAR localization system is driven by data that can be automatically or semi-automatically collected in large volumes using offline methods. The large volumes of data include ground truth trajectories, and may be used to train the localization system for localization tasks…In one embodiment, the predicted pose can be generated by an inertial measurement unit (IMU) of the ADV or a vehicle dynamics model of the ADV, and can measure incremental motions between consecutive LiDAR frames. The predicted pose may diverge from the ground truth pose of the ADV, resulting in an offset. As such, recovering the offset is equivalent to estimating the vehicle location. The learning-based LiDAR localization system can generate an optimal offset between the predicted pose and the ground truth pose by minimizing a matching cost between the online point cloud and the pre-built 3D point cloud map…”)
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NIKKI MARIE M MOLINA whose telephone number is (571)272-5180. The examiner can normally be reached M-F, 9am-6pm PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aniss Chad can be reached at 571-270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NIKKI MARIE M MOLINA/Examiner, Art Unit 3662
/ANISS CHAD/Supervisory Patent Examiner, Art Unit 3662