DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/10/2025 has been entered.
Status of the Claims
This action is in response to the applicant’s amendment/response and RCE of December 10, 2025.
Claims 1-3, 5-8, 10-14, 16-19, 21-23, and 25-28 are pending and have been considered as follows.
Response to Arguments
Applicant’s arguments/amendments with respect to the objection to the claims have been fully considered and are persuasive. Therefore, the objection to the claims as presented in the Office Action of October 10, 2025 has been withdrawn. However, a new objection to the claims is presented below based on the amendments to the claims presented in the Amendment of December 10, 2025.
Applicant’s arguments/amendments with respect to the rejection of claims under 35 USC §112(b) have been fully considered and are persuasive. Therefore, the rejection of claims under 35 USC §112(b) as presented in the Office Action of October 10, 2025 has been withdrawn. However, a new rejection of claims under 35 USC §112(b) is presented below based on the amendments to the claims presented in the Amendment of December 10, 2025.
Applicant’s arguments/amendments with respect to the rejection of claims under 35 USC § 101 have been fully considered and are persuasive. Therefore, the rejection of claims under 35 USC § 101 has been withdrawn.
Applicant’s arguments/amendments with respect to the rejection of claims under 35 USC § 103 have been fully considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Objections
Claims 3, 12, 14, 21-23, and 25-28 are objected to because of the following informalities:
Claim 3, lines 3-4, “the front view image” should read “the front view digital image data”.
Claim 3, line 4, “the captured surround view image” should read “the captured surround view digital image data”.
Claim 12, line 23, “the comparison digital image processing comparison data” appears to be a typographical error and should read “the digital image processing comparison data”.
Claim 14, line 2, “HD digital map data updating operation” should read “the HD digital map data updating operation”.
Claim 21, line 6, “at the one processor” appears to be a typographical error and should read “at least one processor”.
Claim 21, line 24, “the predefined HD map” should read “the predefined HD digital map data”.
Claim 21, line 27, “digital data on the captured side lane mark” should read “the digital data on the captured side lane mark”.
Claim 21, line 28, “an in-map lane mark” should read “the in-map lane mark”.
Claim 22, line 1, “The vehicle of claim 21” should read “The HD digital map data management system of claim 21”.
Claim 22, line 3, “a satellite-based positioning module” should read “the satellite-based positioning module”.
Claim 23, line 1, “The vehicle of claim 22” should read “The HD digital map data management system of claim 22”.
Claim 23, lines 2-3, “the front view digital image data” should read “the front view image”.
Claim 25, line 1, “The vehicle of claim 23” should read “The HD digital map data management system of claim 23”.
Claim 25, lines 2-3, “the front view digital image data” should read “the front view image”.
Claim 26, line 1, “The vehicle of claim 25” should read “The HD digital map data management system of claim 25”.
Claim 27, line 1, “The vehicle of claim 26” should read “The HD digital map data management system of claim 26”.
Claim 28, line 1, “The vehicle of claim 27” should read “The HD digital map data management system of claim 27”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-3, 5-8, 10, 11, 21-23, and 25-28 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
The claims appear to be a translation from a foreign document; as such, there are multiple instances of improper antecedent basis. The Examiner has made a best effort to point out all of the issues. However, Applicant is advised to ensure the claims comply with the requirements of 35 USC 112.
As to claim 1, the limitation “the processor” at line 17 is unclear. There is insufficient antecedent basis for this limitation in the claim. For purposes of examination, the Examiner is interpreting the limitation to be “a processor”.
As to claim 10, the limitation “the feature” at line 9 is unclear. There is insufficient antecedent basis for this limitation in the claim. For purposes of examination, the Examiner is interpreting the limitation to be “a feature”.
As to claim 21, the limitation “the at the one processor” at line 6 is unclear. There is insufficient antecedent basis for this limitation in the claim. For purposes of examination, the Examiner is interpreting the limitation to be “at least one processor”.
Further, the limitations “the vehicle” at lines 9, 11, 13, and 14 are unclear. Specifically, it is unclear to the Examiner whether these refer to the same “at least one vehicle” previously recited at line 3 or to a different vehicle. For purposes of examination, the Examiner is interpreting the limitations to be “the at least one vehicle”.
Furthermore, the limitations “the HD digital map data” at lines 28, 32, and 33 are unclear. Specifically, it is unclear to the Examiner whether these refer to the same “predefined HD digital map data” previously recited at line 16 or to different data. For purposes of examination, the Examiner is interpreting the limitations to be “the predefined HD digital map data”.
As to claim 22, the limitation “the vehicle” at line 3 is unclear. Specifically, it is unclear to the Examiner whether this is the same “at least one vehicle” previously recited in claim 21 or a different vehicle. For purposes of examination, the Examiner is interpreting the limitation to be “the at least one vehicle”.
As to claim 25, the limitations “the HD digital map data” at lines 3 and 5 are unclear. Specifically, it is unclear to the Examiner whether these refer to the same “predefined HD digital map data” previously recited in claim 21 or to different data. For purposes of examination, the Examiner is interpreting the limitations to be “the predefined HD digital map data”.
As to claim 26, the limitations “the HD digital map data” at lines 3 and 6 are unclear. Specifically, it is unclear to the Examiner whether these refer to the same “predefined HD digital map data” previously recited in claim 21 or to different data. For purposes of examination, the Examiner is interpreting the limitations to be “the predefined HD digital map data”.
As to claim 27, the limitations “the HD digital map data” at lines 3 and 5 are unclear. Specifically, it is unclear to the Examiner whether these refer to the same “predefined HD digital map data” previously recited in claim 21 or to different data. For purposes of examination, the Examiner is interpreting the limitations to be “the predefined HD digital map data”.
As to claim 28, the limitation “the HD digital map data” at line 2 is unclear. Specifically, it is unclear to the Examiner whether this is the same “predefined HD digital map data” previously recited in claim 21 or different data. For purposes of examination, the Examiner is interpreting the limitation to be “the predefined HD digital map data”.
Claims 2, 3, 5-8, 11, and 23 are rejected as being dependent upon a rejected claim.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-8, 10-14, 16-19, 21-23, and 25-28 are rejected under 35 U.S.C. 103 as being unpatentable over PARK, US 2024/0153415 A1, hereinafter referred to as PARK, in view of Arditi, US 2019/0147331 A1, hereinafter referred to as Arditi, and further in view of SHASHUA et al., US 2017/0010618 A1, hereinafter referred to as SHASHUA.
As to claim 1, PARK teaches a method of updating a high-definition (HD) digital map data for navigating a vehicle using heterogeneous sensor map matching, comprising:
capturing, by a front view image sensor, a front view digital image data of the vehicle (see at least paragraphs 103-112 regarding the front camera 130a may be a camera provided in a black box, a camera of an autonomous driving control device for autonomous driving, or a camera for detecting obstacles. See also at least paragraphs 145-159 regarding the front camera 130a may obtain an image in a forward direction of the vehicle, PARK);
capturing, by a Light Detection and Ranging (LiDAR) sensor, a point cloud of the vehicle using a laser (see at least paragraphs 126-139 regarding the obstacle detector 120 may include one or more LiDAR sensors. The LiDAR sensors may include a transmitter for transmitting a laser and a receiver for receiving the laser that is reflected on a surface of an obstacle existing within the range of the sensor and returned, PARK);
capturing, by a surround view monitor (SVM) image sensor, a surround view digital image data of the vehicle including digital data on a side lane mark positioned laterally next to the vehicle (see at least paragraphs 103-112 regarding the first, second, third, and fourth cameras may be a camera of a surround monitoring device (e.g., surround view monitor (SVM) or around view monitor (AVM)), or a camera of a blind spot detection device (BSD). See also at least paragraphs 145-159 regarding recognizing the shape information of the objects, such as other vehicles, pedestrians, cyclists, lanes, curbs, guardrails, street trees, and streetlights located in front of the vehicle 1 and location information of at least one object. The front camera 130a and the first, second, third, and fourth cameras 131-134 may convert shape information of objects around the vehicle into electrical image signals, PARK).
PARK teaches the first processor 180 may search for a route from the current location of the vehicle to a destination based on the destination information and the current location information of the vehicle received from the first communication interface 170, match the route information for the searched route and the road information with the map information, generate navigation information from the map information in which the route information and the road information are matched, and control autonomous driving based on the generated navigation information (see at least paragraph 187, PARK), however, PARK does not explicitly teach performing HD digital map data updating operation, which comprises: processing the captured surround view digital image data including the digital data on the captured side lane mark and processing a portion of the HD digital map data including digital data on an in-map lane mark, to generate digital image processing comparison data.
However, such matter is taught by Arditi (see at least paragraph 14. See also at least paragraphs 46-51 regarding at step 720, the computing system of the autonomous vehicle may process the sensor data to identify any objects of interest. At step 730, the computing system may access the HD map stored on the autonomous vehicle for map data associated with the particular location (e.g., x, y coordinates). Then at step 740, the computing system may compare the map data associated with the location (e.g., x, y coordinates) with the object detected in step 720 to determine whether the detected objects exist in the map data. For example, for each detected object, the system may check whether that object exists in the map data. In particular embodiments, the system may generate a confidence score representing the likelihood of the detected object being accounted for in the map data. The confidence score may be based on, for example, a similarity comparison of the measured size, dimensions, classification, and/or location of the detected object with known objects in the map data. See also at least paragraph 74 regarding the cameras may be used for, e.g., recognizing roads, lane markings, street signs, traffic lights, police, other vehicles, and any other visible objects of interest. … For example, an autonomous vehicle 940 may build a 3D model of its surrounding based on data from its LiDAR, radar, sonar, and cameras, along with a pre-generated map obtained from the transportation management system 960 or the third-party system 970).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of Arditi which teaches performing HD digital map data updating operation, which comprises: processing the captured surround view digital image data including the digital data on the captured side lane mark and processing a portion of the HD digital map data including digital data on an in-map lane mark, to generate digital image processing comparison data with the system of PARK as both systems are directed to a system and method for controlling the autonomous vehicle based on the sensor data and map data, and one of ordinary skill in the art would have recognized the established utility of performing HD digital map data updating operation, which comprises: processing the captured surround view digital image data including the digital data on the captured side lane mark and processing a portion of the HD digital map data including digital data on an in-map lane mark, to generate digital image processing comparison data and would have predictably applied it to improve the system of PARK.
PARK, as modified by Arditi, does not explicitly teach generating, by the processor, modified digital map data of the HD digital map data to update digital data on a size and a location of the in-map lane mark in the HD digital map data based on the digital image processing comparison data; or autonomously driving the vehicle based on the modified digital map data of the HD digital map data reflecting the updated size and location of the in-map lane mark.
However, SHASHUA teaches generating, by the processor, modified digital map data of the HD digital map data to update digital data on a size and a location of the in-map lane mark in the HD digital map data based on the digital image processing comparison data (see at least paragraphs 511-515 regarding receiving, from the one or more sensors, outputs indicative of a motion of vehicle 1205. Based on analysis of images output from camera 122, processor 1715 may identify landmarks along road segment 1200. Landmarks may include traffic signs (e.g., speed limit signs), directional signs (e.g., highway directional signs pointing to different routes or places), and general signs (e.g., a rectangular business sign that is associated with a unique signature, such as a color pattern). The identified landmark may be compared with the landmark stored in sparse map 800. Processor 1715 may analyze the at least one environmental image to determine information associated with at least one navigational constraint. The navigational constraint may include at least one of a barrier (e.g., a lane separating barrier), an object (e.g., a pedestrian, a lamppost, a traffic light post), a lane marking (e.g., a solid yellow lane marking), a sign (e.g., a traffic sign, a directional sign, a general sign), or another vehicle (e.g., a leading vehicle, a following vehicle, a vehicle that is traveling on the side of vehicle 1205). See also at least paragraphs 525-539. See also at least paragraphs 550-552 regarding receiving an identifier associated with a landmark (step 2810). For example, processor 2232 may receive at least one identifier associated with a landmark from autonomous vehicle 2201 or 2202. Process 2800 may include associating the landmark with a corresponding road segment (step 2820). For example, processor 2232 may associate landmark 2206 with road segment 2200. 
Process 2800 may include updating an autonomous vehicle road navigation model to include the identifier associated with the landmark (step 2830). For example, processor 2232 may update the autonomous vehicle road navigation model to include an identifier (including, e.g., position information, size, shape, pattern) associated with landmark 2205 in the model. In some embodiments, processor 2232 may also update sparse map 800 to include the identifier associated with landmark 2205. See also at least paragraphs 581-583); and autonomously driving the vehicle based on the modified digital map data of the HD digital map data reflecting the updated size and location of the in-map lane mark (see at least paragraphs 472 and 511-513 regarding the portion of the model transmitted from server 1230 to vehicle 1205 may include an updated portion of the model. The at least one processor 1715 may cause at least one navigational maneuver (e.g., steering such as making a turn, braking, accelerating, passing another vehicle, etc.) by vehicle 1205 based on the received autonomous vehicle road navigation model or the updated portion of the model. See also at least paragraphs 525-539. See also at least paragraphs 550-552 regarding processor 2232 may also update sparse map 800 to include the identifier associated with landmark 2205. Process 2800 may include distributing the updated model to a plurality of autonomous vehicles (step 2840). For example, processor 2232 may distribute the updated model to autonomous vehicles 2201, 2202, and other vehicles that travel on road segment 2200 at later times. The update model may provide updated navigation guidance to autonomous vehicles. See also at least paragraphs 581-583).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of SHASHUA which teaches generating, by the processor, modified digital map data of the HD digital map data to update digital data on a size and a location of the in-map lane mark in the HD digital map data based on the digital image processing comparison data; and autonomously driving the vehicle based on the modified digital map data of the HD digital map data reflecting the updated size and location of the in-map lane mark with the system of PARK, as modified by Arditi, as both systems are directed to a system and method for controlling the autonomous vehicle based on the sensor data and map data, and one of ordinary skill in the art would have recognized the established utility of generating, by the processor, modified digital map data of the HD digital map data to update digital data on a size and a location of the in-map lane mark in the HD digital map data based on the digital image processing comparison data; and autonomously driving the vehicle based on the modified digital map data of the HD digital map data reflecting the updated size and location of the in-map lane mark and would have predictably applied it to improve the system of PARK as modified by Arditi.
As to claim 2, PARK teaches wherein the HD digital map data updating operation comprises generating the digital image processing comparison data based on position values measured by a satellite-based positioning module mounted in the vehicle (see at least paragraph 18 regarding the processor may recognize an object around the vehicle based on the map information matched with the current location information of the vehicle and the view image, and transmit information on the recognized object and the distance information with the mobile device to the mobile device. See also at least paragraphs 184-185 regarding the GPS receiver includes an antenna module for receiving signals from a plurality of GPS satellites and a signal processing module, PARK).
As to claim 3, PARK teaches wherein the HD digital map data updating operation further comprises generating the digital image processing comparison data using the front view image, the point cloud, and the captured surround view image based on the position values (see at least paragraph 18 regarding the processor may recognize an object around the vehicle based on the map information matched with the current location information of the vehicle and the view image, and transmit information on the recognized object and the distance information with the mobile device to the mobile device. See also at least paragraphs 184-185. See also at least paragraph 190 regarding when controlling autonomous driving based on the navigation information, the first processor 180 may recognize the obstacles based on the image information acquired from the front camera 130a and the first, second, third, and fourth cameras 131 to 134 and obstacle information detected by the obstacle detector, and avoid the recognized obstacle, PARK).
As to claim 5, PARK does not explicitly teach matching the front view digital image data captured by the front view image sensor to the HD digital map data based on the position values and performing lane matching by comparing a first lane obtained from the front view digital image data with a second lane obtained from the HD digital map data.
However, such matter is taught by Arditi (see at least paragraphs 46-47 regarding at step 720, the computing system of the autonomous vehicle may process the sensor data to identify any objects of interest. In particular embodiments, the autonomous vehicle may use an object classifier to process the sensor data to detect and identify objects. Using the scenario depicted in FIG. 6 as an example, the object classifier, based on the sensor data (e.g., camera or LiDAR data), may detect the existence of the objects (i.e., the box 620 and the pothole 630) in the road, as well as other objects such as buildings, road dividers, sidewalks, etc. In particular embodiments, the object classifier may further label the detected objects by classification type (e.g., the box 620 and pothole 630 may be specifically labeled as such, or generally labeled as debris). At step 730, the computing system may access the HD map stored on the autonomous vehicle for map data associated with the particular location (e.g., x, y coordinates). Then at step 740, the computing system may compare the map data associated with the location (e.g., x, y coordinates) with the object detected in step 720 to determine whether the detected objects exist in the map data. See also at least paragraphs 74-77).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of Arditi which teaches matching the front view digital image data captured by the front view image sensor to the HD digital map data based on the position values and performing lane matching by comparing a first lane obtained from the front view digital image data with a second lane obtained from the HD digital map data with the system of PARK as both systems are directed to a system and method for controlling the autonomous vehicle based on the sensor data and map data, and one of ordinary skill in the art would have recognized the established utility of matching the front view digital image data captured by the front view image sensor to the HD digital map data based on the position values and performing lane matching by comparing a first lane obtained from the front view digital image data with a second lane obtained from the HD digital map data and would have predictably applied it to improve the system of PARK.
As to claim 6, PARK teaches the first, second, third, and fourth cameras 131 to 134 may be a SVM camera, an AVM camera, or a BSD camera (see at least paragraph 157, PARK), however, PARK does not explicitly teach matching the captured surround view digital image data captured by the SVM image sensor to the HD digital map data based on the position values and performing road marker matching by comparing a first road marker obtained from the captured surround view digital image data with a second road marker obtained from the HD digital map data.
However, such matter is taught by Arditi (see at least paragraphs 46-47 regarding at step 720, the computing system of the autonomous vehicle may process the sensor data to identify any objects of interest. In particular embodiments, the autonomous vehicle may use an object classifier to process the sensor data to detect and identify objects. Using the scenario depicted in FIG. 6 as an example, the object classifier, based on the sensor data (e.g., camera or LiDAR data), may detect the existence of the objects (i.e., the box 620 and the pothole 630) in the road, as well as other objects such as buildings, road dividers, sidewalks, etc. In particular embodiments, the object classifier may further label the detected objects by classification type (e.g., the box 620 and pothole 630 may be specifically labeled as such, or generally labeled as debris). At step 730, the computing system may access the HD map stored on the autonomous vehicle for map data associated with the particular location (e.g., x, y coordinates). Then at step 740, the computing system may compare the map data associated with the location (e.g., x, y coordinates) with the object detected in step 720 to determine whether the detected objects exist in the map data. See also at least paragraphs 74-77).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of Arditi which teaches matching the captured surround view digital image data captured by the SVM image sensor to the HD digital map data based on the position values and performing road marker matching by comparing a first road marker obtained from the captured surround view digital image data with a second road marker obtained from the HD digital map data with the system of PARK as both systems are directed to a system and method for controlling the autonomous vehicle based on the sensor data and map data, and one of ordinary skill in the art would have recognized the established utility of matching the captured surround view digital image data captured by the SVM image sensor to the HD digital map data based on the position values and performing road marker matching by comparing a first road marker obtained from the captured surround view digital image data with a second road marker obtained from the HD digital map data and would have predictably applied it to improve the system of PARK.
As to claim 7, PARK does not explicitly teach matching the point cloud captured by the LiDAR sensor to the HD digital map data based on the position values and performing feature matching by comparing a first point obtained from the point cloud with a second point obtained from the HD digital map data.
However, such matter is taught by Arditi (see at least paragraphs 46-47 regarding at step 720, the computing system of the autonomous vehicle may process the sensor data to identify any objects of interest. In particular embodiments, the autonomous vehicle may use an object classifier to process the sensor data to detect and identify objects. Using the scenario depicted in FIG. 6 as an example, the object classifier, based on the sensor data (e.g., camera or LiDAR data), may detect the existence of the objects (i.e., the box 620 and the pothole 630) in the road, as well as other objects such as buildings, road dividers, sidewalks, etc. In particular embodiments, the object classifier may further label the detected objects by classification type (e.g., the box 620 and pothole 630 may be specifically labeled as such, or generally labeled as debris). At step 730, the computing system may access the HD map stored on the autonomous vehicle for map data associated with the particular location (e.g., x, y coordinates). Then at step 740, the computing system may compare the map data associated with the location (e.g., x, y coordinates) with the object detected in step 720 to determine whether the detected objects exist in the map data. See also at least paragraphs 74-77).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of Arditi which teaches matching the point cloud captured by the LiDAR sensor to the HD digital map data based on the position values and performing feature matching by comparing a first point obtained from the point cloud with a second point obtained from the HD digital map data with the system of PARK as both systems are directed to a system and method for controlling the autonomous vehicle based on the sensor data and map data, and one of ordinary skill in the art would have recognized the established utility of matching the point cloud captured by the LiDAR sensor to the HD digital map data based on the position values and performing feature matching by comparing a first point obtained from the point cloud with a second point obtained from the HD digital map data and would have predictably applied it to improve the system of PARK.
As to claim 8, PARK teaches wherein the generating the modified digital map data (see at least paragraph 18. See also at least paragraphs 190-195 regarding When controlling autonomous driving based on the navigation information, the first processor 180 may recognize the obstacles based on the image information acquired from the front camera 130a and the first, second, third, and fourth cameras 131 to 134 and obstacle information detected by the obstacle detector, and avoid the recognized obstacle. The first processor 180 may match the obstacles detected by front image information with the obstacles detected by the front radar information, and based on the matching result, obtain the type information, location information, and speed information on front obstacles of the vehicle 1, PARK).
PARK does not explicitly teach wherein the generating the modified digital map data of the HD digital map data is performed.
However, such matter is taught by Arditi (see at least paragraph 14 regarding generating and updating HD maps using data from different, heterogeneous sources).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of Arditi which teaches wherein the generating the modified digital map data of the HD digital map data is performed with the system of PARK as both systems are directed to a system and method for controlling the autonomous vehicle based on the sensor data and map data, and one of ordinary skill in the art would have recognized the established utility of having wherein the generating the modified digital map data of the HD digital map data is performed and would have predictably applied it to improve the system of PARK.
As to claim 10, PARK does not explicitly teach updating the second lane on the HD digital map data using the first lane obtained from the front view digital image data captured by the front view image sensor based on the result of the lane matching; updating the second road marker on the HD digital map data using the first road marker obtained from the captured surround view digital image data captured by the SVM image sensor based on the result of the road marker matching; or updating the second point on the HD digital map data corresponding to the feature using the first point corresponding to the feature obtained from the point cloud captured by the LiDAR sensor based on the result of the feature matching.
However, such matter is taught by Arditi (see at least paragraphs 46-47 regarding at step 720, the computing system of the autonomous vehicle may process the sensor data to identify any objects of interest. In particular embodiments, the autonomous vehicle may use an object classifier to process the sensor data to detect and identify objects. Using the scenario depicted in FIG. 6 as an example, the object classifier, based on the sensor data (e.g., camera or LiDAR data), may detect the existence of the objects (i.e., the box 620 and the pothole 630) in the road, as well as other objects such as buildings, road dividers, sidewalks, etc. In particular embodiments, the object classifier may further label the detected objects by classification type (e.g., the box 620 and pothole 630 may be specifically labeled as such, or generally labeled as debris). At step 730, the computing system may access the HD map stored on the autonomous vehicle for map data associated with the particular location (e.g., x, y coordinates). Then at step 740, the computing system may compare the map data associated with the location (e.g., x, y coordinates) with the object detected in step 720 to determine whether the detected objects exist in the map data. For example, for each detected object, the system may check whether that object exists in the map data. In particular embodiments, the system may generate a confidence score representing the likelihood of the detected object being accounted for in the map data. The confidence score may be based on, for example, a similarity comparison of the measured size, dimensions, classification, and/or location of the detected object with known objects in the map data. 
At step 745, if the comparison results in a determination that the detected object(s) exists or is known in the HD map (e.g., the confidence score in the object existing in the map is higher than a threshold), then the system may not perform any map-updating operation and return to obtaining sensor data (e.g., step 710). On the other hand, if the comparison results in a determination that at least one detected object does not exist or is not known in the HD map (e.g., the confidence score in the object existing in the map is lower than a threshold), then the system may proceed with a map-updating operation. See also at least paragraphs 74-77).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of Arditi which teaches updating the second lane on the HD digital map data using the first lane obtained from the front view digital image data captured by the front view image sensor based on the result of the lane matching; updating the second road marker on the HD digital map data using the first road marker obtained from the captured surround view digital image data captured by the SVM image sensor based on the result of the road marker matching; and updating the second point on the HD digital map data corresponding to the feature using the first point corresponding to the feature obtained from the point cloud captured by the LiDAR sensor based on the result of the feature matching with the system of PARK as both systems are directed to a system and method for controlling the autonomous vehicle based on the sensor data and map data, and one of ordinary skill in the art would have recognized the established utility of updating the second lane on the HD digital map data using the first lane obtained from the front view digital image data captured by the front view image sensor based on the result of the lane matching; updating the second road marker on the HD digital map data using the first road marker obtained from the captured surround view digital image data captured by the SVM image sensor based on the result of the road marker matching; and updating the second point on the HD digital map data corresponding to the feature using the first point corresponding to the feature obtained from the point cloud captured by the LiDAR sensor based on the result of the feature matching and would have predictably applied it to improve the system of PARK.
As to claim 11, PARK teaches wherein the HD digital map data updating operation is performed by an apparatus connected to the vehicle through a wireless communication network or performed by an electronic control unit (ECU) mounted in the vehicle (see at least paragraph 126 regarding the vehicle 1 includes the obstacle detector 120, the plurality of cameras 130, the display 140, the speaker 150, a first communication interface 170, a first processor 180, and a first memory 181. See also at least paragraphs 326-327 regarding when the instructions are executed by a processor, a program module is generated by the instructions so that the operations of the disclosed implementations may be carried out. The recording medium may be implemented as a computer-readable recording medium, PARK).
As to claim 12, Examiner notes claim 12 recites similar limitations to claim 1 and is rejected under the same rationale.
As to claim 13, Examiner notes claim 13 recites similar limitations to claim 2 and is rejected under the same rationale.
As to claim 14, Examiner notes claim 14 recites similar limitations to claim 3 and is rejected under the same rationale.
As to claim 16, Examiner notes claim 16 recites similar limitations to claim 5 and is rejected under the same rationale.
As to claim 17, Examiner notes claim 17 recites similar limitations to claim 6 and is rejected under the same rationale.
As to claim 18, Examiner notes claim 18 recites similar limitations to claim 7 and is rejected under the same rationale.
As to claim 19, Examiner notes claim 19 recites similar limitations to claim 8 and is rejected under the same rationale.
As to claim 21, PARK teaches a high-definition (HD) digital map data management system comprising:
at least one vehicle, each of the at least one vehicle equipped with (see at least FIG. 3 and Abstract, PARK):
a satellite-based positioning module, controlled by the at least one processor, configured to measure a position value of the at least one vehicle (see at least paragraph 18 regarding the processor may recognize an object around the vehicle based on the map information matched with the current location information of the vehicle and the view image, and transmit information on the recognized object and the distance information with the mobile device to the mobile device. See also at least paragraphs 183-185 regarding receiving location information regarding the current location of the vehicle and outputs the received location information. The GPS receiver includes an antenna module for receiving signals from a plurality of GPS satellites and a signal processing module, PARK),
a front view image sensor configured to capture a front view image of the vehicle (see at least paragraphs 103-112 regarding the front camera 130a may be a camera provided in a black box, a camera of an autonomous driving control device for autonomous driving, or a camera for detecting obstacles. See also at least paragraphs 145-159 regarding the front camera 130a may obtain an image in a forward direction of the vehicle, PARK),
a Light Detection and Ranging (LiDAR) sensor configured to capture a point cloud of the vehicle using a laser (see at least paragraphs 126-139 regarding the obstacle detector 120 may include one or more LiDAR sensors. The LiDAR sensors may include a transmitter for transmitting a laser and a receiver for receiving the laser that is reflected on a surface of an obstacle existing within the range of the sensor and returned, PARK), and
a surround view monitor (SVM) image sensor configured to capture a surround view digital image data of the vehicle including digital data on a side lane mark positioned laterally next to the vehicle (see at least paragraphs 103-112 regarding the first, second, third, and fourth cameras may be a camera of a surround monitoring device (e.g., surround view monitor (SVM) or around view monitor (AVM)), or a camera of a blind spot detection device (BSD). See also at least paragraphs 145-159 regarding recognizing the shape information of the objects, such as other vehicles, pedestrians, cyclists, lanes, curbs, guardrails, street trees, and streetlights located in front of the vehicle 1 and location information of at least one object. The front camera 130a and the first, second, third, and fourth cameras 131-134 may convert shape information of objects around the vehicle into electrical image signals, PARK),
a computer-based apparatus comprising (see at least Abstract, PARK):
a memory (see at least paragraph 126 regarding a first memory 181, PARK);
a wireless communication network, connected to the at least one vehicle, configured to wirelessly receive the position value of the at least one vehicle and the captured surround view digital image data from the at least one vehicle (see at least paragraphs 126-128 regarding the vehicle 1 includes the obstacle detector 120, the plurality of cameras 130, the display 140, the speaker 150, a first communication interface 170, a first processor 180, and a first memory 181. The obstacle detector 120 may detect obstacles around the vehicle and transmit the detected obstacle information to the first processor 180. The obstacle information may include location information of the obstacle and shape information of the obstacle. See also at least paragraphs 174-186 regarding the first processor 180 may determine the current location information of the vehicle received through the first communication interface as location information of a departure, search for a route to the current location of the mobile device based on the current location information of the mobile device and the location information of the departure received by the first communication interface, and control autonomous driving to the current location of the mobile device based on route information, road information, and map information on the searched route, PARK).
PARK teaches the first processor 180 may search for a route from the current location of the vehicle to a destination based on the destination information and the current location information of the vehicle received from the first communication interface 170, match the route information for the searched route and the road information with the map information, generate navigation information from the map information in which the route information and the road information are matched, and control autonomous driving based on the generated navigation information (see at least paragraph 187, PARK), however, PARK does not explicitly teach a memory storing a predefined HD digital map data; or performing HD digital map data updating operation for updating the predefined HD digital map data, wherein the HD digital map data updating operation include: selecting a portion from the predefined HD map, corresponding to the position value of the at least one vehicle, which includes an in-map lane mark, processing the captured surround view digital image data including digital data on the captured side lane mark and processing the selected portion of the HD digital map data including digital data on an in-map lane mark, to generate digital image processing comparison data.
However, Arditi teaches a memory storing a predefined HD digital map data (see at least paragraphs 46-51 regarding accessing the HD map stored on the autonomous vehicle. See also at least paragraphs 80-82 regarding computer system 1000 includes a processor 1002, memory 1004, storage 1006, an input/output (I/O) interface 1008, a communication interface 1010, and a bus 1012); and performing HD digital map data updating operation for updating the predefined HD digital map data (see at least paragraphs 46-51 regarding the system may proceed with a map-updating operation), wherein the HD digital map data updating operation include: selecting a portion from the predefined HD map, corresponding to the position value of the at least one vehicle, which includes an in-map lane mark (see at least paragraphs 46-51 regarding the computing system may access the HD map stored on the autonomous vehicle for map data associated with the particular location (e.g., x, y coordinates). Then at step 740, the computing system may compare the map data associated with the location (e.g., x, y coordinates) with the object detected in step 720 to determine whether the detected objects exist in the map data), processing the captured surround view digital image data including digital data on the captured side lane mark and processing the selected portion of the HD digital map data including digital data on an in-map lane mark, to generate digital image processing comparison data (see at least paragraph 14. See also at least paragraphs 46-51 regarding at step 720, the computing system of the autonomous vehicle may process the sensor data to identify any objects of interest. At step 730, the computing system may access the HD map stored on the autonomous vehicle for map data associated with the particular location (e.g., x, y coordinates). 
Then at step 740, the computing system may compare the map data associated with the location (e.g., x, y coordinates) with the object detected in step 720 to determine whether the detected objects exist in the map data. For example, for each detected object, the system may check whether that object exists in the map data. In particular embodiments, the system may generate a confidence score representing the likelihood of the detected object being accounted for in the map data. The confidence score may be based on, for example, a similarity comparison of the measured size, dimensions, classification, and/or location of the detected object with known objects in the map data. See also at least paragraph 74 regarding the cameras may be used for, e.g., recognizing roads, lane markings, street signs, traffic lights, police, other vehicles, and any other visible objects of interest. … For example, an autonomous vehicle 940 may build a 3D model of its surrounding based on data from its LiDAR, radar, sonar, and cameras, along with a pre-generated map obtained from the transportation management system 960 or the third-party system 970).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of Arditi which teaches a memory storing a predefined HD digital map data; and performing HD digital map data updating operation for updating the predefined HD digital map data, wherein the HD digital map data updating operation include: selecting a portion from the predefined HD map, corresponding to the position value of the at least one vehicle, which includes an in-map lane mark, processing the captured surround view digital image data including digital data on the captured side lane mark and processing the selected portion of the HD digital map data including digital data on an in-map lane mark, to generate digital image processing comparison data with the system of PARK as both systems are directed to a system and method for controlling the autonomous vehicle based on the sensor data and map data, and one of ordinary skill in the art would have recognized the established utility of having a memory storing a predefined HD digital map data; and performing HD digital map data updating operation for updating the predefined HD digital map data, wherein the HD digital map data updating operation include: selecting a portion from the predefined HD map, corresponding to the position value of the at least one vehicle, which includes an in-map lane mark, processing the captured surround view digital image data including digital data on the captured side lane mark and processing the selected portion of the HD digital map data including digital data on an in-map lane mark, to generate digital image processing comparison data and would have predictably applied it to improve the system of PARK.
PARK, as modified by Arditi, does not explicitly teach generating modified digital map data of the HD digital map data to update digital data on a size and a location of the in-map lane mark in the HD digital map data based on the digital image processing comparison data.
However, such matter is taught by SHASHUA (see at least paragraphs 511-515 regarding receiving, from the one or more sensors, outputs indicative of a motion of vehicle 1205. Based on analysis of images output from camera 122, processor 1715 may identify landmarks along road segment 1200. Landmarks may include traffic signs (e.g., speed limit signs), directional signs (e.g., highway directional signs pointing to different routes or places), and general signs (e.g., a rectangular business sign that is associated with a unique signature, such as a color pattern). The identified landmark may be compared with the landmark stored in sparse map 800. Processor 1715 may analyze the at least one environmental image to determine information associated with at least one navigational constraint. The navigational constraint may include at least one of a barrier (e.g., a lane separating barrier), an object (e.g., a pedestrian, a lamppost, a traffic light post), a lane marking (e.g., a solid yellow lane marking), a sign (e.g., a traffic sign, a directional sign, a general sign), or another vehicle (e.g., a leading vehicle, a following vehicle, a vehicle that is traveling on the side of vehicle 1205). See also at least paragraphs 525-539. See also at least paragraphs 550-552 regarding receiving an identifier associated with a landmark (step 2810). For example, processor 2232 may receive at least one identifier associated with a landmark from autonomous vehicle 2201 or 2202. Process 2800 may include associating the landmark with a corresponding road segment (step 2820). For example, processor 2232 may associate landmark 2206 with road segment 2200. Process 2800 may include updating an autonomous vehicle road navigation model to include the identifier associated with the landmark (step 2830). 
For example, processor 2232 may update the autonomous vehicle road navigation model to include an identifier (including, e.g., position information, size, shape, pattern) associated with landmark 2205 in the model. In some embodiments, processor 2232 may also update sparse map 800 to include the identifier associated with landmark 2205. See also at least paragraphs 581-583).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of SHASHUA which teaches generating modified digital map data of the HD digital map data to update digital data on a size and a location of the in-map lane mark in the HD digital map data based on the digital image processing comparison data with the system of PARK, as modified by Arditi, as both systems are directed to a system and method for controlling the autonomous vehicle based on the sensor data and map data, and one of ordinary skill in the art would have recognized the established utility of generating modified digital map data of the HD digital map data to update digital data on a size and a location of the in-map lane mark in the HD digital map data based on the digital image processing comparison data and would have predictably applied it to improve the system of PARK as modified by Arditi.
As to claim 22, Examiner notes claim 22 recites similar limitations to claim 2 and is rejected under the same rationale.
As to claim 23, Examiner notes claim 23 recites similar limitations to claim 3 and is rejected under the same rationale.
As to claim 25, Examiner notes claim 25 recites similar limitations to claim 5 and is rejected under the same rationale.
As to claim 26, Examiner notes claim 26 recites similar limitations to claim 6 and is rejected under the same rationale.
As to claim 27, Examiner notes claim 27 recites similar limitations to claim 7 and is rejected under the same rationale.
As to claim 28, Examiner notes claim 28 recites similar limitations to claim 8 and is rejected under the same rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Akbarzadeh et al. (US 20220333950 A1) regarding a system for vehicle-based determination of HD map update information.
KIM et al. (US 20210009158 A1) regarding a system for guiding the vehicle based on the HD map.
Chandra et al. (US 20200207356 A1) regarding a system for identifying a speed bump for an autonomous vehicle on a roadway.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE S. PARK whose telephone number is (571)272-3151. The examiner can normally be reached Mon-Thurs 9:00AM-5:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne M ANTONUCCI can be reached at (313)446-6519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K.S.P./Examiner, Art Unit 3666
/ANNE MARIE ANTONUCCI/Supervisory Patent Examiner, Art Unit 3666