DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/03/2025 has been entered.
Status of the Claims
Claims 1-20 of U.S. Application No. 17/809,491, filed on 06/28/2022, were examined. The Examiner issued a non-final rejection on 06/04/2024.
Applicant filed remarks and amendments on 09/03/2024, amending claims 1, 5, 7, 8, 12, 14, 15, and 19 and cancelling claims 4, 11, and 18. Claims 1-3, 5-10, 12-17, and 19-20 were examined, and the Examiner issued a final rejection on 12/03/2024.
Applicant filed an RCE on 03/03/2025, cancelling claims 2, 3, 9, 10, 16, and 17 and amending claims 1, 8, and 15. Claims 1, 5-8, 12-15, and 19-20 are presently pending and presented for examination.
Response to Arguments
Regarding the claim rejections under 35 USC 103: Applicant's arguments filed 03/03/2025 with respect to Soni (US 20200193157 A1) in view of Zang et al. (US 20190130182 A1) have been fully considered but are not persuasive.
Regarding claims 1, 8, and 15, Applicant argues that Soni and Zang fail to disclose or teach certain limitations of amended claim 1, specifically “determining a target road surface in the road image, wherein the target road surface is a road surface matching the road line in first road network data; expanding the road line into a road surface with a specified width, to obtain third road network data; matching the third road network data with the road image, to determine a candidate road surface, which overlaps with the road surface with the specified width, in the road image, to determine a target road surface.” Applicant contends that Soni is silent about navigating based on road network data adjusted based on a road image (see Applicant’s Remarks, pages 2-3, 6, 17) and that Zang does not teach matching virtual road lines in the base map with road surfaces in road images, or matching based on overlapping areas (see Applicant’s Remarks, pages 7-8, 17).
However, the Examiner respectfully disagrees. Regarding Applicant's argument about determining a target road surface matching the road line in first road network data, Soni (paragraph [0057]) discloses modifying an aerial image 51 to include only the road network 54 by applying a filter on highlighted road surfaces, which inherently involves matching the road line to a road surface. This is further supported by Soni (paragraph [0061]), where the visual road line is expanded into a road surface with a specified width, aligning with the claimed step of obtaining third road network data.
On the issue of adjusting road network data, Soni (paragraph [0057]) describes how the aerial image editor 38 may modify or discard portions of the aerial image 51 based on road surface data, indicating an adjustment process. This contradicts Applicant's assertion that Soni is silent on navigation, as Soni's modified data supports navigation improvements (see Soni, [0057]).
The Examiner also relies on Zang (paragraph [0064]), which states: “When the classifier 142 includes a neural network, the classifier 142 analyzes the image systematically through the multiple parameters assigned to the multiple layers of the neural network. The neural network may provide the probability value for each pixel of the target region or each group of pixels within the roadway boundary.” This demonstrates determining a road boundary from a road image, supporting the matching process. Additionally, Zang (paragraphs [0113]-[0114]) describes using map matching values to determine optimal routes, which involves analyzing road segments and probe data, further corroborating the Examiner's interpretation of matching based on image analysis.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 5-8, 12-15 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Soni (US 20200193157 A1) in view of Zang et al. (US 20190130182 A1), hereinafter referred to as Soni and Zang respectively.
Regarding claims 1, 8 and 15, Soni discloses A road network data processing method (“the road network module configured to identify map data corresponding to at least one path, and the map data defines the predetermined path geometry.” [0005]), comprising:
determining a position information of a road line in first road network data by using the first network data, the first road network data is line data of a basic base map of an electronic map, and the road line in the first road network data is a visual line representing a real road (“The HAD vehicle may control the vehicle through steering or braking in response to the on the position of the vehicle and may respond to the lane feature classifications and other geographic data received from geographic database 123 and the server 125 to generate driving commands or navigation commands.” [0093]);
wherein the target road surface is the road surface matching a road line in first road network data (“The road network module 37 may perform template matching or another image processing technique to analyze the road in the image data 33 that is identified from the location of the road network in the map data 31. The image processing algorithm may include segmentation of a road surface or filtering of pixel values for a centerline for the at least one path, and the width value is derived from an output of the image processing algorithm.” [0053]),
adjusting the road line in the first road network data to a road surface corresponding to a road width of the target road surface in the road image (“FIG. 4 illustrates the highlighted road surface in the image after calculating the road width. The aerial image editor 38 may modify or discard portions of the aerial image 51 by applying a filter on highlighted road surface obtained by adding the calculated road width/2 on each side of road centerline geometry plotted on the aerial image 51.” [0057]),
to obtain second road network data of the basic base map (“As a result one or more road segments in the map data 31 are assigned to pixels in the aerial image such that the lane feature controller 121 identifies portions in the image data 33 that correspond to the road network of the map data 31.” [0055]);
navigating based on the second road network data, wherein the adjusting the road line in the first road network data to a road surface corresponding to a road width of the target road surface in the road image (Soni discloses “In another embodiment, an apparatus for lane feature detection from an image according to predetermined path geometry includes a road network module, an aerial image editor, and a lane feature module. The road network module configured to identify map data corresponding to at least one path, and the map data defines the predetermined path geometry. The aerial image editor is configured to modify the image according to the map data including the predetermined path geometry. The lane feature module is trained according to the modified image and configured to identify at least one lane feature from a subsequent image.” [0005]), comprises:
determining the road width of the target road surface (“The lane feature controller 121 may calculate the width value based on probe data. Traces or series of probe data collected at probes 101 may be analyzed to determine the width value.” [0052]);
expanding the road line symmetrically into the road surface corresponding to the road width of the target road surface, based on the road width of the target road surface, by taking the road line as a center (“The road network module 37 may calculate the predetermined path geometry by constructing a polygon from the centerline and the calculated width of the road. The road network module 37 may convert the centerline coordinates into pixels using the scaling factor. The calculated width, converted to pixels, is divided by two (or another value depending on the number of lanes and other road attributes from the map data 31), and the result is added to the centerline coordinates. This forms the boundary of a polygon that represents a patch of the road.” [0054]).
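For illustration of the symmetric expansion step cited above, the following is a minimal Python sketch. It is not taken from Soni or the claims; the function name, the polyline representation, and the one-sided differences used for direction are illustrative assumptions. It offsets a centerline by half the road width on each side and joins the offsets into a surface polygon, mirroring the width/2 construction in Soni [0054].

```python
import math

# Illustrative sketch only: expand a road centerline symmetrically into a
# road-surface polygon by offsetting width/2 perpendicular to the line on
# each side (compare Soni [0054], centerline +/- calculated width / 2).
def expand_centerline(points, width):
    half = width / 2.0
    left, right = [], []
    for i, (x, y) in enumerate(points):
        # Local direction from neighboring vertices (one-sided at the ends).
        x0, y0 = points[max(i - 1, 0)]
        x1, y1 = points[min(i + 1, len(points) - 1)]
        dx, dy = x1 - x0, y1 - y0
        norm = math.hypot(dx, dy) or 1.0
        # Unit normal perpendicular to the direction of travel.
        nx, ny = -dy / norm, dx / norm
        left.append((x + nx * half, y + ny * half))
        right.append((x - nx * half, y - ny * half))
    # Left offset followed by the reversed right offset closes the polygon.
    return left + right[::-1]

# A straight centerline of width 4 expands to a 4-unit-wide rectangle.
surface = expand_centerline([(0.0, 0.0), (10.0, 0.0)], 4.0)
```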
Soni does not explicitly teach determining a partial satellite image at a position corresponding to the position information of the road line from the satellite image, as a partial satellite image of the real road represented by the road line in the satellite image;
determining a target road surface in a road image;
performing semantic segmentation on the partial satellite image based on a trained U-shaped full convolution network, to extract a road image of the real road from the partial satellite image;
and navigating based on the second road network data, wherein the determining the target road surface in the road image, comprises: expanding the road line into a road surface with a specified width, to obtain third road network data;
matching the third road network data with the road image, to determine a candidate road surface, which overlaps with the road surface with the specified width, in the road image;
calculating a road surface overlapping area between the road surface with the specified width and a corresponding candidate road surface;
and determining the target road surface in the candidate road surface according to the road surface overlapping area and a corresponding set threshold.
However, Zang does teach determining a partial satellite image (“Although the satellite image resolution is close to the lane marking width, the lane markings may appear blurred due to image compression, hardware imperfections (imperfect lenses, etc.), and optical limitations (i.e. angular resolution).” [0072]) at a position corresponding to the position information of the road line from the satellite image, as a partial satellite image of the real road represented by the road line in the satellite image (“A sliding window is designed to crop training patches from corresponding satellite image within the road surface. The label for each patch is determined by whether there are any lane marking pixels in the current patch. To reduce misleading ground truth patches (e.g. the patch contains two independent lines), an appropriate window size may be thinner than a single lane width.” [0060]);
determining a target road surface in a road image (“When the classifier 142 includes a neural network, the classifier 142 analyzes the image systematically through the multiple parameters assigned to the multiple layers of the neural network. The neural network may provide the probability value for each pixel of the target region or each group of pixels within the roadway boundary.” [0064]);
performing semantic segmentation on the partial satellite image based on a trained U-shaped full convolution network, to extract a road image of the real road from the partial satellite image (“The following embodiments automatically extract lane boundary from overhead imagery using pixel-wise segmentation and machine learning, and convert unstructured lines into structured road model by using hypothesis linking algorithm.” [0036]);
and navigating based on the second road network data, wherein the determining the target road surface in the road image, comprises: expanding the road line into a road surface with a specified width, to obtain third road network data (“To reduce noise (e.g. lane markings pixel from adjacent road surfaces), the surface region may be bounded by road boundaries. That is, the images 140 are filtered according to the road boundaries in the ground truth data 141, images outside of the road boundaries are removed before defining the positive patches 160 and negative patches 161. The road boundaries may be a set distances from the center line geometry of the road segments. The positive patches 160 and negative patches 161 are selected from the portions of images within the road boundaries, which is within the road surface. “ [0059]);
matching the third road network data with the road image, to determine a candidate road surface, which overlaps with the road surface with the specified width, in the road image (“Using input(s) including map matching values from the server 125, a mobile device 122 examines potential routes between the origin location and the destination location to determine the optimum route. The navigation device 122 may then provide the end user with information about the optimum route in the form of guidance that identifies the maneuvers required to be taken by the end user to travel from the origin to the destination location. Some mobile device 122 show detailed maps on display 211 outlining the route, the types of maneuvers to be taken at various locations along the route, locations of certain types of features, and so on, any of which may include the lane line objects for lane marking or roadside objects.” [0113]);
calculating a road surface overlapping area between the road surface with the specified width and a corresponding candidate road surface (“ That is, the images 140 are filtered according to the road boundaries in the ground truth data 141, images outside of the road boundaries are removed before defining the positive patches 160 and negative patches 161. The road boundaries may be a set distances from the center line geometry of the road segments. The positive patches 160 and negative patches 161 are selected from the portions of images within the road boundaries, which is within the road surface. “ [0059]);
and determining the target road surface in the candidate road surface according to the road surface overlapping area and a corresponding set threshold (“The ground truth data 141 may include a set of data that associates images using an image identifier with pixel coordinates for the locations of the lane markings. FIG. 4 illustrates example overhead images 150 with overlaid ground truth lane markings including a continuous object or solid lane line 135 or a semi-continuous object or a dash lane line 131.” [0054]). Both Soni and Zang teach methods of road network data processing. However, Zang explicitly teaches a method that extracts a road image from a partial satellite image using a trained U-shaped convolution network, expands road lines into a surface to match with road image data, and determines the target road surface based on overlapping area for navigation.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the lane feature detection method of Soni to also include a method that extracts a road image from a partial satellite image using a trained U-shaped convolution network, expands road lines into a surface to match with road image data, and determines the target road surface based on overlapping area for navigation, as in Zang. Doing so improves the quality of road network and mapping data (with regard to this reasoning, see at least Zang, [0002]-[0007]).
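The claimed matching steps discussed above (expanding the road line to a specified width, calculating the road surface overlapping area with each candidate road surface, and comparing against a set threshold) can be sketched as follows. This is an illustrative Python sketch only, not the claimed method nor either reference's disclosure; the axis-aligned rectangle representation and the function names are assumptions made for brevity.

```python
# Illustrative sketch only: match an expanded road line against candidate
# road surfaces by overlapping area and keep those meeting a set threshold.
# Rectangles are axis-aligned (x0, y0, x1, y1); this representation is an
# assumption for brevity, not from the claims or the cited references.
def overlap_area(a, b):
    # Area of intersection of two axis-aligned rectangles (0.0 if disjoint).
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0.0) * max(h, 0.0)

def match_target_surface(expanded_line, candidates, threshold):
    # Keep every candidate whose overlap with the expanded line meets the
    # threshold; the surviving candidates are the target road surfaces.
    return [c for c in candidates if overlap_area(expanded_line, c) >= threshold]

# Road line expanded to a specified width of 4 over a length of 10 (area 40).
line_surface = (0.0, -2.0, 10.0, 2.0)
candidates = [(0.0, -3.0, 10.0, 3.0),   # overlaps the line over its full length
              (20.0, -2.0, 30.0, 2.0)]  # disjoint surface elsewhere in the image
targets = match_target_surface(line_surface, candidates, threshold=20.0)
```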
Regarding claims 5 and 12, Soni discloses The method of claim 1, wherein in a case where there are adjacent road lines matching a same target road surface, the determining the road width of the target road surface comprises:
determining the adjacent road lines as up and down road lines required to be merged (“The restrictions for traveling the roads or intersections may include turn restrictions, travel direction restrictions, speed limits, lane travel restrictions or other restrictions. Turn restrictions define when a road segment may be traversed onto another adjacent road segment.” [0098] and “Travel direction restriction designate the direction of travel on a road segment or a lane of the road segment. The travel direction restriction may designate a cardinal direction (e.g., north, southwest, etc.) or may designate a direction from one node to another node. The roadway features may include the number of lanes, the width of the lanes, the functional classification of the road, or other features that describe the road represented by the road segment. The functional classifications of roads may include different levels accessibility and speed. An arterial road has low accessibility but is the fastest mode of travel between two points.” [0099]);
and calculating the road width of the target road surface according to the first distances and the second distance (“The road network module 37 may calculate the predetermined path geometry by constructing a polygon from the centerline and the calculated width of the road.” [0054]);
Soni does not explicitly teach determining first distances from the up and down road lines required to be merged to respective adjacent roadside sidelines, wherein the adjacent roadside sidelines are roadside sidelines of the target road surface;
and determining a second distance between the adjacent road lines.
However, Zang does teach determining first distances from the up and down road lines required to be merged to respective adjacent roadside sidelines, wherein the adjacent roadside sidelines are roadside sidelines of the target road surface (“the images 140 are filtered according to the road boundaries in the ground truth data 141, images outside of the road boundaries are removed before defining the positive patches 160 and negative patches 161. The road boundaries may be a set distances from the center line geometry of the road segments. The positive patches 160 and negative patches 161 are selected from the portions of images within the road boundaries, which is within the road surface.” [0059]);
determining a second distance between the adjacent road lines (“The road boundaries may be a set distances from the center line geometry of the road segments.” [0059]). Both Soni and Zang teach methods of road network data processing. However, Zang explicitly teaches determining first distances from the up and down road lines required to be merged to respective adjacent roadside sidelines of the target road surface and determining a second distance between the adjacent road lines.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the lane feature detection method of Soni to also include determining first distances from the up and down road lines required to be merged to respective adjacent roadside sidelines of the target road surface and determining a second distance between the adjacent road lines, as in Zang. Doing so improves the quality of road network and mapping data (with regard to this reasoning, see at least Zang, [0002]-[0007]).
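Claims 5 and 12 recite calculating the merged road width “according to the first distances and the second distance” without reciting a formula. As an illustrative assumption only (not a construction of the claims), one arithmetically plausible reading sums the two first distances with the second distance:

```python
# Illustrative assumption only: one plausible arithmetic reading of claims 5
# and 12, where the merged road width is the sum of the two first distances
# (each merged line to its adjacent roadside sideline) and the second
# distance (between the adjacent up and down road lines).
def merged_road_width(first_distances, second_distance):
    return sum(first_distances) + second_distance

width = merged_road_width((3.5, 3.5), 5.0)  # 3.5 + 5.0 + 3.5 = 12.0
```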
Regarding claims 6, 13 and 20, Soni discloses The method of claim 5, wherein the expanding the road line symmetrically into the road surface corresponding to the road width of the target road surface, based on the road width of the target road surface, by taking the road line as the center, comprises:
expanding the adjacent road lines symmetrically into the road surface corresponding to the road width of the target road surface, by taking the adjacent road lines as the center (“The road network module 37 may calculate the predetermined path geometry by constructing a polygon from the centerline and the calculated width of the road. The road network module 37 may convert the centerline coordinates into pixels using the scaling factor. The calculated width, converted to pixels, is divided by two (or another value depending on the number of lanes and other road attributes from the map data 31), and the result is added to the centerline coordinates. This forms the boundary of a polygon that represents a patch of the road.” [0054]).
Regarding claims 7, 14 and 19, Soni discloses The method of claim 1, wherein in a case where there are no adjacent road lines matching a same target road surface, the determining the road width of the target road surface comprises:
and calculating the road width of the target road surface according to the third distances (“The road network module 37 may calculate the predetermined path geometry by constructing a polygon from the centerline and the calculated width of the road.” [0054]).
Soni does not explicitly teach determining third distances from the road line to two roadside sidelines of the target road surface.
However, Zang does teach determining third distances from the road line to two roadside sidelines of the target road surface (“The road boundaries may be a set distances from the center line geometry of the road segments.” [0059]). Both Soni and Zang teach methods of road network data processing. However, Zang explicitly teaches determining third distances from the road line to two roadside sidelines of the target road surface.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the lane feature detection method of Soni to also include determining third distances from the road line to two roadside sidelines of the target road surface, as in Zang. Doing so improves the quality of road network and mapping data (with regard to this reasoning, see at least Zang, [0002]-[0007]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHMED ALKIRSH whose telephone number is (703) 756-4503. The examiner can normally be reached M-F 9:00 am-5:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, FADEY JABR can be reached on (571) 272-1516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AA/Examiner, Art Unit 3668
/Fadey S. Jabr/Supervisory Patent Examiner, Art Unit 3668