DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
• This action is in reply to Application No. 18/722,852, filed on 06/21/2024.
• Claims 1-20 are currently pending and have been examined.
• This action is made NON-FINAL.
• The examiner would like to note that this application is now being handled by examiner Kai Wang.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d).
The certified copy has been filed in Application No. 18/722,852, filed on 06/21/2024.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 06/21/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
Claims 19 and 20 are objected to because of the following: claim 19 is directed to a data processing apparatus and claim 20 is directed to a computer program product, but both depend from claim 10, which is directed to a method. Claims 19 and 20 therefore appear to be directed to two separate (but not distinct) inventions. It is recommended that claims 19 and 20 be rewritten in independent form, each including all of the limitations of claim 10.
CLAIM INTERPRETATION
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “input device” in claim 1.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1 and 10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1 and 10 recite the limitation “and generate binary image data of the road segments from the remotely captured geographical image data using a semantic segmentation task”. It is not clear what “a semantic segmentation task” refers to: is it an algorithm or a software component? Appropriate correction and/or clarification is required. For purposes of examination, the Office will interpret the limitation as any teaching of an algorithm relating to the binary image data disclosed by the references.
Claims 2-9 and 11-20 are rejected due to their dependence on independent claims 1 and 10, respectively.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim is directed to a “computer program product.” The broadest reasonable interpretation of this phrase includes transitory media, such as signals and carrier waves, which are non-statutory. The Office recommends amending the preamble of the claim to recite the term “non-transitory.”
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 5-8, 10, 14-17, 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Fleisig (US20210342585A1) in view of Chase (US20190346857A1).
Regarding Claims 1, 10 and 19-20:
Fleisig teaches:
A system for detecting information about a road relating to digital geographical map data for an area including a plurality of road segments, the system comprising: an input device configured to obtain remotely captured geographical image data for the area; (Fleisig, abstract, “a pixel map based on an aerial or satellite image;”, para[23], “using a neural network (e.g., convolutional neural networks (CNN)) to perform semantic segmentation of objects in satellite imagery…This pixel map may show what type or class of object each pixel in the image is part of (e.g., road,”, and para[69], “information component 30 may download an area of interest (AOI) or entire satellite imagery (e.g., from a Digital Globe satellite database into storage 22)”)
and a processor configured to generate ground truth image data from the digital geographical map data, (Fleisig, para[30], “prediction component 34 may collect all roads from the 2019 imagery and collect all roads from the 2020 imagery”, and para[69], “information component 30 may download an area of interest (AOI) or entire satellite imagery)”)
and generate binary image data of the road segments from the remotely captured geographical image data using a semantic segmentation task, (Fleisig, para[122], “a pixel map may be predicted, via a machine learning model. The pixel map may include pixels… that has a binary value assigned to each pixel”, para[23], “using a neural network (e.g., convolutional neural networks (CNN)) to perform semantic segmentation of objects in satellite imagery”)
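Examiner note (illustrative only): the generation of binary image data from a segmentation output, as quoted above, amounts to thresholding per-pixel class scores into a road mask. The function below is a hypothetical sketch and does not appear in any cited reference:

```python
def binarize_road_mask(prob_map, threshold=0.5):
    """Threshold per-pixel 'road' scores (e.g., from a segmentation CNN)
    into a binary road mask: 1 = road, 0 = background."""
    return [[1 if p >= threshold else 0 for p in row] for row in prob_map]
```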
wherein the processor is further configured to: skeletonize the binary image data to generate skeletonized binary image data including a center line of each road segment of the road segments, (Fleisig, para[91], “raster phase component 36 may perform skeletonization, as exemplarily depicted in FIG. 5. For example, this component may employ the Zhang-Suen thinning algorithm to produce a pixel map, each road being one pixel thick”)
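Examiner note (illustrative only): the Zhang-Suen thinning algorithm named in Fleisig at para[91] iteratively deletes boundary pixels in two sub-passes until each road is one pixel thick. The following is a simplified, hypothetical sketch of that published algorithm, not code from the reference:

```python
def zhang_suen_thin(img):
    """Thin a binary image (list of lists of 0/1) to a one-pixel-wide
    skeleton using the two-pass Zhang-Suen algorithm."""
    img = [row[:] for row in img]
    rows, cols = len(img), len(img[0])

    def neighbours(r, c):
        # P2..P9, clockwise, starting from the pixel directly above P1.
        return [img[r - 1][c], img[r - 1][c + 1], img[r][c + 1],
                img[r + 1][c + 1], img[r + 1][c], img[r + 1][c - 1],
                img[r][c - 1], img[r - 1][c - 1]]

    def transitions(n):
        # Number of 0 -> 1 transitions in the circular sequence P2..P9,P2.
        return sum(1 for a, b in zip(n, n[1:] + n[:1]) if (a, b) == (0, 1))

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for r in range(1, rows - 1):
                for c in range(1, cols - 1):
                    if img[r][c] != 1:
                        continue
                    n = neighbours(r, c)
                    p2, p4, p6, p8 = n[0], n[2], n[4], n[6]
                    if not (2 <= sum(n) <= 6 and transitions(n) == 1):
                        continue
                    if step == 0:
                        ok = p2 * p4 * p6 == 0 and p4 * p6 * p8 == 0
                    else:
                        ok = p2 * p4 * p8 == 0 and p2 * p6 * p8 == 0
                    if ok:
                        to_delete.append((r, c))
            for r, c in to_delete:
                img[r][c] = 0
                changed = True
    return img
```

Applied to a three-pixel-thick bar, the sketch reduces it to a one-pixel-thick center line, consistent with the quoted teaching.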
detect a first road segment missing from the digital geographical map data by converting the skeletonized binary image data to a graph structure of the road segments (Fleisig, para[102], “A connectivity graph may be created”, para[103], “a connectivity graph indicating where all the roads are”, para[30], “information component 30 … determine changes in the road network between 2019 and 2020.”)
and comparing the graph structure of the road segments with the ground truth image data, (Fleisig, para[30], “information component 30 may compare the vector file from 2019 with the vector file from 2020 to determine changes in the road network between 2019 and 2020.”)
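Examiner note (illustrative only): converting a one-pixel-thick skeleton into a graph structure, and comparing that graph against map data to flag missing segments, can be sketched as follows. The functions are hypothetical illustrations, not taken from any cited reference:

```python
def skeleton_to_graph(pixels):
    """Build a graph from skeleton pixels: each pixel is a node, with an
    edge between every pair of 8-connected neighbouring pixels."""
    nodes = set(pixels)
    edges = set()
    for (r, c) in nodes:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nb = (r + dr, c + dc)
                if (dr, dc) != (0, 0) and nb in nodes:
                    edges.add(frozenset(((r, c), nb)))
    return nodes, edges


def missing_segments(detected_edges, map_edges):
    """Edges present in the detected road graph but absent from the map
    graph are candidate road segments missing from the map."""
    return detected_edges - map_edges
```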
detect a road width of each road segment of the road segments from the binary image data and the center line of each road segment of the road segments; (Fleisig, para[119], “identifying specific types of roads. … identifying other aspects, such as the width of the particular road”, and para[99], “a method may be previously performed to produce road network centerline”)
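Examiner note (illustrative only): a road width at a centerline pixel can be approximated as twice the distance from that pixel to the nearest background pixel of the binary image. This brute-force sketch is a hypothetical illustration, not the reference’s method:

```python
import math


def road_width_at(binary, centerline_pixel):
    """Estimate road width at a centerline pixel as twice the distance to
    the nearest background (0) pixel in the binary road image."""
    r0, c0 = centerline_pixel
    best = math.inf
    for r, row in enumerate(binary):
        for c, value in enumerate(row):
            if value == 0:
                best = min(best, math.hypot(r - r0, c - c0))
    return 2.0 * best
```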
Fleisig does not explicitly teach, but Chase teaches:
and detect number of lanes of each road segment of the road segments from the detected road width. (Chase, para[12], “Depending on the curb-to-curb width divided by the typical travel lane width would allow the system to determine the number of travel lanes on an obstructed (e.g., snow-covered) roadway”)
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the methods for extracting and vectorizing features of satellite imagery of Fleisig to incorporate the above teachings of Chase, namely detecting the number of lanes of each road segment from the detected road width. One of ordinary skill in the art would have been motivated to make this modification to “ensure that operation of an autonomous vehicle is safe and efficient in all conditions” (Chase, Description).
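Examiner note (illustrative only): the lane-count determination taught by Chase amounts to dividing the detected curb-to-curb width by a typical lane width. A hypothetical sketch follows; the 3.5 m lane width is an assumed value, not from the reference:

```python
def estimate_lane_count(road_width_m, typical_lane_width_m=3.5):
    """Estimate the number of travel lanes by dividing the curb-to-curb
    width by a typical lane width (3.5 m is an assumed value)."""
    return max(1, round(road_width_m / typical_lane_width_m))
```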
Regarding Claims 5 and 14:
Fleisig in view of Chase, as shown in the rejection above, discloses the limitations of claim 1. Fleisig teaches:
The system according to claim 1, wherein the processor is configured to train a deep neural network model using the remotely captured geographical image data as an input. (Fleisig, para[39], “thousands or even millions of images may be obtained from one or more sources to determine (e.g., “train”) neural networks, these images being training data set”, para[39], “sensor(s) 50 may output an image taken at an altitude, e.g., from satellite 55 or an aircraft 55 (e.g., aerostat, drone, plane, balloon, dirigible, kite, and the like)”)
Regarding Claims 6 and 15:
Fleisig in view of Chase, as shown in the rejection above, discloses the limitations of claim 5. Fleisig teaches:
The system according to claim 5, wherein the processor is configured to train the deep neural network model on the ground truth image data generated from the digital geographical map data, and tune the trained deep neural network model with annotated image data. (Fleisig, para[58], “a labeled training set may enable model improvement. That is, the training model may use a validation set of data to iterate over model parameters until the point where it arrives at a final set of parameters/weights to use in the model”)
Regarding Claims 7 and 16:
Fleisig in view of Chase, as shown in the rejection above, discloses the limitations of claim 5. Fleisig teaches:
The system according to claim 5, wherein the processor is configured to obtain the trained deep neural network model, and use the trained deep neural network model on the semantic segmentation task. (Fleisig, para[65], “Once trained, the model(s) may be stored in database/storage 60-2 of prediction database 60, as shown in FIG. 1, and then used to classify samples of images based on visible attributes”, para[23], “using a neural network (e.g., convolutional neural networks (CNN)) to perform semantic segmentation of objects in satellite imagery”)
Regarding Claims 8 and 17:
Fleisig in view of Chase, as shown in the rejection above, discloses the limitations of claim 1. Fleisig teaches:
The system according to claim 1, wherein the road segments include a second road segment which is overlapped by at least one object, and the system further comprises a context module configured to receive additional information, and decide which pixel belongs to the second road segment based on the additional information, to generate the binary image data of the road segments. (Fleisig, para[88], “the raw pixel map may be noisy, e.g., with imperfectly straight lines, an imperfect block over where the road is located”, and para[103], “Vector phase component 38 may thus perform gap jumping by identifying separate roads (e.g., separate from each other by a configurable or predetermined distance) and then jumping the gap between (i.e., connecting) these previously separate roads.”)
Claim(s) 2, 11 are rejected under 35 U.S.C. 103 as being unpatentable over Fleisig (US20210342585A1) in view of Chase (US20190346857A1), further in view of Singh (US20220228886A1) and Kong, "Vanishing point detection for road detection," 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, 2009.
Regarding Claims 2 and 11:
Fleisig in view of Chase, as shown in the rejection above, discloses the limitations of claim 1. Fleisig teaches:
…a line segment in the graph structure of the road…(Fleisig, para[102], “A connectivity graph may be created”, para[103], “a connectivity graph indicating where all the roads are”)
Fleisig in view of Chase does not explicitly teach, but Singh teaches:
The system according to claim 1, wherein the processor is configured to determine whether …road segments is the first road segment missing from the digital geographical map data…(Singh, para[36], “The missing road segment module 230 matches the road segments identified by the model to the road segments of the electronic map data corresponding to the geographic region captured by the image. The missing road segment module 230 determines whether any identified road segments are not present in the electronic map data, and if so, identifies each as a missing road segment.”)
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the methods for extracting and vectorizing features of satellite imagery of Fleisig to incorporate the above teachings of Singh, such that the processor is configured to determine whether a road segment is the first road segment missing from the digital geographical map data. One of ordinary skill in the art would have been motivated to make this modification to “improve the utility of the transportation management system to coordinate trips” (Singh, Description).
Fleisig in view of Chase and Singh does not explicitly teach, but Kong teaches:
using a voting algorithm. (Kong, conclusion, “A novel framework for segmenting the general road region from one single image is proposed based on the road vanishing point estimation using a novel scheme, called Locally Adaptive Soft-Voting (LASV) algorithm.”)
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the methods for extracting and vectorizing features of satellite imagery of Fleisig in view of Chase and Singh to incorporate the above teaching of Kong, such that the processor is configured to determine whether a road segment is the first road segment missing from the digital geographical map data using a voting algorithm. One of ordinary skill in the art would have been motivated to make this modification because it “reduces the computational complexity and improves the accuracy significantly” (Kong, Conclusion).
Claim(s) 3, 12 are rejected under 35 U.S.C. 103 as being unpatentable over Fleisig (US20210342585A1) in view of Chase (US20190346857A1), further in view of Singh (US20220228886A1) and Kong, "Vanishing point detection for road detection," 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, 2009, and Ma, "BoundaryNet: Extraction and Completion of Road Boundaries With Deep Learning Using Mobile Laser Scanning Point Clouds and Satellite Imagery," in IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 6, pp. 5638-5654, June 2022 (Date of Publication: 10 February 2021).
Regarding Claims 3 and 12:
Fleisig in view of Chase, Singh, and Kong, as shown in the rejection above, discloses the limitations of claim 2. Fleisig in view of Chase, Singh, and Kong does not explicitly teach, but Ma teaches:
The system according to claim 2, wherein the processor is configured to count number of pixels of the line segment that has a predetermined value, check whether the counted number is greater than a predetermined threshold value, and decide that the line segment is the first road segment missing from the digital geographical map data if the counted number is greater than the predetermined threshold value. (Ma, page 5664, “N denotes the total number of pixels in road segments”, and page 5641, “an intensity-based multi-threshold method [20] was proposed to extract lane markings (e.g., road boundaries)”, page 5642, “Fig. 3 shows that a CNN-based downsampling and upsampling model is developed to identify and fill the missing parts based on the road boundary extraction results”) Examiner note: Ma teaches counting the number of pixels of the line segment and using a threshold-based extraction method to identify the missing line segment. Therefore, it would have been obvious to decide that the line segment is the first road segment missing from the digital geographical map data if the counted number is greater than the predetermined threshold value.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the methods for extracting and vectorizing features of satellite imagery of Fleisig in view of Chase, Singh, and Kong to incorporate the above teachings of Ma, such that the processor is configured to count the number of pixels of the line segment that have a predetermined value, check whether the counted number is greater than a predetermined threshold value, and decide that the line segment is the first road segment missing from the digital geographical map data if the counted number is greater than the predetermined threshold value. One of ordinary skill in the art would have been motivated to make this modification because it “reduces the computational complexity and improves the accuracy significantly” (Kong, Conclusion).
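Examiner note (illustrative only): the pixel-count-and-threshold decision recited in claims 3 and 12 can be sketched as follows; the function and its default values are hypothetical, not taken from Ma:

```python
def is_missing_segment(line_pixels, predetermined_value=1, threshold=5):
    """Count the pixels of a candidate line segment that carry the
    predetermined value; the segment is treated as missing from the map
    if the count exceeds the threshold."""
    count = sum(1 for p in line_pixels if p == predetermined_value)
    return count > threshold
```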
Claim(s) 4, 13 are rejected under 35 U.S.C. 103 as being unpatentable over Fleisig (US20210342585A1) in view of Chase (US20190346857A1), further in view of YOSHIAKI (JPH11345341A).
Regarding Claims 4 and 13:
Fleisig in view of Chase, as shown in the rejection above, discloses the limitations of claim 1. Fleisig does not explicitly teach, but YOSHIAKI teaches:
The system according to claim 1, wherein the processor is configured to use a polygonal approximation based on the binary image data and the center line of each road segment of the road segments, to detect the road width. (YOSHIAKI, para[214], “perform processes such as sequential boundary exploration or polygonal approximation for evaluation”, and para[14], “all images that have been the subject of labeling processing up to now have been binary images”, and para[31], “the labeling processing unit 124 calculates the area and center of gravity of the identically labeled regions”, and para[177], “the road width, is calculated.”)
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the methods for extracting and vectorizing features of satellite imagery of Fleisig in view of Chase to incorporate the above teachings of YOSHIAKI, such that the processor is configured to use a polygonal approximation based on the binary image data and the center line of each road segment to detect the road width. One of ordinary skill in the art would have been motivated to make this modification to “extract road areas efficiently” (YOSHIAKI, Description).
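Examiner note (illustrative only): one well-known polygonal approximation is the Douglas-Peucker algorithm, sketched below as a hypothetical illustration; the cited references do not specify this particular algorithm:

```python
import math


def _point_line_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    if (ax, ay) == (bx, by):
        return math.hypot(px - ax, py - ay)
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    return num / math.hypot(bx - ax, by - ay)


def polyline_approx(points, eps):
    """Douglas-Peucker polygonal approximation of a point sequence:
    recursively keep the point farthest from the chord between the
    endpoints while that distance exceeds the tolerance eps."""
    if len(points) < 3:
        return list(points)
    dists = [_point_line_dist(p, points[0], points[-1])
             for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] > eps:
        left = polyline_approx(points[:i + 1], eps)
        right = polyline_approx(points[i:], eps)
        return left[:-1] + right
    return [points[0], points[-1]]
```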
Claim(s) 9, 18 are rejected under 35 U.S.C. 103 as being unpatentable over Fleisig (US20210342585A1) in view of Chase (US20190346857A1), further in view of Rabel (US 20200292331 A1).
Regarding Claims 9 and 18:
Fleisig in view of Chase, as shown in the rejection above, discloses the limitations of claim 1. Fleisig teaches:
The system according to claim 1, wherein the remotely captured geographical image data includes a satellite image collected by an imaging satellite, (Fleisig, para[39], “sensor(s) 50 may output an image taken at an altitude, e.g., from satellite 55 or an aircraft 55 (e.g., aerostat, drone, plane, balloon, dirigible, kite, and the like)”)
Fleisig does not explicitly teach, but Rabel teaches:
and the digital geographical map data includes a crowd-sourced map. (Rabel, para [03], “the digital map is correct, accurate, and up-to-date is to crowdsource information/data from sensors installed on vehicle in order to timely detect changes to the road network”)
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the methods for extracting and vectorizing features of satellite imagery of Fleisig in view of Chase to incorporate the above teaching of Rabel, such that the digital geographical map data includes a crowd-sourced map. One of ordinary skill in the art would have been motivated to make this modification to “timely detect changes to the road network” (Rabel, Description).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Li (US20220215658A1) teaches systems and methods for detecting road markings in a laser intensity image, and more particularly to, systems and methods for detecting road markings from a laser intensity image based on using both deep learning methods and traditional computer vision methods.
Rainbow (US20210027055A1) teaches a computer implemented method and system for identifying topographic features. In particular, aspects relate to a computer-implemented method of identifying topographic features that optimises a machine learning model for a geographic area to automatically classify and extract topographic features from a set of target imagery thereof, and systems for performing the same.
Endres (US20210383544A1) teaches training a predictor model. In particular, an example embodiment generally relates to using machine learning processes to train a segmentation model.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAI NMN WANG whose telephone number is (571)270-5633. The examiner can normally be reached Mon-Fri 0800-1700.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vivek Koppikar can be reached on (571) 272-5109. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KAI NMN WANG/Examiner, Art Unit 3667
/REDHWAN K MAWARI/Primary Examiner, Art Unit 3667