DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 16 is objected to because of the following informalities: the dependency of claim 16 is incorrect; it should depend upon claim 15 rather than claim 11. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6-11 and 14-19 are rejected under 35 U.S.C. 103 as being unpatentable over Kwon et al. (US Pub. No. 2019/0279004) in view of Zou et al. (CN 111444778 A).
With respect to claim 1, Kwon discloses a method for detecting lane lines comprising (see Abstract): obtaining road images; obtaining a splice area by performing an image processing on the road images (see paragraph 0030, wherein …stitching module 209 stitches “splice” (e.g., combines) …into a stitched image (e.g., a combined image) from which the lane lines of the roadway “road” …can be reconstructed…); inputting the splice area into a preset trained lane line detection model and obtaining lane line detection images; and obtaining transformed images and lane line detection results of the transformed images (see paragraph 0050, wherein …deployment module 213 is a machine-learned neural network model…), as claimed.
Kwon fails to disclose performing an image transformation on the lane line detection images, as claimed.
Zou, in the same field of endeavor, teaches performing an image transformation on the lane line detection images (see page 2, step S2, wherein …respectively performing grey processing to the area, color space transformation…), as claimed.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the two references, as they are analogous art solving the similar problem of lane line detection on a road using image analysis. The teaching of Zou to transform an image can be incorporated into the Kwon system, as suggested by figure 6, element 601 (process images), providing the suggestion for the modification; modifying the images yields an intelligent automobile technology for lane detection (see Zou, page 1, technical field), providing the motivation.
With respect to claim 2, the combination of Kwon and Zou further discloses wherein obtaining the splice area by performing the image processing on the road images comprises: obtaining a region of interest by performing a lane line detection on the road images; obtaining a bird's-eye view area of the lane lines by transforming the region of interest (see Zou page 2, step S3, wherein … converting the binarized image of the road obtained in step S2 into the overview view “bird's-eye view” through inverse perspective conversion…); obtaining a grayscale area by performing a grayscale histogram equalization processing on the bird's-eye view area of the lane lines; obtaining a binarized area by performing a binarization processing on the grayscale area; obtaining a target area by converting the bird's-eye view area of the lane lines from an initial color space to a target color space; obtaining an equalized area by performing a histogram equalization processing on each channel of the target area (see Zou page 2, step S2, wherein …respectively performing grey processing to the area, color space transformation, gradient threshold filtering, color space threshold filtering, finally obtaining the binary map after multi-threshold filtering…); and generating the splice area according to the bird's-eye view area of the lane lines, the grayscale area, the equalized area, and the binarized area (see Kwon paragraph 0030, wherein …the stitching module 209 determines an aerial view image (i.e., a bird's eye view, an inverse projective mapping (IPM) image, etc.)…), as claimed.
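For illustration only, the grayscale histogram equalization, binarization, and splice-generation steps recited above can be sketched in a few lines of NumPy. The array contents, threshold value, and function names below are assumptions made for this sketch; they are not taken from Kwon, Zou, or the claims:

```python
import numpy as np

def equalize_histogram(gray):
    """Map gray levels so their cumulative distribution spreads over 0-255."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.clip((cdf - cdf_min) * 255 // max(cdf.max() - cdf_min, 1), 0, 255)
    return lut[gray].astype(np.uint8)

def binarize(gray, threshold=128):
    """Fixed-threshold binarization (the threshold here is illustrative)."""
    return (gray >= threshold).astype(np.uint8) * 255

# A stand-in 4x4 "bird's-eye view" grayscale area (illustrative values).
birdseye = np.array([[10, 20, 200, 210],
                     [15, 25, 205, 215],
                     [12, 22, 202, 212],
                     [18, 28, 208, 218]], dtype=np.uint8)

grayscale_area = equalize_histogram(birdseye)
binarized_area = binarize(grayscale_area)
# Stack the intermediate areas along a channel axis to form a "splice" input.
splice_area = np.stack([birdseye, grayscale_area, binarized_area], axis=-1)
print(splice_area.shape)  # (4, 4, 3)
```

Stacking the processed areas into one multi-channel array is only one plausible reading of "generating the splice area"; the claims do not mandate this representation.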
With respect to claim 3, the combination of Kwon and Zou further discloses wherein obtaining the bird's-eye view area of the lane lines by transforming the region of interest comprises: selecting target pixel points having a preset quantity from the region of interest, and obtaining an initial coordinate value of each target pixel point in the region of interest; calculating a transformation matrix according to preset coordinate values corresponding to each initial coordinate value and a plurality of the initial coordinate values; calculating a target coordinate value of the each pixel point in the region of interest according to a coordinate value of the each pixel point in the region of interest and the transformation matrix; and transforming a pixel value of the each pixel point in the region of interest into a target coordinate value corresponding to the pixel point, and obtaining the bird's-eye view area of the lane lines (see Zou page 2, steps S4 and S5, wherein dynamic adaptive ROI to predict the lane line positions assuming that the detected lane line coordinate of the previous frame is (X, Y), wherein X represents the row number of the image matrix; Y represents the column number of the image matrix; and step S2 for image processing using pixel color space), as claimed.
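The transformation-matrix calculation recited in claim 3 matches the standard perspective (homography) setup: four initial/preset coordinate pairs give eight linear equations in the eight unknown matrix entries. A minimal NumPy sketch, with corner coordinates that are purely illustrative assumptions:

```python
import numpy as np

def perspective_matrix(src, dst):
    """Solve for the 3x3 transformation matrix H (bottom-right entry fixed
    to 1) that maps each source point (x, y) to its preset target (u, v)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_matrix(H, point):
    """Map one pixel coordinate through H and de-homogenize."""
    x, y, w = H @ np.array([point[0], point[1], 1.0])
    return x / w, y / w

# Four illustrative correspondences (region of interest -> bird's-eye view).
src = [(100, 300), (540, 300), (620, 400), (20, 400)]
dst = [(0, 0), (500, 0), (500, 200), (0, 200)]
H = perspective_matrix(src, dst)
print(apply_matrix(H, (100, 300)))  # maps the first source corner near (0, 0)
```

Applying `apply_matrix` to every pixel of the region of interest, and writing each pixel value at its target coordinate, yields the bird's-eye view area in the manner the claim recites.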
With respect to claim 6, the combination of Kwon and Zou further discloses wherein before inputting the splice area into the preset trained lane line detection model, the method further comprises: obtaining a lane line detection network, lane line training images, and a labeling result of the lane line training images; inputting the lane line training images into the lane line detection network for feature extraction and obtaining lane line feature maps; obtaining a prediction result of the lane line feature maps by performing a lane line prediction on each pixel point in the lane line feature maps; and obtaining the preset trained lane line detection model by adjusting parameters of the lane line detection network according to the prediction result and the labeling result (see Kwon paragraph 0050, wherein …the training module 203 trains the deployment module 213 to learn a set of weights on features of the training images with the reconstructed lane lines so that the deployment module 213 can predict lane lines in a given real-time image of a roadway at time t; and paragraph 0052, wherein …training module 203 may update the deployment module 213 by adjusting the weights of the features of the training images in the machine-learned neural network model. The training module 203 may iteratively update the deployment module 213 until the deployment module 213 can predict lane lines in the training images with a threshold accuracy (e.g., 90% accuracy)…), as claimed.
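The train-and-adjust procedure Kwon describes (iteratively updating weights until a threshold accuracy such as 90% is reached) can be illustrated with a toy one-weight classifier. This is a schematic stand-in for the idea of iterating until a threshold accuracy is met, not a representation of Kwon's neural network; every name and value below is an assumption of the sketch:

```python
def train_until_threshold(samples, labels, threshold=0.9, max_iters=1000):
    """Adjust a single weight/bias classifier until it predicts the
    labels with at least `threshold` accuracy (e.g., 90%)."""
    w, b = 0.0, 0.0
    accuracy = 0.0
    for _ in range(max_iters):
        correct = 0
        for x, y in zip(samples, labels):
            pred = 1 if w * x + b > 0 else 0
            if pred == y:
                correct += 1
            else:
                # Perceptron-style update toward the labeled result.
                w += (y - pred) * x
                b += (y - pred)
        accuracy = correct / len(samples)
        if accuracy >= threshold:
            break  # threshold accuracy reached, stop adjusting
    return w, b, accuracy

# Illustrative 1-D training data: negative samples labeled 0, positive 1.
samples = [-2.0, -1.0, 1.0, 2.0]
labels = [0, 0, 1, 1]
w, b, acc = train_until_threshold(samples, labels)
print(acc >= 0.9)  # True once the threshold accuracy is reached
```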
With respect to claim 7, the combination of Kwon and Zou further discloses wherein obtaining the preset trained lane line detection model by adjusting parameters of the lane line detection network according to the prediction result and the labeling result comprises: calculating a prediction index of the lane line detection network according to the prediction result and the labeling result; and obtaining the preset trained lane line detection model by adjusting the parameters of the lane line detection network according to the prediction index until the prediction index satisfies a preset condition (see Kwon paragraph 0052, wherein …training module 203 may update the deployment module 213 by adjusting the weights of the features of the training images in the machine-learned neural network model. The training module 203 may iteratively update the deployment module 213 until the deployment module 213 can predict lane lines in the training images with a threshold accuracy (e.g., 90% accuracy)…), as claimed.
With respect to claim 8, the combination of Kwon and Zou further discloses wherein calculating the prediction index of the lane line detection network according to the prediction result and the labeling result comprises: calculating a training quantity of the lane line training images; and calculating a predicted quantity of the prediction result corresponding to the labeling result, and obtaining the prediction accuracy rate by calculating a ratio between the predicted quantity and the training quantity (see Kwon paragraph 0051, wherein …Once the initial training of the deployment module 213 is complete, the training module 203 re-applies the training images to the trained deployment module 213 (e.g., to the machine-learned neural network model) to test the accuracy of the trained deployment module 213. Responsive to receiving a training image of the position of a vehicle in a roadway at time t, the deployment module 213 outputs a prediction of lane lines in the roadway. Given that each training image is part of a training image set with the reconstructed lane lines, the training module 203 can compare the lane lines predicted by the deployment module 213 to the lane lines reconstructed by the training module 203. The training module 203 determines whether the deployment module 213 accurately predicted the lane lines in the training image based on the comparison), as claimed.
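The prediction accuracy rate of claim 8 reduces to a ratio between the quantity of correct predictions and the training quantity. A hypothetical helper (the function name and sample labels are assumptions of the sketch):

```python
def prediction_accuracy(predictions, labels):
    """Ratio between the quantity of predictions matching the labeling
    result and the training quantity, as recited in claim 8."""
    training_quantity = len(labels)
    predicted_quantity = sum(p == l for p, l in zip(predictions, labels))
    return predicted_quantity / training_quantity

# Two of three illustrative predictions match their labeled result.
rate = prediction_accuracy(["left", "right", "none"], ["left", "right", "left"])
print(rate)
```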
Claims 9-11 and 14-16 are rejected for the same reasons as set forth in the rejections for claims 1-3 and 6-8, because claims 9-11 and 14-16 are claiming subject matter of similar scope as claimed in claims 1-3 and 6-8 respectively.
Claims 17-19 are rejected for the same reasons as set forth in the rejections for claims 1-3, because claims 17-19 are claiming subject matter of similar scope as claimed in claims 1-3 respectively.
Allowable Subject Matter
Claims 4-5, 12-13 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VIKKRAM BALI whose telephone number is (571)272-7415. The examiner can normally be reached Monday-Friday 7:00AM-3:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VIKKRAM BALI/Primary Examiner, Art Unit 2663