DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-5 and 11-15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kim; Jaekwang et al. (KR 20230001410 A; translated via Espacenet; hereinafter simply referred to as Kim).
Regarding independent claim 1, Kim teaches:
A method of recognizing a lane line based on LiDAR (See ¶ 1-3, wherein a method of recognizing a lane line based on LiDAR is disclosed)
acquiring candidate points of a lane line around an ego vehicle by using a LiDAR sensor; (See ¶ 13, 20 wherein points (candidate points) of a lane line are acquired around a vehicle using a LiDAR sensor)
determining at least one straight line using the candidate points; (See ¶ 81, 83, wherein a cubic curve made up of selected lane points is determined; furthermore, as can be seen in figure 8, the cubic curve forms a straight line from the points (candidate points))
and determining a curve by using final points corresponding to the at least one straight line. (See ¶ 83-87, wherein a curve is determined based on the base lane, which is itself based on a straight-line cubic curve made up of points (final points corresponding to the at least one straight line)).
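Purely for illustration of the recited three-step flow (candidate points, then a straight line, then a curve through the final points), the following minimal sketch uses generic least-squares fitting. All function names, tolerances, and sample data are hypothetical and are not drawn from the claims or from Kim:

```python
import numpy as np

def recognize_lane(candidates, inlier_tol=0.3):
    """Illustrative only: fit a straight line to candidate points, keep the
    points near that line as final points, then fit a curve through them."""
    x, y = candidates[:, 0], candidates[:, 1]
    # Step 1: least-squares straight line through the candidate points.
    m, b = np.polyfit(x, y, 1)
    # Step 2: final points are the candidates close to that straight line.
    inliers = np.abs(y - (m * x + b)) < inlier_tol
    # Step 3: cubic curve fit through the final points.
    return np.polyfit(x[inliers], y[inliers], 3)

# Hypothetical candidate points lying on a gentle parabola.
xs = np.linspace(0, 10, 30)
pts = np.column_stack([xs, 0.05 * xs ** 2])
coeffs = recognize_lane(pts)  # cubic coefficients, highest power first
```

The sketch shows only the data flow; an actual implementation would operate on 3-D LiDAR returns and a robust line estimator rather than a single global least-squares fit.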
Regarding dependent claim 2, Kim teaches:
Acquiring LiDAR point data for each frame of a plurality of time frames, (See ¶ 35 wherein point data is acquired from each frame of a plurality of time frames using LiDAR)
acquiring ground points for each frame from the LiDAR point data for each frame, (See ¶ 35 wherein ground points (point data indicating road surface /ground (road surface points)) are acquired from each frame of a plurality of time frames using LiDAR)
and selecting candidate points for each frame from the ground points for each frame (See ¶ 35 wherein lane points (candidate points) are selected/extracted from point data of road surface points (ground points) for each frame of a plurality of frames).
Regarding dependent claim 3, Kim teaches:
Selecting, as candidate points, ground points located at a first set distance or farther from the ego vehicle in a longitudinal direction among the ground points for each frame; (See ¶ 91, 35, wherein lane points (candidate points) are selected from among point data including ground point data, at a set distance or farther from the vehicle in a longitudinal or vertical direction, for each frame).
Regarding dependent claim 4, Kim teaches:
Correcting coordinate values of the candidate points for each frame according to a movement amount of the ego vehicle. (See ¶ 35, wherein tracking of the lane occurs using LiDAR, necessarily meaning that the positions/coordinates of the lane points are updated).
Regarding dependent claim 5, Kim teaches:
Determining a smaller number of secondary candidate points from whole candidate points obtained by combining the candidate points for each frame. (See ¶ 81-83 and 35, wherein lane points (candidate points) obtained from different frames are grouped together to create segments (secondary candidate points), which are used in the determination of the one straight line).
Regarding independent claim 11, claim 11 is an apparatus claim corresponding to claim 1. Please see the discussion of claim 1 above. Furthermore, Kim teaches an apparatus comprising an interface configured to receive LiDAR data about surroundings of an ego vehicle from a LiDAR sensor; a memory configured to store instructions for recognizing the lane line based on LiDAR; and at least one processor configured to execute the instructions (See ¶ 36-38, 42 and 43, wherein an apparatus/lane detection device comprises an interface/transceiver configured to receive LiDAR data about surroundings of an ego vehicle; a memory configured to store instructions/a program for the lane detection; and a processor to execute the instructions from the memory).
Regarding dependent claims 12-15, claims 12-15 are apparatus claims corresponding to claims 2-5, respectively. Please see the discussion of claims 2-5 above. Furthermore, Kim teaches the recited apparatus comprising an interface, a memory, and at least one processor, as discussed for claim 11 above (See ¶ 36-38, 42 and 43).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Kim; Jaekwang et al. (KR 20230001410 A; translated via Espacenet; hereinafter simply referred to as Kim) in view of Lu; Weixin et al. (KR20200096408 A; translated via Espacenet; hereinafter simply referred to as Lu).
Regarding dependent claim 6, Kim does not explicitly disclose:
Determining the secondary candidate points comprises applying voxel grid filtering to the whole candidate points.
However, Lu teaches that determining the secondary candidate points comprises applying voxel grid filtering to the whole candidate points. (See ¶ 69, wherein the LiDAR points are put through a voxel grid filter, creating the secondary candidate points).
As taught by Lu, determining the secondary candidate points via voxel grid filtering allows for better storage efficiency. (See ¶ 69, wherein better efficiency results once the LiDAR points are put through a voxel grid filter). As both Kim and Lu deal with the technical field of image processing using LiDAR, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kim with Lu such that determining the secondary candidate points comprises applying voxel grid filtering to the whole candidate points, in order to achieve better storage efficiency.
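For illustration only, voxel grid filtering in general reduces a point set by keeping one representative per occupied voxel. The following sketch of the generic technique (not Lu's implementation; all names, the voxel size, and the sample points are hypothetical) averages the points in each voxel into a single centroid:

```python
import numpy as np

def voxel_grid_filter(points, voxel_size=0.5):
    """Downsample a point cloud by keeping one centroid per occupied voxel."""
    # Quantize each point to its integer voxel index.
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel; 'inverse' maps each point to its voxel group.
    _, inverse, counts = np.unique(idx, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.ravel()
    # Average every group of points into a single representative point.
    sums = np.zeros((counts.size, points.shape[1]))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# Hypothetical example: 6 candidate points collapse into 3 voxel centroids.
pts = np.array([[0.1, 0.1, 0.0], [0.2, 0.2, 0.0], [0.9, 0.9, 0.0],
                [5.0, 5.0, 0.0], [5.1, 5.2, 0.0], [9.0, 0.0, 0.0]])
filtered = voxel_grid_filter(pts, voxel_size=1.0)
print(len(filtered))  # → 3
```

Because every voxel yields at most one point regardless of how densely it was sampled, storage shrinks with the density of the input cloud, consistent with the efficiency rationale above.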
Regarding dependent claim 16, claim 16 is an apparatus claim corresponding to claim 6. Please see the discussion of claim 6 above. Furthermore, Kim teaches the recited apparatus comprising an interface, a memory, and at least one processor, as discussed for claim 11 above (See ¶ 36-38, 42 and 43).
Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Kim; Jaekwang et al. (KR 20230001410 A; translated via Espacenet; hereinafter simply referred to as Kim) in view of Abbott; Joshua et al. (US 20240280372 A1; hereinafter simply referred to as Abbott).
Regarding dependent claim 7, Kim does not explicitly disclose:
Determining a straight line through Hough transformation for the secondary candidate points.
However, Abbott teaches determining a straight line through Hough transformation for the secondary candidate points. (See ¶ 3, wherein a straight line is determined using a Hough Transform on the LiDAR points (secondary candidate points)).
As taught by Abbott, using the Hough Transform is especially effective for determining a straight line. (See ¶ 3, wherein the Hough Transform is stated to be efficient for straight line detection using LiDAR points). As both Kim and Abbott deal with the technical field of image processing using LiDAR, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kim with Abbott such that a straight line is determined through Hough transformation for the secondary candidate points, in order for the straight line detection to be efficient.
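For illustration only, a Hough transform for line detection in general votes each point into (rho, theta) bins of the line parameterization rho = x·cos(theta) + y·sin(theta) and reads off the most-voted bin. The sketch below shows the generic technique (not Abbott's implementation; all names, bin resolutions, and sample points are hypothetical):

```python
import numpy as np

def hough_line(points, n_theta=180, rho_res=0.1):
    """Vote each 2-D point into (rho, theta) bins; return the dominant line."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    # rho = x*cos(theta) + y*sin(theta) for every point/angle pair.
    rhos = points[:, 0:1] * np.cos(thetas) + points[:, 1:2] * np.sin(thetas)
    max_rho = np.abs(rhos).max() + rho_res
    n_rho = int(2 * max_rho / rho_res) + 1
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    rho_idx = np.round((rhos + max_rho) / rho_res).astype(int)
    for t in range(n_theta):
        np.add.at(acc[:, t], rho_idx[:, t], 1)  # accumulate votes per angle
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return r * rho_res - max_rho, thetas[t]  # (rho, theta) of strongest line

# Hypothetical collinear points on the vertical line x = 2 (theta = 0, rho = 2).
pts = np.array([[2.0, y] for y in np.linspace(0, 10, 20)])
rho, theta = hough_line(pts)
```

Collinear points all vote for the same (rho, theta) bin, so a single accumulator pass finds the line without testing point pairs, which is the efficiency property cited above.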
Regarding dependent claim 17, claim 17 is an apparatus claim corresponding to claim 7. Please see the discussion of claim 7 above. Furthermore, Kim teaches the recited apparatus comprising an interface, a memory, and at least one processor, as discussed for claim 11 above (See ¶ 36-38, 42 and 43).
Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kim; Jaekwang et al. (KR 20230001410 A; translated via Espacenet; hereinafter simply referred to as Kim) in view of Lee; Nam et al. (US 20220366174 A1; hereinafter simply referred to as Lee).
Regarding dependent claim 10, Kim does not explicitly disclose:
Determining a curve through curve fitting for the final points.
However, Lee teaches determining a curve through curve fitting for the final points. (See ¶ 10, 47, claim 1 and Abstract, wherein a curve is necessarily found via a curve fitting algorithm applied to LiDAR points (final points)).
As taught by Lee, the curve fitting algorithm allows lane recognition information and raw LiDAR data to be converted into lane information, which can then be used by devices inside a vehicle. (See ¶ 47, wherein the curve fitting algorithm allows lane information to be created that can be used by vehicles). As both Kim and Lee deal with the technical field of image processing using LiDAR, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kim with Lee such that a curve is determined through curve fitting for the final points, in order to convert lane recognition information and raw LiDAR data into lane information usable by devices inside a vehicle.
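For illustration only, curve fitting in general converts raw point data into a compact lane description (e.g., polynomial coefficients) that downstream devices can evaluate at any longitudinal position. The sketch below shows generic least-squares polynomial fitting (not Lee's algorithm; all names, the degree, and the sample data are hypothetical):

```python
import numpy as np

def fit_lane_curve(points, degree=3):
    """Fit a polynomial y = f(x) through the final lane points (least squares)."""
    coeffs = np.polyfit(points[:, 0], points[:, 1], degree)
    return np.poly1d(coeffs)  # callable lane model, highest power first

# Hypothetical noisy samples of y = 0.01*x^3 - 0.2*x + 1 stand in for the
# final lane points produced by the earlier steps.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 0.01 * x**3 - 0.2 * x + 1 + rng.normal(0, 0.02, x.size)
curve = fit_lane_curve(np.column_stack([x, y]))
print(curve(5.0))  # close to 1.25 for these sample points
```

The fitted coefficients, rather than the raw point cloud, are what would be handed to in-vehicle consumers, consistent with the conversion rationale above.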
Regarding dependent claim 20, claim 20 is an apparatus claim corresponding to claim 10. Please see the discussion of claim 10 above. Furthermore, Kim teaches the recited apparatus comprising an interface, a memory, and at least one processor, as discussed for claim 11 above (See ¶ 36-38, 42 and 43).
Allowable Subject Matter
Claims 8, 9, 18, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indications of allowable subject matter:
Regarding claims 8 and 18, the reason for the indication of allowable subject matter is that the prior art fails to teach or reasonably suggest the limitations of claims 7 and 17, respectively, further comprising determining a straight line by applying Hough transformation to the secondary candidate points for each individual search region having a set angular range, for a search region having a set range on both left and right sides in front of the ego vehicle.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See attached PTO-892.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEJANDRO HERNANDEZ whose telephone number is (703)756-1876. The examiner can normally be reached M-F 8 am - 5 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John M Villecco can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALEJANDRO HERNANDEZ/Examiner, Art Unit 2661
/AARON W CARTER/Primary Examiner, Art Unit 2661