DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This action is in reply to Application Number 18/921,698 filed on 10/21/2024.
Claims 1 – 20 are currently pending and have been examined.
This action is made NON-FINAL.
Priority
Acknowledgement is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement filed 10/21/2024 has been received and considered.
Specification
The disclosure is objected to because of the following informalities: the specification contains conflicting definitions of the claimed term “circular error probable” (CEP). Paragraph 0067 states both that the CEP is the probability that the actual location of the vehicle falls within a circle with a radius “r” and that the CEP is the radius of the circle. These are two conflicting definitions for CEP, as set out below.
Appropriate correction is required.
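For reference, the two readings of paragraph 0067 can be written out as follows, where $x$ denotes the actual vehicle location, $\hat{x}$ the estimated location, and $r$ the radius of the circle (the examiner's shorthand, offered only to illustrate the conflict):

$\text{CEP} = P(\lVert x - \hat{x} \rVert \le r)$, a dimensionless probability; versus
$\text{CEP} = r$, a distance in units of length.

A single claim term cannot simultaneously denote a dimensionless probability and a length, hence the objection.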
Claim Rejections – 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 – 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The determination of whether a claim recites patent-ineligible subject matter is a two-step inquiry. A claim is ineligible if:
STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), see MPEP 2106.03, or
STEP 2: the claim recites a judicial exception, e.g. an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis: see MPEP 2106.04
STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon? see MPEP 2106.04(II)(A)(1)
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? see MPEP 2106.04(II)(A)(2)
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? see MPEP 2106.05
101 Analysis – Step 1
Claim 1 is directed to a lane positioning method (i.e., a process). Therefore, claim 1 is within at least one of the four statutory categories.
Claim 15 is directed to a computer device (i.e., a machine). Therefore, claim 15 is within at least one of the four statutory categories.
Claim 20 is directed to a non-transitory computer readable storage medium (i.e., an article of manufacture). Therefore, claim 20 is within at least one of the four statutory categories.
101 Analysis – Step 2A, Prong I
Regarding Prong I of the Step 2A analysis, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes. see MPEP 2106.04(II)(A)(1) and MPEP 2106.04(a)-(c)
Independent claim 1 includes limitations that recite an abstract idea (annotated below [with the category of abstract idea in brackets]) and will be used as a representative claim for the remainder of the 101 rejection. Claims 15 and 20 recite the same method as claim 1, but performed respectively on a computer device and a non-transitory computer readable storage medium. Claim 1 recites:
A lane positioning method, performed by a computer device and comprising:
obtaining a road visible region corresponding to a target vehicle, the road visible region being related to the target vehicle and a component parameter of a photographing component installed on the target vehicle, and being a road location photographed by the photographing component;
obtaining, according to vehicle location status information of the target vehicle and the road visible region, local map data associated with the target vehicle, the road visible region being located in the local map data, and the local map data comprising at least one lane associated with the target vehicle; and
determining, from the at least one lane of the local map data, a target lane to which the target vehicle belongs. [mental process/step]
The examiner submits that the foregoing limitation annotated as a mental process constitutes a “mental process” because, under its broadest reasonable interpretation, the claim covers performance of the limitation in the human mind. For example, “determining…” in the context of this claim encompasses a person looking at the collected data and forming a simple judgment. Accordingly, the claim recites at least one abstract idea.
101 Analysis – Step 2A, Prong II
Regarding Prong II of the Step 2A analysis, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. see MPEP 2106.04(II)(A)(2) and MPEP 2106.04(d)(2). It must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (the “additional limitations” are annotated in brackets [with a description of each], while the remaining limitation continues to represent the “abstract idea”):
A lane positioning method, performed by a computer device and comprising:
obtaining a road visible region corresponding to a target vehicle, the road visible region being related to the target vehicle and a component parameter of a photographing component installed on the target vehicle, and being a road location photographed by the photographing component; [pre-solution activity (data gathering) using generic sensors]
obtaining, according to vehicle location status information of the target vehicle and the road visible region, local map data associated with the target vehicle, the road visible region being located in the local map data, and the local map data comprising at least one lane associated with the target vehicle; and [pre-solution activity (data gathering)]
determining, from the at least one lane of the local map data, [pre-solution activity (data gathering)]
a target lane to which the target vehicle belongs. [mental process/step]
For the following reason(s), the examiner submits that the above identified additional limitations do not integrate the above-noted abstract idea into a practical application.
Regarding the additional limitations of “obtaining…,” the examiner submits that these limitations are insignificant extra-solution activities. In particular, the obtaining steps from the sensors and from the external source are recited at a high level of generality (i.e., as a general means of gathering vehicle and road condition data for use in the determining step), and amount to mere data gathering, which is a form of insignificant extra-solution activity. Claim 15 recites the same limitations performed on the additional element of a computer device, which merely uses a computer to perform the mental process. Similarly, claim 20 recites the same limitations performed on the additional element of a non-transitory computer readable storage medium containing a computer program, which, like claim 15, merely uses a computer to perform the mental process.
Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception. see MPEP § 2106.05. Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
101 Analysis – Step 2B
Regarding Step 2B of the Revised Guidance, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for reasons similar to those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above regarding the additional limitation of “obtaining…,” the examiner submits that this limitation is an insignificant extra-solution activity. In addition, this additional limitation (and the combination thereof) amounts to no more than what is well-understood, routine, and conventional activity. With regard to claims 15 and 20, their respective additional elements of a computer and a non-transitory computer readable storage medium used to perform the “determining…” amount to nothing more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Hence, the claims are not patent eligible.
Dependent claims 2 – 14 of independent claim 1 and dependent claims 16 – 19 of independent claim 15 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application. For example, claim 2 recites obtaining a component parameter of the photographing component (interpreted as an image of a camera) to determine the upper and lower boundaries of the component parameter and determining candidate road points. This can be performed mentally because, for example, a person is able to acquire an image (extra-solution activity) and the placement of a camera (extra-solution activity) and then make determinations about the upper and lower boundaries of that image along with any candidate road points (mental process). Therefore, dependent claims 2 – 14 and 16 – 19 are not patent eligible under the same rationale as respectively provided in the rejections of independent claims 1 and 15.
Therefore, claim(s) 1 – 20 are ineligible under 35 USC §101.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2 – 8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The term “relatively far” in claim 2 is a relative term which renders the claim indefinite. The term “relatively far” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. “Relatively far” is a subjective claim limitation because no objective boundary is given. By contrast, terms like “a predetermined distance away” or “five feet away” provide concrete limitations on how far away something must be. Due to this indefiniteness, “relatively far” will be interpreted as any distance away from the claimed target vehicle.
Claim 2 states: “determining a target tangent formed by the photographing component and a vehicle head boundary point of the target vehicle, and determining an intersection point of the ground plane and the target tangent as a candidate road point corresponding to the target tangent; and”; however, the claim is indefinite as to what a vehicle head boundary point is. A vehicle head can be interpreted as the front of the vehicle in the direction it is driving, but the claim does not specify how a boundary point is identified on the vehicle head. Due to this indefiniteness, the vehicle head boundary point is interpreted as a point along the lower boundary of an image captured facing in the direction of the vehicle heading. A geometric sketch of this interpretation is provided below.
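To illustrate the interpretation adopted above, the following is a minimal geometric sketch (in Python) of a tangent line from the photographing component through a point on the vehicle head, intersected with the ground plane. All names and values are the examiner's illustrative assumptions, not the applicant's disclosed method.

def ground_intersection(camera, boundary_point):
    # Intersect the line from camera through boundary_point with the
    # ground plane z = 0 (a hypothetical candidate road point).
    cx, cy, cz = camera
    bx, by, bz = boundary_point
    dz = bz - cz
    if dz == 0:
        return None  # the tangent is parallel to the ground plane
    t = -cz / dz  # solve cz + t * dz = 0
    return (cx + t * (bx - cx), cy + t * (by - cy), 0.0)

# Example (hypothetical geometry): a camera 1.5 m high and a hood-edge
# point 2.0 m ahead at 1.0 m height place the candidate road point
# 6.0 m ahead of the camera on the ground plane.
print(ground_intersection((0.0, 0.0, 1.5), (2.0, 0.0, 1.0)))  # (6.0, 0.0, 0.0)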
Claim 3 states: “evenly dividing the vertical visible angle to obtain an average vertical visible angle of the photographing component; and obtaining, along the primary optical axis, the lower boundary line and the upper boundary line that form the average vertical visible angle with the primary optical axis, wherein the primary optical axis, the upper boundary line, and the lower boundary line are located on a same plane, and a plane on which the primary optical axis, the upper boundary line, and the lower boundary line are located is perpendicular to the ground plane.”. The upper and lower boundary lines are obtained along the primary optical axis and form the average vertical visible angle with that axis; however, it is indefinite what is being obtained and how the average vertical visible angle is utilized, because the claim does not specify into how many parts the vertical visible angle is “evenly divid[ed].” For example, if the vertical visible angle is divided into a single part, the average vertical visible angle remains the original angle (e.g., 90 degrees for a photographing component whose optical axis is oriented at 90 degrees relative to the 180-degree span of the ground plane), and the upper and lower boundary lines would be the same as the original boundaries. This further renders the claimed single plane indefinite in this scenario, as a camera facing forward at 90 degrees would not capture only the upper and lower boundary lines on one plane (refer to Figure A below, which displays the ground plane along with the sky and other vehicles). A sketch of the two-part reading is provided below.
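The two-part reading referenced above can be sketched as follows (hypothetical; it assumes the “evenly dividing” step splits the vertical visible angle into two halves about the primary optical axis, which the claim does not state):

def boundary_angles(vertical_visible_angle_deg, axis_pitch_deg=0.0):
    # Split the vertical visible angle evenly about the primary optical axis.
    half = vertical_visible_angle_deg / 2.0
    return axis_pitch_deg + half, axis_pitch_deg - half  # (upper, lower)

# A forward-facing camera (axis parallel to the ground) with a 90-degree
# vertical visible angle yields boundary lines at +45 and -45 degrees;
# the upper boundary line never meets the ground plane, consistent with
# Figure A below, where the image also captures sky and other vehicles.
print(boundary_angles(90.0))  # (45.0, -45.0)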
Claim 4 states: “obtaining a vehicle location point of the target vehicle in the vehicle location status information of the target vehicle, and determining, according to the vehicle location point, a circular error probable corresponding to the target vehicle;”; however, as discussed above in the specification objection, the claimed term “circular error probable” has two definitions according to paragraph 0067 of the specification: the CEP is defined both as the probability that the actual location of the vehicle falls within a circle of radius “r” and as the radius of that circle. Due to this indefiniteness, the CEP will be interpreted as the radius of the circle within which the vehicle is located.
Claim 6 states: “the road location indicated by the region upper limit being located in front of the target vehicle in a driving direction; and in the driving direction, the road location indicated by the region upper limit being in front of the road location indicated by the region lower limit.”; however, it is indefinite what defines the region lower limit. In its current form, it is unclear how the region lower limit is defined. For example, the road location in front of the vehicle in the driving direction is defined as the region upper limit, so it is unclear how the region lower limit differs from it. Due to this indefiniteness, the region lower limit and the region upper limit will be interpreted as the same. One arithmetic reading under which they would differ is sketched below for illustration.
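For illustration only, the following sketch shows one reading under which the two limits would differ, drawing on claim 6's recitation of the road visible point distance as the region lower limit and an extended visible point distance as the region upper limit (discussed in the § 103 rejection below); the extension amount is a hypothetical placeholder, as the claim does not define it:

def region_limits(road_visible_point_distance, extension):
    # Lower limit: the road visible point distance itself.
    lower = road_visible_point_distance
    # Upper limit: the distance extended along the driving direction
    # (the claim does not define how the extension is computed).
    upper = road_visible_point_distance + extension
    return lower, upper

# Example (hypothetical values): a 30 m visible point distance extended
# by 10 m in the driving direction gives limits of (30.0, 40.0).
print(region_limits(30.0, 10.0))  # (30.0, 40.0)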
Claims 5 and 7 are also rejected as being dependent upon claim 4. Claim 8 is also rejected as being dependent upon claim 6.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 4, 15, 18 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Li et al. (US 20220019817 A1).
Regarding claim 1, Li teaches a lane positioning method, performed by a computer device and comprising: (Li: Abstract: “A vehicle locating method, an apparatus, an electronic device, a storage medium and a computer program product are provided, and relate to the technical field of intelligent transportation.”)
obtaining a road visible region corresponding to a target vehicle, the road visible region being related to the target vehicle and a component parameter of a photographing component installed on the target vehicle, and being a road location photographed by the photographing component; (Li: Paragraph 0035: “Alternatively, the size of the area in which the target vehicle is currently located may be related to the capture range of the image collection apparatus, that is, the camera. For example, the camera may be of a wide-angle acquisition, and the area in which the target vehicle is currently located may have a larger range. In addition, the size of the area in which the target vehicle is currently located can also be related to the installation angle of the camera.”)
obtaining, according to vehicle location status information of the target vehicle and (Li: Paragraph 0038: “The current locating of the target vehicle may be obtained through a locating module, which is a locating module installed on the target vehicle; the locating module may include a GPS module. That is, the current location detected by the GPS module in real time can be used as the current location of the target vehicle.”)
the road visible region, (Li: Paragraph 0035: “Alternatively, the size of the area in which the target vehicle is currently located may be related to the capture range of the image collection apparatus, that is, the camera. For example, the camera may be of a wide-angle acquisition, and the area in which the target vehicle is currently located may have a larger range. In addition, the size of the area in which the target vehicle is currently located can also be related to the installation angle of the camera.”)
local map data associated with the target vehicle, (Li: Paragraph 0039: “The map application may specifically be an Advanced Driving Assistance System (ADAS) map. The acquiring the current lane related information within the area in which the target vehicle is currently located from the map application may specifically be: acquiring the current lane related information within the area in which the target vehicle is currently located from the ADAS map based on the current location of the target vehicle.”)
the road visible region being located in the local map data, and
the local map data comprising at least one lane associated with the target vehicle; and (Li: Paragraph 0010: “an image acquisition module, configured for acquiring an image of a current road within an area in which a target vehicle is currently located;”; Paragraph 0011: “a map information acquisition module, configured for acquiring current lane related information within the area in which the target vehicle is currently located from a map application;”)
determining, from the at least one lane of the local map data, a target lane to which the target vehicle belongs (Li: Paragraph 0013: “a synchronous fusion module, configured for determining a lane in which the target vehicle is currently located based on at least one of the current road recognition result and the current lane related information, and taking the lane in which the target vehicle is currently located as a current lane location result for the target vehicle.”).
Regarding claim 4, Li teaches wherein obtaining the local map data associated with the target vehicle comprises:
obtaining a vehicle location point of the target vehicle in the vehicle location status information of the target vehicle, and (Li: Paragraph 0011: “a map information acquisition module, configured for acquiring current lane related information within the area in which the target vehicle is currently located from a map application;”)
determining, according to the vehicle location point, a circular error probable corresponding to the target vehicle; (Li: Paragraph 0040: “Here, the area range of the current lane related information obtained from the ADAS map may be larger than the area in which the target vehicle is currently located, and the current lane related information obtained from the ADAS map may be used as the current lane related information in the initial area range, and the current lane related information in the initial area range is further combined with the current location of the target vehicle, to determine the current lane related information of the target vehicle within the area in which the target vehicle is currently located.”; Paragraph 0080: “The distance threshold range may include: less than one lane width is a threshold range, one lane width to two lane widths is a threshold range, two lane widths to three lane widths is a threshold range. Or, it can be other threshold ranges, which are not exhaustive here.”,
Supplemental Note: the range within which the ADAS map determines the vehicle to be located is interpreted as the circular error probable)
determining a distance between the road visible region and the target vehicle as a road visible point distance; (Li: Paragraph 0035: “The area in which the target vehicle is currently located is: an area containing the current location of the target vehicle. Exemplarily, the area in which the target vehicle is currently located may be an area that may have a length of 200 meters in an extension line extending in the travel direction of the target vehicle from the current location of the target vehicle”)
determining, according to the vehicle location status information, the circular error probable, and the road visible point distance, a region upper limit corresponding to the target vehicle and a region lower limit corresponding to the target vehicle; and (Li: Paragraph 0040: “Here, the area range of the current lane related information obtained from the ADAS map may be larger than the area in which the target vehicle is currently located, and the current lane related information obtained from the ADAS map may be used as the current lane related information in the initial area range, and the current lane related information in the initial area range is further combined with the current location of the target vehicle, to determine the current lane related information of the target vehicle within the area in which the target vehicle is currently located. Alternatively, it may not be processed, and all current lane related information within the initial area range is used for subsequent processing,”)
determining, from global map data, map data between a road location indicated by the region upper limit and a road location indicated by the region lower limit as the local map data associated with the target vehicle, (Li: Paragraph 0040: “Here, the area range of the current lane related information obtained from the ADAS map may be larger than the area in which the target vehicle is currently located,”)
the road location indicated by the region upper limit being located in front of the target vehicle in a driving direction; and in the driving direction, the road location indicated by the region upper limit being in front of the road location indicated by the region lower limit (Li: Paragraph 0040: “the current lane related information obtained from the ADAS map may be used as the current lane related information in the initial area range, and the current lane related information in the initial area range is further combined with the current location of the target vehicle, to determine the current lane related information of the target vehicle within the area in which the target vehicle is currently located. Alternatively, it may not be processed, and all current lane related information within the initial area range is used for subsequent processing, but only the current lane related information within the area in which the target vehicle is currently located may be used in processing.”).
Regarding claim 15, Li teaches a computer device, comprising: at least one processor and a memory storing a computer program that, when being executed, causes the at least one processor to perform: (Li: Abstract: “A vehicle locating method, an apparatus, an electronic device, a storage medium and a computer program product are provided, and relate to the technical field of intelligent transportation.”; Paragraph 0164: “The computing unit 501 may be various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 501 include but are not limited to Central Processing Unit (CPU), Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, Digital Signal Processing (DSP), and any appropriate processor, controller, microcontroller, etc.”)
obtaining a road visible region corresponding to a target vehicle, the road visible region being related to the target vehicle and a component parameter of a photographing component installed on the target vehicle, and being a road location photographed by the photographing component; (Li: Paragraph 0035: “Alternatively, the size of the area in which the target vehicle is currently located may be related to the capture range of the image collection apparatus, that is, the camera. For example, the camera may be of a wide-angle acquisition, and the area in which the target vehicle is currently located may have a larger range. In addition, the size of the area in which the target vehicle is currently located can also be related to the installation angle of the camera.”)
obtaining, according to vehicle location status information of the target vehicle and (Li: Paragraph 0038: “The current locating of the target vehicle may be obtained through a locating module, which is a locating module installed on the target vehicle; the locating module may include a GPS module. That is, the current location detected by the GPS module in real time can be used as the current location of the target vehicle.”)
the road visible region, (Li: Paragraph 0035: “Alternatively, the size of the area in which the target vehicle is currently located may be related to the capture range of the image collection apparatus, that is, the camera. For example, the camera may be of a wide-angle acquisition, and the area in which the target vehicle is currently located may have a larger range. In addition, the size of the area in which the target vehicle is currently located can also be related to the installation angle of the camera.”)
local map data associated with the target vehicle, (Li: Paragraph 0039: “The map application may specifically be an Advanced Driving Assistance System (ADAS) map. The acquiring the current lane related information within the area in which the target vehicle is currently located from the map application may specifically be: acquiring the current lane related information within the area in which the target vehicle is currently located from the ADAS map based on the current location of the target vehicle.”)
the road visible region being located in the local map data, and the local map data comprising at least one lane associated with the target vehicle; and (Li: Paragraph 0010: “an image acquisition module, configured for acquiring an image of a current road within an area in which a target vehicle is currently located;”; Paragraph 0011: “a map information acquisition module, configured for acquiring current lane related information within the area in which the target vehicle is currently located from a map application;”)
determining, from the at least one lane of the local map data, a target lane to which the target vehicle belongs (Li: Paragraph 0013: “a synchronous fusion module, configured for determining a lane in which the target vehicle is currently located based on at least one of the current road recognition result and the current lane related information, and taking the lane in which the target vehicle is currently located as a current lane location result for the target vehicle.”).
Regarding claim 18, Li teaches wherein the at least one processor is further configured to perform:
obtaining a vehicle location point of the target vehicle in the vehicle location status information of the target vehicle, and (Li: Paragraph 0011: “a map information acquisition module, configured for acquiring current lane related information within the area in which the target vehicle is currently located from a map application;”)
determining, according to the vehicle location point, a circular error probable corresponding to the target vehicle; (Li: Paragraph 0040: “Here, the area range of the current lane related information obtained from the ADAS map may be larger than the area in which the target vehicle is currently located, and the current lane related information obtained from the ADAS map may be used as the current lane related information in the initial area range, and the current lane related information in the initial area range is further combined with the current location of the target vehicle, to determine the current lane related information of the target vehicle within the area in which the target vehicle is currently located.”; Paragraph 0080: “The distance threshold range may include: less than one lane width is a threshold range, one lane width to two lane widths is a threshold range, two lane widths to three lane widths is a threshold range. Or, it can be other threshold ranges, which are not exhaustive here.”,
Supplemental Note: the range within which the ADAS map determines the vehicle to be located is interpreted as the circular error probable)
determining a distance between the road visible region and the target vehicle as a road visible point distance; (Li: Paragraph 0035: “The area in which the target vehicle is currently located is: an area containing the current location of the target vehicle. Exemplarily, the area in which the target vehicle is currently located may be an area that may have a length of 200 meters in an extension line extending in the travel direction of the target vehicle from the current location of the target vehicle”)
determining, according to the vehicle location status information, the circular error probable, and the road visible point distance, a region upper limit corresponding to the target vehicle and a region lower limit corresponding to the target vehicle; and (Li: Paragraph 0040: “Here, the area range of the current lane related information obtained from the ADAS map may be larger than the area in which the target vehicle is currently located, and the current lane related information obtained from the ADAS map may be used as the current lane related information in the initial area range, and the current lane related information in the initial area range is further combined with the current location of the target vehicle, to determine the current lane related information of the target vehicle within the area in which the target vehicle is currently located. Alternatively, it may not be processed, and all current lane related information within the initial area range is used for subsequent processing,”)
determining, from global map data, map data between a road location indicated by the region upper limit and a road location indicated by the region lower limit as the local map data associated with the target vehicle, (Li: Paragraph 0040: “Here, the area range of the current lane related information obtained from the ADAS map may be larger than the area in which the target vehicle is currently located,”)
the road location indicated by the region upper limit being located in front of the target vehicle in a driving direction; and in the driving direction, the road location indicated by the region upper limit being in front of the road location indicated by the region lower limit (Li: Paragraph 0040: “the current lane related information obtained from the ADAS map may be used as the current lane related information in the initial area range, and the current lane related information in the initial area range is further combined with the current location of the target vehicle, to determine the current lane related information of the target vehicle within the area in which the target vehicle is currently located. Alternatively, it may not be processed, and all current lane related information within the initial area range is used for subsequent processing, but only the current lane related information within the area in which the target vehicle is currently located may be used in processing.”).
Regarding claim 20, Li teaches a non-transitory computer readable storage medium containing a computer program that, when being executed, causes one or more processors of a computer device to perform: (Li: Abstract: “A vehicle locating method, an apparatus, an electronic device, a storage medium and a computer program product are provided, and relate to the technical field of intelligent transportation.”; Paragraph 0164: “The computing unit 501 may be various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 501 include but are not limited to Central Processing Unit (CPU), Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, Digital Signal Processing (DSP), and any appropriate processor, controller, microcontroller, etc.”)
obtaining a road visible region corresponding to a target vehicle, the road visible region being related to the target vehicle and a component parameter of a photographing component installed on the target vehicle, and being a road location photographed by the photographing component; (Li: Paragraph 0035: “Alternatively, the size of the area in which the target vehicle is currently located may be related to the capture range of the image collection apparatus, that is, the camera. For example, the camera may be of a wide-angle acquisition, and the area in which the target vehicle is currently located may have a larger range. In addition, the size of the area in which the target vehicle is currently located can also be related to the installation angle of the camera.”)
obtaining, according to vehicle location status information of the target vehicle and (Li: Paragraph 0038: “The current locating of the target vehicle may be obtained through a locating module, which is a locating module installed on the target vehicle; the locating module may include a GPS module. That is, the current location detected by the GPS module in real time can be used as the current location of the target vehicle.”)
the road visible region, (Li: Paragraph 0035: “Alternatively, the size of the area in which the target vehicle is currently located may be related to the capture range of the image collection apparatus, that is, the camera. For example, the camera may be of a wide-angle acquisition, and the area in which the target vehicle is currently located may have a larger range. In addition, the size of the area in which the target vehicle is currently located can also be related to the installation angle of the camera.”)
local map data associated with the target vehicle, (Li: Paragraph 0039: “The map application may specifically be an Advanced Driving Assistance System (ADAS) map. The acquiring the current lane related information within the area in which the target vehicle is currently located from the map application may specifically be: acquiring the current lane related information within the area in which the target vehicle is currently located from the ADAS map based on the current location of the target vehicle.”)
the road visible region being located in the local map data, and the local map data comprising at least one lane associated with the target vehicle; and (Li: Paragraph 0010: “an image acquisition module, configured for acquiring an image of a current road within an area in which a target vehicle is currently located;”; Paragraph 0011: “a map information acquisition module, configured for acquiring current lane related information within the area in which the target vehicle is currently located from a map application;”)
determining, from the at least one lane of the local map data, a target lane to which the target vehicle belongs (Li: Paragraph 0013: “a synchronous fusion module, configured for determining a lane in which the target vehicle is currently located based on at least one of the current road recognition result and the current lane related information, and taking the lane in which the target vehicle is currently located as a current lane location result for the target vehicle.”).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 2, 3, 5 – 12, 16, 17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (US 20220019817 A1) in view of Eran et al. (WO 2020240274 A1).
Regarding claim 2, Li does not teach wherein obtaining the road visible region corresponding to the target vehicle comprises: determining, according to the component parameter of the photographing component, M photographing boundary lines corresponding to the photographing component, M being a positive integer; the M photographing boundary lines comprising a lower boundary line; and the lower boundary line being a boundary line that is in the M photographing boundary lines and that is closest to a road; obtaining a ground plane in which the target vehicle is located, and determining an intersection point of the ground plane and the lower boundary line as a candidate road point corresponding to the lower boundary line; determining a target tangent formed by the photographing component and a vehicle head boundary point of the target vehicle, and determining an intersection point of the ground plane and the target tangent as a candidate road point corresponding to the target tangent; and determining, from a candidate road point corresponding to the lower boundary line and a candidate road point corresponding to the target tangent, a candidate road point relatively far from the target vehicle as the road visible region corresponding to the target vehicle.
Eran teaches wherein obtaining the road visible region corresponding to the target vehicle comprises:
determining, according to the component parameter of the photographing component, M photographing boundary lines corresponding to the photographing component, M being a positive integer; the M photographing boundary lines comprising a lower boundary line; and the lower boundary line being a boundary line that is in the M photographing boundary lines and that is closest to a road; obtaining a ground plane in which the target vehicle is located, and determining an intersection point of the ground plane and the lower boundary line as a candidate road point corresponding to the lower boundary line; (Eran: Paragraph 0349: “Fig. 27 is an illustration of an example image 2700 that may be captured by a host vehicle, consistent with the disclosed embodiments. For example, image 2700 may be captured from an environment of host vehicle 200 using image acquisition unit 120, as described in detail above. Image 2700 may include a road surface 2730 traveled by host vehicle 200.”,
Supplemental Note: M is interpreted as a positive integer representing the boundary lines of the photographing component, and the photographing component is interpreted as the image. As shown in Figure A, the image has a lower boundary which encompasses the roadway. The candidate road points are interpreted as the areas where the roadway meets the lower boundary)
[Image: media_image1.png, greyscale]
Figure A - Eran: Fig. 27
determining a target tangent formed by the photographing component and a vehicle head boundary point of the target vehicle, and determining an intersection point of the ground plane and the target tangent as a candidate road point corresponding to the target tangent; and (Eran: Paragraphs 0127 – 0128: “Forward-Facing Multi-Imaging System. As discussed above, system 100 may provide drive assist functionality that uses a multi camera system. The multi-camera system may use one or more cameras facing in the forward direction of a vehicle.”; Paragraph 0350: “The vehicle navigation system may be configured to detect road topology features from within image 2700. As used herein, a road topology feature may include any natural or manmade feature included within the environment of a host vehicle. Such road topology features may indicate the configuration, arrangement, and traffic pattern of section of roadway. The road topology features may include any of the features included in image 2700, as described above.”,
Supplemental Note: the front-facing camera is able to capture images in the heading direction of the vehicle; thus, a target tangent is formed that intersects the lower boundary of the image. Please refer to Figure A above)
determining, from a candidate road point corresponding to the lower boundary line and a candidate road point corresponding to the target tangent, a candidate road point relatively far from the target vehicle as the road visible region corresponding to the target vehicle (Eran: Figure A,
Supplemental Note: based on Figure A, the road visible region is interpreted as the farthest roadway visible in the image).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Li with the teachings of Eran with a reasonable expectation of success. One of ordinary skill in the art would find Eran's teaching of a vehicle-mounted camera to be nothing more than a simple substitution for Li's vehicle camera, as both perform the same function of capturing images of their environment. For example, the cameras of both Li and Eran are taught to acquire roadway information from the images taken by the camera, thus performing the same function to achieve the same results. Furthermore, Li teaches the ability to place the camera in front of the vehicle (Li: Paragraph 0034), thus the images are already positioned in the vehicle heading of the target vehicle. These images would also have an upper and a lower boundary depending on the FOV of the camera, and the images would capture a road visible region.
Regarding claim 3, Li, as modified, teaches wherein the component parameter of the photographing component comprises a vertical visible angle and a component location parameter; (Li: Paragraph 0029: “S101, acquiring an image of a current road within an area in which a target vehicle is currently located,”: Paragraph 0034: “Wherein, the image collection device may be a camera; the installation location of the image collection apparatus may be in front of the target vehicle.”)
the vertical visible angle being a photographing angle of the photographing component in a direction perpendicular to the ground plane; (Li: Paragraph 0035: “For example, in a case that the camera is set horizontally, that is, the camera horizontally shoots the road surface in front of the target vehicle, the collected image covers a larger range of the road surface, and correspondingly, the range of the area in which the target vehicle is currently located can also be larger.” Paragraph 0144: “ Wherein, the image collection module, for example, may be a camera, especially a camera disposed in front of the target vehicle; ”,
Supplemental Note: the camera can be placed horizontally in front of the vehicle, thus perpendicular to the ground plane)
the component location parameter referring to an installation location and an installation direction of the photographing component installed on the target vehicle; (Li: Paragraph 0034: “Wherein, the image collection device may be a camera; the installation location of the image collection apparatus may be in front of the target vehicle.”)
… determining a primary optical axis of the photographing component according to the installation location and the installation direction in the component location parameter; (Li: Paragraph 0053: “It should be pointed out that, in addition to the image indicating the diversion line in the road, the diversion line recognition result can also include the relative location of the diversion line in the road in the image, or the absolute location in the world coordinate system. The method for obtaining the absolute location in the world coordinate system is not limited in this embodiment. Here, the diversion line recognition result may be obtained at local, or may be sent by the cloud server. Local refers to the aforementioned apparatus with data processing function.”,
Supplemental Note: the world coordinate system is used for detecting the location of different items found in the image).
In sum, Li teaches wherein the component parameter of the photographing component comprises a vertical visible angle and a component location parameter; the vertical visible angle being a photographing angle of the photographing component in a direction perpendicular to the ground plane; the component location parameter referring to an installation location and an installation direction of the photographing component installed on the target vehicle and determining a primary optical axis of the photographing component according to the installation location and the installation direction in the component location parameter. Li however does not teach the M photographing boundary lines further comprising an upper boundary line; and determining the M photographing boundary lines corresponding to the photographing component comprises: evenly dividing the vertical visible angle to obtain an average vertical visible angle of the photographing component; and obtaining, along the primary optical axis, the lower boundary line and the upper boundary line that form the average vertical visible angle with the primary optical axis, wherein the primary optical axis, the upper boundary line, and the lower boundary line are located on a same plane, and a plane on which the primary optical axis, the upper boundary line, and the lower boundary line are located is perpendicular to the ground plane.
Eran teaches the M photographing boundary lines further comprising an upper boundary line; and
determining the M photographing boundary lines corresponding to the photographing component comprises: (Eran: as shown in Figure A above, the upper boundary is the upper limit of the image)
… evenly dividing the vertical visible angle to obtain an average vertical visible angle of the photographing component; and
obtaining, along the primary optical axis, the lower boundary line and the upper boundary line that form the average vertical visible angle with the primary optical axis, wherein the primary optical axis, the upper boundary line, and the lower boundary line are located on a same plane, and a plane on which the primary optical axis, the upper boundary line, and the lower boundary line are located is perpendicular to the ground plane (Eran: please see Figure A above. The vertical visible angle can be evenly divided by 1, leaving the angle unchanged at 90 degrees; thus, the image shown in Figure A incorporates the upper and lower boundaries within it).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Li with the teachings of Eran with a reasonable expectation of success. As stated for claim 2, one of ordinary skill in the art would find Eran's teaching of a vehicle-mounted camera to be nothing more than a simple substitution for Li's vehicle camera, as both perform the same function of capturing images of their environment. For example, the cameras of both Li and Eran are taught to acquire roadway information from the images taken by the camera, thus performing the same function to achieve the same results. Furthermore, Li teaches the ability to place the camera in front of the vehicle (Li: Paragraph 0034), thus the images are already positioned in the vehicle heading of the target vehicle and perpendicular to the ground. These images would also have an upper and a lower boundary depending on the FOV of the camera, and would capture a road visible region, as would be recognized by one of ordinary skill in the art. Therefore, the vehicle cameras and their images as taught by Li and Eran amount to a simple substitution to one of ordinary skill in the art.
Regarding claim 5, Li teaches wherein the vehicle location status information further comprises a vehicle driving state of the target vehicle at the vehicle location point; and (Li: Paragraph 0148: “The locating module 420 may acquire the current location of the target vehicle. For example, based on multi-sensor fusion locating, including GPS, IMU, vehicle speed, steering wheel, etc., the location result can be sent to the ADAS map module 430 as the current location of the target vehicle;”,
Supplemental Note: based on the vehicle speed, it can be determined whether the vehicle is moving or stationary; this is thus interpreted as the vehicle location status information)
determining the region upper limit corresponding to the target vehicle and the region lower limit corresponding to the target vehicle comprises:
performing first operation processing on the circular error probable and the road visible point distance to obtain the region lower limit corresponding to the target vehicle; and (Li: Paragraph 0040: “the current lane related information obtained from the ADAS map may be used as the current lane related information in the initial area range, and the current lane related information in the initial area range is further combined with the current location of the target vehicle, to determine the current lane related information of the target vehicle within the area in which the target vehicle is currently located. Alternatively, it may not be processed, and all current lane related information within the initial area range is used for subsequent processing, but only the current lane related information within the area in which the target vehicle is currently located may be used in processing.”,
Supplemental Note: the initial range is able to identify the upper and lower limits of where the vehicle is).
In sum, Li teaches wherein the vehicle location status information further comprises a vehicle driving state of the target vehicle at the vehicle location point; and determining the region upper limit corresponding to the target vehicle and the region lower limit corresponding to the target vehicle comprises: performing first operation processing on the circular error probable and the road visible point distance to obtain the region lower limit corresponding to the target vehicle. Li however does not teach extending, by using the vehicle driving state, the road visible point distance along the driving direction to obtain an extended visible point distance, and performing second operation processing on the extended visible point distance and the circular error probable to obtain the region upper limit corresponding to the target vehicle.
Eran teaches extending, by using the vehicle driving state, the road visible point distance along the driving direction to obtain an extended visible point distance, and performing second operation processing on the extended visible point distance and the circular error probable to obtain the region upper limit corresponding to the target vehicle (Eran: Paragraph 0082: “Position sensor 130 may include any type of device suitable for determining a location associated with at least one component of system 100. In some embodiments, position sensor 130 may include a GPS receiver. Such receivers can determine a user position and velocity by processing signals broadcasted by global positioning system satellites. Position information from position sensor 130 may be made available to applications processor 180 and/or image processor 190.”; Paragraph 0166: “In another embodiment, processing unit 110 may compare the leading vehicle’s instantaneous position with the look-ahead point (associated with vehicle 200) over a specific period of time (e.g., 0.5 to 1.5 seconds). If the distance between the leading vehicle’s instantaneous position and the look-ahead point varies during the specific period of time, and the cumulative sum of variation exceeds a predetermined threshold (for example, 0.3 to 0.4 meters on a straight road, 0.7 to 0.8 meters on a moderately curvy road, and 1.3 to 1.7 meters on a road with sharp curves), processing unit 110 may determine that the leading vehicle is likely changing lanes.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Li with the teachings of Eran with a reasonable expectation of success. One of ordinary skill in the art would find it obvious to try to implement Eran's ability to determine a look-ahead point associated with the vehicle in combination with the vehicle system of Li. Li already teaches the ability to determine whether a vehicle is moving; however, Li does not teach a way to determine where the vehicle is moving next, whereas the look-ahead point of Eran does. Eran teaches the ability to compare the vehicle's original location with the look-ahead point to determine whether the vehicle is changing lanes. This would be obvious to try to implement with the vehicle of Li, as the combination would more accurately determine the target lane in which the target vehicle is located even while the vehicle is moving. Because Li does not teach the ability to detect whether the vehicle is changing lanes, the combination with Eran would be obvious to try in order to mitigate inaccurate localization results, as would be recognized by one of ordinary skill in the art.
Regarding claim 6, Li teaches wherein the vehicle location status information comprises a vehicle driving state of the target vehicle; and (Li: Paragraph 0148: “The locating module 420 may acquire the current location of the target vehicle. For example, based on multi-sensor fusion locating, including GPS, IMU, vehicle speed, steering wheel, etc., the location result can be sent to the ADAS map module 430 as the current location of the target vehicle;”,
Supplemental Note: based on the vehicle speed, it can be determined whether the vehicle is moving or stationary, thus interpreted as the vehicle location status information)
obtaining the local map data associated with the target vehicle comprises:
determining a distance between the road visible region and the target vehicle as a road visible point distance, and determining the road visible point distance as a region lower limit corresponding to the target vehicle; (Li: Paragraph 0040: “the current lane related information obtained from the ADAS map may be used as the current lane related information in the initial area range, and the current lane related information in the initial area range is further combined with the current location of the target vehicle, to determine the current lane related information of the target vehicle within the area in which the target vehicle is currently located. Alternatively, it may not be processed, and all current lane related information within the initial area range is used for subsequent processing, but only the current lane related information within the area in which the target vehicle is currently located may be used in processing.”,
Supplemental Note: the initial range is able to identify the upper and lower limits of where the vehicle is)
… and determining, from global map data, map data between a road location indicated by the region upper limit and a road location indicated by the region lower limit as the local map data associated with the target vehicle; (Li: Paragraph 0040: “Here, the area range of the current lane related information obtained from the ADAS map may be larger than the area in which the target vehicle is currently located,”)
the road location indicated by the region upper limit being located in front of the target vehicle in a driving direction; and in the driving direction, the road location indicated by the region upper limit being in front of the road location indicated by the region lower limit (Li: Paragraph 0040: “the current lane related information obtained from the ADAS map may be used as the current lane related information in the initial area range, and the current lane related information in the initial area range is further combined with the current location of the target vehicle, to determine the current lane related information of the target vehicle within the area in which the target vehicle is currently located. Alternatively, it may not be processed, and all current lane related information within the initial area range is used for subsequent processing, but only the current lane related information within the area in which the target vehicle is currently located may be used in processing.”).
In sum, Li teaches wherein the vehicle location status information comprises a vehicle driving state of the target vehicle; and obtaining the local map data associated with the target vehicle comprises: determining a distance between the road visible region and the target vehicle as a road visible point distance, and determining the road visible point distance as a region lower limit corresponding to the target vehicle; determining, from global map data, map data between a road location indicated by the region upper limit and a road location indicated by the region lower limit as the local map data associated with the target vehicle; the road location indicated by the region upper limit being located in front of the target vehicle in a driving direction; and in the driving direction, the road location indicated by the region upper limit being in front of the road location indicated by the region lower limit. Li however does not teach extending, by using the vehicle driving state, the road visible point distance along the driving direction to obtain an extended visible point distance, and determining the extended visible point distance as a region upper limit corresponding to the target vehicle.
Eran teaches extending, by using the vehicle driving state, the road visible point distance along the driving direction to obtain an extended visible point distance, and determining the extended visible point distance as a region upper limit corresponding to the target vehicle; (Eran: Paragraph 0082; Paragraph 0166)
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Li with the teachings of Eran with a reasonable expectation of success. Please refer to the rejection of claim 5, as both claims recite the same functional language and are therefore rejected under the same rationale.
Regarding claim 7, Li, as modified, teaches wherein determining, from the global map data, the map data between the road location indicated by the region upper limit and the road location indicated by the region lower limit as the local map data associated with the target vehicle comprises:
determining, from the global map data, a map location point corresponding to the vehicle location status information; (Li: Paragraph 0011: “a map information acquisition module, configured for acquiring current lane related information within the area in which the target vehicle is currently located from a map application;”)
determining, from the global map data according to the map location point and the region lower limit, a road location indicated by the region lower limit; (Li: Paragraph 0040: “Here, the area range of the current lane related information obtained from the ADAS map may be larger than the area in which the target vehicle is currently located, and the current lane related information obtained from the ADAS map may be used as the current lane related information in the initial area range, and the current lane related information in the initial area range is further combined with the current location of the target vehicle, to determine the current lane related information of the target vehicle within the area in which the target vehicle is currently located.”; Paragraph 0080: “The distance threshold range may include: less than one lane width is a threshold range, one lane width to two lane widths is a threshold range, two lane widths to three lane widths is a threshold range. Or, it can be other threshold ranges, which are not exhaustive here.”,
Supplemental Note: the range within which the ADAS map determines the vehicle to be located is interpreted as the upper and lower limits for the vehicle)
determining, from the global map data according to the map location point and the region upper limit, a road location indicated by the region upper limit; and (Li: Paragraph 0040: “Here, the area range of the current lane related information obtained from the ADAS map may be larger than the area in which the target vehicle is currently located,”)
determining map data between the road location indicated by the region lower limit and a road location indicated by the region upper limit as the local map data associated with the target vehicle; the local map data belonging to the global map data (Li: Paragraph 0040: “the current lane related information obtained from the ADAS map may be used as the current lane related information in the initial area range, and the current lane related information in the initial area range is further combined with the current location of the target vehicle, to determine the current lane related information of the target vehicle within the area in which the target vehicle is currently located. Alternatively, it may not be processed, and all current lane related information within the initial area range is used for subsequent processing, but only the current lane related information within the area in which the target vehicle is currently located may be used in processing.”).
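As a rough sketch of selecting the map data between the two road locations, the following assumes the global map is an ordered polyline of shape points and that the limits are measured as arc length along the driving direction from the matched map location point; this representation is an assumption, as the cited art does not commit to one.

```python
import numpy as np

def local_map_slice(shape_points: np.ndarray,
                    map_location_index: int,
                    lower_limit_m: float,
                    upper_limit_m: float) -> np.ndarray:
    """Keep global-map shape points between the two road locations.

    shape_points: (N, 2) array of map shape-point coordinates ordered
        along the driving direction.
    map_location_index: index of the shape point matched to the
        vehicle location status information.
    """
    # Cumulative along-road distance, re-zeroed at the vehicle's map point.
    seg = np.linalg.norm(np.diff(shape_points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    s = s - s[map_location_index]

    # Local map data: points between the road location indicated by the
    # region lower limit and the one indicated by the region upper limit,
    # both in front of the vehicle in the driving direction (positive s).
    mask = (s >= lower_limit_m) & (s <= upper_limit_m)
    return shape_points[mask]
```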
Regarding claim 8, Li, as modified, teaches wherein determining, from the global map data, the map data between the road location indicated by the region upper limit and the road location indicated by the region lower limit as the local map data associated with the target vehicle comprises:
determining, from the global map data, a map location point corresponding to the vehicle location status information; (Li: Paragraph 0011: “a map information acquisition module, configured for acquiring current lane related information within the area in which the target vehicle is currently located from a map application;”)
determining, from the global map data according to the map location point and the region lower limit, a road location indicated by the region lower limit; (Li: Paragraph 0040: “Here, the area range of the current lane related information obtained from the ADAS map may be larger than the area in which the target vehicle is currently located, and the current lane related information obtained from the ADAS map may be used as the current lane related information in the initial area range, and the current lane related information in the initial area range is further combined with the current location of the target vehicle, to determine the current lane related information of the target vehicle within the area in which the target vehicle is currently located.”; Paragraph 0080: “The distance threshold range may include: less than one lane width is a threshold range, one lane width to two lane widths is a threshold range, two lane widths to three lane widths is a threshold range. Or, it can be other threshold ranges, which are not exhaustive here.”,
Supplemental Note: the range within which the ADAS map determines the vehicle to be located is interpreted as the upper and lower limits for the vehicle)
determining, from the global map data according to the map location point and the region upper limit, a road location indicated by the region upper limit; and (Li: Paragraph 0040: “Here, the area range of the current lane related information obtained from the ADAS map may be larger than the area in which the target vehicle is currently located,”)
determining map data between the road location indicated by the region lower limit and a road location indicated by the region upper limit as the local map data associated with the target vehicle; the local map data belonging to the global map data (Li: Paragraph 0040: “the current lane related information obtained from the ADAS map may be used as the current lane related information in the initial area range, and the current lane related information in the initial area range is further combined with the current location of the target vehicle, to determine the current lane related information of the target vehicle within the area in which the target vehicle is currently located. Alternatively, it may not be processed, and all current lane related information within the initial area range is used for subsequent processing, but only the current lane related information within the area in which the target vehicle is currently located may be used in processing.”).
Regarding claim 9, Li teaches wherein determining the target lane to which the target vehicle belongs comprises:
performing region division on the local map data according to an appearance change point and a lane quantity change point, to obtain S pieces of divided map data in the local map data, S being a positive integer; (Li: Paragraph 0041: “ It should be understood that the acquiring the image of the current road within the area in which the target vehicle is currently located, and the acquiring the current lane related information within the area in which the target vehicle is currently located from the map application, both can be performed simultaneously. That is, while the target vehicle is collecting the image of the current road within the area in which the target vehicle is currently located in real time, the current lane related information within the area in which the target vehicle is currently located may be acquired from the ADAS map based on the current location of the target vehicle.”; Paragraph 0043: “The current road recognition result may include at least one of lane line detection result, diversion line recognition result, lane division line recognition result, road edge recognition result, road-surface-arrow-sequence recognition result, and lane change event.”,
Supplemental Note: the different types of lane lines can be identified on the images and by the ADAS map. The different lane lines detected are interpreted as the S pieces)
… obtaining lane line observation information corresponding to a lane line photographed by the photographing component; (Li: Paragraph 0054: “The lane line detection result may be included in the current road recognition result. Here, the lane line detection result can be obtained at local, which local refers to the aforementioned apparatus with data processing function; that is, when the image of the current road is obtained in real time, the image of the current road is analyzed in real time in the local apparatus with data processing function to obtain the lane line detection result. Alternatively, the lane line detection result may also be obtained through analysis by the cloud server.”)
separately matching the lane line observation information and the vehicle location status information with the S pieces of divided map data to obtain a lane probability respectively corresponding to at least one lane in each piece of divided map data; and (Li: Paragraphs 0128 – 0131: “an image acquisition module 301, configured for acquiring an image of a current road within an area in which a target vehicle is currently located; a map information acquisition module 302, configured for acquiring current lane related information within the area in which the target vehicle is currently located from a map application; an engine module 303, configured for determining a current road recognition result based on the image of the current road; and a synchronous fusion module 304, configured for determining a lane in which the target vehicle is currently located based on at least one of the current road recognition result and the current lane related information, and taking the lane in which the target vehicle is currently located as the current lane location result for the target vehicle.”; Paragraph 0148: “The locating module 420 may acquire the current location of the target vehicle. For example, based on multi-sensor fusion locating, including GPS, IMU, vehicle speed, steering wheel, etc., the location result can be sent to the ADAS map module 430 as the current location of the target vehicle; the ADAS map module 430 sends the current lane related information within the area in which the target vehicle is currently located to the engine module 303 according to the current location. For example, the current lane related information may include road-surface-arrow-sequence information, intersection information, lane number change information, and so on. Here, the contents that the ADAS map module can provide can also include more, such as long solid lane lines, viaduct signals, main and auxiliary road signals, etc., which are not exhaustively listed in this embodiment.”,
Supplemental Note: the ADAS map module is able to obtain the vehicle status and per fusion locating, the vehicle is then able to match the lane line observations with the current location map)
determining, according to a lane probability respectively corresponding to at least one lane in the S pieces of divided map data, a candidate lane corresponding to each piece of divided map data from the at least one lane respectively corresponding to each piece of divided map data, and determining, from S candidate lanes, the target lane to which the target vehicle belongs (Li: Paragraph 0148: “For example, based on multi-sensor fusion locating, including GPS, IMU, vehicle speed, steering wheel, etc., the location result can be sent to the ADAS map module 430 as the current location of the target vehicle; the ADAS map module 430 sends the current lane related information within the area in which the target vehicle is currently located to the engine module 303 according to the current location. For example, the current lane related information may include road-surface-arrow-sequence information, intersection information, lane number change information, and so on. Here, the contents that the ADAS map module can provide can also include more, such as long solid lane lines, viaduct signals, main and auxiliary road signals, etc., which are not exhaustively listed in this embodiment.”: Paragraph 0156: “ The lane in which the target vehicle is currently located is determined based on a road-surface-arrow-sequence recognition result and road-surface-arrow-sequence information contained in the current lane related information in a case that the current road recognition result contains the road-surface-arrow-sequence recognition result. For example, in a case that the road-surface-arrow-sequence recognition result is detected, the road surface arrow sequence in the road-surface-arrow-sequence recognition result is matched with the current road segment arrow sequence given by the ADAS map. In a case that they matches, the locating of the current lane is determined according to the matching result;”,
Supplemental Note: in the example of paragraph 0156, the fusion model compares the detected lane arrow with the map data and evaluates the position and location of the arrow accordingly).
In sum, Li teaches wherein determining the target lane to which the target vehicle belongs comprises: performing region division on the local map data according to an appearance change point and a lane quantity change point, to obtain S pieces of divided map data in the local map data, S being a positive integer; obtaining lane line observation information corresponding to a lane line photographed by the photographing component; separately matching the lane line observation information and the vehicle location status information with the S pieces of divided map data to obtain a lane probability respectively corresponding to at least one lane in each piece of divided map data; and determining, according to a lane probability respectively corresponding to at least one lane in the S pieces of divided map data, a candidate lane corresponding to each piece of divided map data from the at least one lane respectively corresponding to each piece of divided map data, and determining, from S candidate lanes, the target lane to which the target vehicle belongs. Li however does not teach a quantity of map lane lines in a same divided map data being fixed, and a map lane line pattern type and a map lane line color on a same lane line in same divided map data being fixed; and the appearance change point referring to a location at which the map lane line pattern type or the map lane line color on the same lane line in the local map data changes, and the lane quantity change point referring to a location at which the map lane line color in the local map data changes.
Eran teaches a quantity of map lane lines in a same divided map data being fixed, and a map lane line pattern type and a map lane line color on a same lane line in same divided map data being fixed; and the appearance change point referring to a location at which the map lane line pattern type or the map lane line color on the same lane line in the local map data changes, and the lane quantity change point referring to a location at which the map lane line color in the local map data changes (Eran: Paragraph 0268: “As explained above, the autonomous vehicle road navigation model included in the sparse map may include other information, such as identification of at least one landmark along road segment 1200. The landmark may be visible within a field of view of a camera (e.g., camera 122) installed on each of vehicles 1205, 1210, 1215, 1220, and 1225. In some embodiments, camera 122 may capture an image of a landmark. A processor (e.g., processor 180, 190, or processing unit 110) provided on vehicle 1205 may process the image of the landmark to extract identification information for the landmark. The landmark identification information, rather than an actual image of the landmark, may be stored in sparse map 800.”… “The landmark may include at least one of a traffic sign, an arrow marking, a lane marking, a dashed lane marking, a traffic light, a stop line, a directional sign (e.g., a highway exit sign with an arrow indicating a direction, a highway sign with arrows pointing to different directions or places), a landmark beacon, or a lamppost.”; Paragraph 0387: “In some embodiments, the lane characterization may include a spatial relationship relative to the host vehicle lane. For example, the lane may be characterized as being a certain number of lanes (e.g., one, two, three, etc.) from the host vehicle lane in a particular direction (e.g., left or right). Various other characterizations may also be identified, including a lane surface type (e.g., dirt, gravel, asphalt, concrete, etc.), a lane condition (e.g., wet, dry, damaged, etc.), a lane color, a lane elevation, or various other characteristics. In some embodiments, one or more of these examples may be combined with or included in another classification attribute, such as lane type, or may be used to identify the other lane characterizations described above.”,
Supplemental Note: lane information from images can be used to update the sparse maps).
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Li with the teachings of Eran with a reasonable expectation of success. One of ordinary skill in the art would have found Eran's teaching of acquiring lane attribute information extracted from the captured images, such as lane surface type, lane condition, lane color, lane elevation, and other characteristics, obvious to try to implement with the vehicle system of Li. Li already teaches the ability to detect different types of lane lines within the images; however, Li does not explicitly teach gathering the remaining lane attributes as taught by Eran. Identifying these additional lane attributes as taught by Eran would allow the system of Li to better determine the target lane the vehicle is in. For example, multiple different lane-marking standards are used throughout the world; thus, being able to recognize lane line color as well would further aid in the localization of the vehicle.
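To illustrate the claimed region division into S pieces, the following is a minimal sketch assuming the local map data is an ordered list of per-segment records carrying lane count, line pattern type, and line color, with a new divided piece started at each appearance change point or lane quantity change point; the record layout and names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class MapSegment:
    lane_count: int       # quantity of map lane lines, fixed within a piece
    pattern_type: str     # e.g. "solid" or "dashed", fixed within a piece
    color: str            # e.g. "white" or "yellow", fixed within a piece

def divide_map(segments: list[MapSegment]) -> list[list[MapSegment]]:
    """Split local map data at appearance/lane-quantity change points.

    Within one divided piece the lane count, line pattern type, and line
    color are all fixed, matching the claimed property of the S pieces.
    """
    pieces: list[list[MapSegment]] = []
    for seg in segments:
        if pieces and (seg.lane_count, seg.pattern_type, seg.color) == \
           (pieces[-1][-1].lane_count, pieces[-1][-1].pattern_type,
            pieces[-1][-1].color):
            pieces[-1].append(seg)   # same attributes: extend current piece
        else:
            pieces.append([seg])     # change point: start a new piece
    return pieces
```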
Regarding claim 10, Li, as modified, teaches wherein obtaining the lane line observation information corresponding to the lane line photographed by the photographing component comprises:
obtaining a road image that is photographed by the photographing component and that corresponds to a road in the driving direction; (Li: Paragraph 0035: “The area in which the target vehicle is currently located is: an area containing the current location of the target vehicle. Exemplarily, the area in which the target vehicle is currently located may be an area that may have a length of 200 meters in an extension line extending in the travel direction of the target vehicle from the current location of the target vehicle and a width of 100 meters. Alternatively, the size of the area in which the target vehicle is currently located may be related to the capture range of the image collection apparatus, that is, the camera. For example, the camera may be of a wide-angle acquisition, and the area in which the target vehicle is currently located may have a larger range.”)
performing element segmentation on the road image to obtain a lane line in the road image; and (Li: Paragraphs 0007 – 0008: “determining a current road recognition result based on the image of the current road; and determining a lane in which the target vehicle is currently located based on at least one of the current road recognition result and the current lane related information, and taking the lane in which the target vehicle is currently located as a current lane location result for the target vehicle.”)
performing attribute identification on the lane line to obtain the lane line observation information corresponding to the lane line (Li: Paragraph 0043 – 0044: “The current road recognition result may include at least one of lane line detection result, diversion line recognition result, lane division line recognition result, road edge recognition result, road-surface-arrow-sequence recognition result, and lane change event. In the determining the lane in which the target vehicle is currently located based on the lane line detection result and the diversion line recognition result, the lane in which the target vehicle is currently located may be determined based on the current road recognition result;”).
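The three steps mapped for claim 10 form a simple perception pipeline; the toy sketch below mirrors them with placeholder logic only (brightness thresholding and column banding stand in for real element segmentation, and the attribute output is a stub; all names are hypothetical).

```python
import numpy as np

def lane_line_observation(road_image: np.ndarray) -> list[dict]:
    """Toy version of the three claimed steps, for illustration only.

    road_image: (H, W) grayscale road image from the photographing
    component, photographed in the driving direction.
    """
    # Element segmentation: treat bright pixels as lane-line paint.
    mask = road_image > 200

    # Group painted pixels into lane lines by 50-pixel column bands
    # (a stand-in for a real instance-segmentation step).
    bands = np.unique(np.where(mask)[1] // 50)
    lane_lines = [np.argwhere(mask[:, b * 50:(b + 1) * 50]) for b in bands]

    # Attribute identification: here just a pixel count per line; a real
    # system would classify color and pattern type (see claim 11).
    return [{"pixels": len(line)} for line in lane_lines]
```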
Regarding claim 11, Li, as modified, does not teach wherein the lane line observation information comprises a lane line color corresponding to the lane line and a lane line pattern type corresponding to the lane line; and performing the attribute identification on the lane line to obtain the lane line observation information corresponding to the lane line comprises: inputting the lane line to an attribute identification model, and performing feature extraction on the lane line by using the attribute identification model to obtain a color attribute feature corresponding to the lane line and a pattern type attribute feature corresponding to the lane line; and determining the lane line color according to the color attribute feature corresponding to the lane line, and determining the lane line pattern type according to the pattern type attribute feature corresponding to the lane line, the lane line color being configured for matching with a map lane line color in the local map data, and the lane line pattern type being configured for matching with a map lane line pattern type in the local map data.
Eran teaches wherein the lane line observation information comprises a lane line color corresponding to the lane line and a lane line pattern type corresponding to the lane line; and
performing the attribute identification on the lane line to obtain the lane line observation information corresponding to the lane line comprises:
inputting the lane line to an attribute identification model, and performing feature extraction on the lane line by using the attribute identification model to obtain a color attribute feature corresponding to the lane line and a pattern type attribute feature corresponding to the lane line; and (Eran: Paragraph 0268: “As explained above, the autonomous vehicle road navigation model included in the sparse map may include other information, such as identification of at least one landmark along road segment 1200. The landmark may be visible within a field of view of a camera (e.g., camera 122) installed on each of vehicles 1205, 1210, 1215, 1220, and 1225. In some embodiments, camera 122 may capture an image of a landmark. A processor (e.g., processor 180, 190, or processing unit 110) provided on vehicle 1205 may process the image of the landmark to extract identification information for the landmark. The landmark identification information, rather than an actual image of the landmark, may be stored in sparse map 800. The landmark identification information may require much less storage space than an actual image. Other sensors or systems (e.g., GPS system) may also provide certain identification information of the landmark (e.g., position of landmark). The landmark may include at least one of a traffic sign, an arrow marking, a lane marking, a dashed lane marking, a traffic light, a stop line, a directional sign (e.g., a highway exit sign with an arrow indicating a direction, a highway sign with arrows pointing to different directions or places), a landmark beacon, or a lamppost.”; Paragraph 0387: “In some embodiments, the lane characterization may include a spatial relationship relative to the host vehicle lane. For example, the lane may be characterized as being a certain number of lanes (e.g., one, two, three, etc.) from the host vehicle lane in a particular direction (e.g., left or right). Various other characterizations may also be identified, including a lane surface type (e.g., dirt, gravel, asphalt, concrete, etc.), a lane condition (e.g., wet, dry, damaged, etc.), a lane color, a lane elevation, or various other characteristics. In some embodiments, one or more of these examples may be combined with or included in another classification attribute, such as lane type, or may be used to identify the other lane characterizations described above.”,
Supplemental Note: lane information such as lane surface type, lane condition, lane color, lane elevation and other characteristics are extracted from the captured images to be used to update the sparse maps)
determining the lane line color according to the color attribute feature corresponding to the lane line, and determining the lane line pattern type according to the pattern type attribute feature corresponding to the lane line, the lane line color being configured for matching with a map lane line color in the local map data, and the lane line pattern type being configured for matching with a map lane line pattern type in the local map data (Eran: Paragraph 0146: “Processing unit 110 may also execute monocular image analysis module 402 to detect various road hazards at step 520, such as, for example, parts of a truck tire, fallen road signs, loose cargo, small animals, and the like. Road hazards may vary in structure, shape, size, and color, which may make detection of such hazards more challenging. In some embodiments, processing unit 110 may execute monocular image analysis module 402 to perform multi-frame analysis on the plurality of images to detect road hazards. For example, processing unit 110 may estimate camera motion between consecutive image frames and calculate the disparities in pixels between the frames to construct a 3D-map of the road. Processing unit 110 may then use the 3D-map to detect the road surface, as well as hazards existing above the road surface.”; Paragraph 0156: “At step 558, processing unit 110 may consider additional sources of information to further develop a safety model for vehicle 200 in the context of its surroundings. Processing unit 110 may use the safety model to define a context in which system 100 may execute autonomous control of vehicle 200 in a safe manner. To develop the safety model, in some embodiments, processing unit 110 may consider the position and motion of other vehicles, the detected road edges and barriers, and/or general road shape descriptions extracted from map data (such as data from map database 160). By considering additional sources of information, processing unit 110 may provide redundancy for detecting road marks and lane geometry and increase the reliability of system 100.”: Paragraph 0158: “At step 562, processing unit 110 may analyze the geometry of a junction. The analysis may be based on any combination of: (i) the number of lanes detected on either side of vehicle 200, (ii) markings (such as arrow marks) detected on the road, and (iii) descriptions of the junction extracted from map data (such as data from map database 160). Processing unit 110 may conduct the analysis using information derived from execution of monocular analysis module 402. In addition, Processing unit 110 may determine a correspondence between the traffic lights detected at step 560 and the lanes appearing near vehicle 200.”,
Supplemental Note: the processor is able to create a model from the images representing the roadway attributes and then able to compare with the map data).
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Li with the teachings of Eran with a reasonable expectation of success. As stated for claim 9, one of ordinary skill in the art would have found Eran's teaching of acquiring lane attribute information extracted from the captured images, such as lane surface type, lane condition, lane color, lane elevation, and other characteristics, obvious to try to implement with the vehicle system of Li. Li already teaches the ability to detect different types of lane lines within the images; however, Li does not explicitly teach gathering the remaining lane attributes as taught by Eran. Identifying these additional lane attributes as taught by Eran would allow the system of Li to better determine the target lane the vehicle is in. For example, multiple different lane-marking standards are used throughout the world; thus, being able to recognize lane line color as well would further aid in the localization of the vehicle. Furthermore, Eran's teaching of matching the lanes identified in the images with the map data would be a use of known techniques to improve similar devices in the same way, as Li's vehicle system also teaches matching lane lines identified in the images with map data. Eran differs in that it gathers the additional lane attributes stated above; however, both Li and Eran teach comparing the image data with some form of map data. The additional lane attributes would aid in better localizing the target vehicle as taught by Eran, while still using the known method of gathering data from images; this method can therefore be used in the same way to improve the gathering of lane attribute data in Li.
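Neither reference fixes a network architecture for the claimed attribute identification model; as a hedged sketch, a shared feature extractor feeding two classification heads, one for the color attribute feature and one for the pattern type attribute feature, might look as follows. The class name, label sets, and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class LaneAttributeModel(nn.Module):
    """Hypothetical attribute identification model: one shared feature
    extractor and two heads, per the color and pattern-type attribute
    features recited in claim 11."""

    COLORS = ["white", "yellow"]        # assumed label sets
    PATTERNS = ["solid", "dashed"]

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(              # feature extraction
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.color_head = nn.Linear(16, len(self.COLORS))
        self.pattern_head = nn.Linear(16, len(self.PATTERNS))

    def forward(self, lane_line_crop: torch.Tensor):
        f = self.features(lane_line_crop)
        return self.color_head(f), self.pattern_head(f)

# Usage: the argmax of each head gives the lane line color and lane line
# pattern type, which are then matched against the local map data.
model = LaneAttributeModel()
color_logits, pattern_logits = model(torch.rand(1, 3, 32, 128))
```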
Regarding claim 12, Li, as modified, teaches wherein a quantity of the lane lines is at least two; the lane line observation information comprises a lane line equation; and (Li: Paragraph 0055: “The lane line detection result may be extracted from the image of the current road, or obtained by analyzing the image of the current road. The lane line detection result may be an image indicating the lane line in the road. It should be pointed out that in addition to the image indicating the lane line in the road, the lane line detection result may also include the relative location of the lane line in the image or the absolute location in the world coordinate system.”: Paragraph 0056: “The opposing-lanes-dividing line recognition result may be an image indicating the opposing-lanes-dividing line in the road. It should be pointed out that in addition to the image indicating the opposing-lanes-dividing line in the road, the opposing-lanes-dividing line recognition result may also include the relative location of the opposing-lanes-dividing line in the road in the image, or the absolute location in the world coordinate system.”)
performing the attribute identification on the lane line to obtain the lane line observation information corresponding to the lane line comprises: (Li: Paragraphs 0056 – 0057: “The opposing-lanes-dividing line recognition result may be an image indicating the opposing-lanes-dividing line in the road. It should be pointed out that in addition to the image indicating the opposing-lanes-dividing line in the road, the opposing-lanes-dividing line recognition result may also include the relative location of the opposing-lanes-dividing line in the road in the image, or the absolute location in the world coordinate system.”)
performing a reverse perspective change on the at least two lane lines to obtain changed lane lines respectively corresponding to the at least two lane lines; and (Li: Paragraphs 0056 – 0057: “The opposing-lanes-dividing line recognition result may be an image indicating the opposing-lanes-dividing line in the road. It should be pointed out that in addition to the image indicating the opposing-lanes-dividing line in the road, the opposing-lanes-dividing line recognition result may also include the relative location of the opposing-lanes-dividing line in the road in the image, or the absolute location in the world coordinate system.”,
Supplemental Note: a reverse perspective change is stated in paragraph 00105 of the specification to convert the lane image information to world coordinates, which is cited above as being taught by Li)
separately performing fitting reconstruction on the at least two changed lane lines to obtain the lane line equation respectively corresponding to each changed lane line, (Li: Paragraphs 0128 – 0131: “an image acquisition module 301, configured for acquiring an image of a current road within an area in which a target vehicle is currently located; a map information acquisition module 302, configured for acquiring current lane related information within the area in which the target vehicle is currently located from a map application; an engine module 303, configured for determining a current road recognition result based on the image of the current road; and a synchronous fusion module 304, configured for determining a lane in which the target vehicle is currently located based on at least one of the current road recognition result and the current lane related information, and taking the lane in which the target vehicle is currently located as the current lane location result for the target vehicle.”; Paragraph 0148: “For example, based on multi-sensor fusion locating, including GPS, IMU, vehicle speed, steering wheel, etc., the location result can be sent to the ADAS map module 430 as the current location of the target vehicle; the ADAS map module 430 sends the current lane related information within the area in which the target vehicle is currently located to the engine module 303 according to the current location. For example, the current lane related information may include road-surface-arrow-sequence information, intersection information, lane number change information, and so on. Here, the contents that the ADAS map module can provide can also include more, such as long solid lane lines, viaduct signals, main and auxiliary road signals, etc., which are not exhaustively listed in this embodiment.”,
Supplemental Note: the lane lines are compared with the ADAS map module. The lane lines are plotted on a world coordinate system thus are interpreted as the changed line)
the lane line equation being configured for matching with shape point coordinates in the local map data; and the shape point coordinates in the local map data being configured for fitting a road shape of at least one lane in the local map data (Li: Paragraph 0041: “ It should be understood that the acquiring the image of the current road within the area in which the target vehicle is currently located, and the acquiring the current lane related information within the area in which the target vehicle is currently located from the map application, both can be performed simultaneously. That is, while the target vehicle is collecting the image of the current road within the area in which the target vehicle is currently located in real time, the current lane related information within the area in which the target vehicle is currently located may be acquired from the ADAS map based on the current location of the target vehicle.”; Paragraph 0043: “The current road recognition result may include at least one of lane line detection result, diversion line recognition result, lane division line recognition result, road edge recognition result, road-surface-arrow-sequence recognition result, and lane change event.”,
Supplemental Note: the different types of lane lines can be identified on the images and by the ADAS map. The different lane lines detected are interpreted as the S pieces).
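As an illustration of the reverse perspective change and fitting reconstruction recited in claim 12, the sketch below maps lane line pixels through an image-to-ground homography and fits a polynomial whose coefficients serve as the lane line equation for matching against shape point coordinates. The homography source (camera calibration) and the polynomial degree are assumptions.

```python
import numpy as np

def lane_line_equation(pixels: np.ndarray, H: np.ndarray,
                       degree: int = 2) -> np.ndarray:
    """Reverse perspective change plus fitting reconstruction (sketch).

    pixels: (N, 2) array of (u, v) image points on one detected lane line.
    H: 3x3 image-to-ground homography from camera calibration (an
       assumption; the references do not specify the transform).
    Returns polynomial coefficients of y = f(x) in ground coordinates,
    usable for matching against map shape point coordinates.
    """
    # Reverse perspective change: project homogeneous image points
    # onto the ground plane and dehomogenize.
    pts = np.column_stack([pixels, np.ones(len(pixels))])
    ground = (H @ pts.T).T
    ground = ground[:, :2] / ground[:, 2:3]

    # Fitting reconstruction: lateral offset y as a polynomial in
    # longitudinal distance x; the coefficients are the lane line equation.
    x, y = ground[:, 0], ground[:, 1]
    return np.polyfit(x, y, degree)
```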
Regarding claim 16, Li, does not teach wherein the at least one processor is further configured to perform: determining, according to the component parameter of the photographing component, M photographing boundary lines corresponding to the photographing component, M being a positive integer; the M photographing boundary lines comprising a lower boundary line; and the lower boundary line being a boundary line that is in the M photographing boundary lines and that is closest to a road; obtaining a ground plane in which the target vehicle is located, and determining an intersection point of the ground plane and the lower boundary line as a candidate road point corresponding to the lower boundary line; determining a target tangent formed by the photographing component and a vehicle head boundary point of the target vehicle, and determining an intersection point of the ground plane and the target tangent as a candidate road point corresponding to the target tangent; and determining, from a candidate road point corresponding to the lower boundary line and a candidate road point corresponding to the target tangent, a candidate road point relatively far from the target vehicle as the road visible region corresponding to the target vehicle.
Eran teaches wherein the at least one processor is further configured to perform:
determining, according to the component parameter of the photographing component, M photographing boundary lines corresponding to the photographing component, M being a positive integer; the M photographing boundary lines comprising a lower boundary line; and the lower boundary line being a boundary line that is in the M photographing boundary lines and that is closest to a road;
obtaining a ground plane in which the target vehicle is located, and determining an intersection point of the ground plane and the lower boundary line as a candidate road point corresponding to the lower boundary line; (Eran: Paragraph 0349: “Fig. 27 is an illustration of an example image 2700 that may be captured by a host vehicle, consistent with the disclosed embodiments. For example, image 2700 may be captured from an environment of host vehicle 200 using image acquisition unit 120, as described in detail above. Image 2700 may include a road surface 2730 traveled by host vehicle 200.”,
Supplemental Note: the M integer is interpreted as a positive integer representing the boundary lines of the photographing component, which is interpreted as the image. As shown in Figure A, the image has a lower boundary which encompasses the roadway. The candidate road points are interpreted as the areas where the roadway meets the lower boundary)
determining a target tangent formed by the photographing component and a vehicle head boundary point of the target vehicle, and determining an intersection point of the ground plane and the target tangent as a candidate road point corresponding to the target tangent; and (Eran: Paragraphs 0127 – 0128: “Forward-Facing Multi-Imaging System. As discussed above, system 100 may provide drive assist functionality that uses a multi camera system. The multi-camera system may use one or more cameras facing in the forward direction of a vehicle.”; Paragraph 0350: “The vehicle navigation system may be configured to detect road topology features from within image 2700. As used herein, a road topology feature may include any natural or manmade feature included within the environment of a host vehicle. Such road topology features may indicate the configuration, arrangement, and traffic pattern of section of roadway. The road topology features may include any of the features included in image 2700, as described above.”,
Supplemental Note: the front-facing camera is able to capture images along the heading of the vehicle; thus, a target tangent is formed that intersects the lower boundary of the image. Please refer to Figure A above)
determining, from a candidate road point corresponding to the lower boundary line and a candidate road point corresponding to the target tangent, a candidate road point relatively far from the target vehicle as the road visible region corresponding to the target vehicle (Eran: Figure A,
Supplemental Note: based on Figure A, the road visible region is interpreted as the farthest roadway the image is able to detect)
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Li with the teachings of Eran with a reasonable expectation of success. Please refer to the rejection of claim 2, as both claims recite the same limitations and are therefore rejected under the same rationale.
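The flat-ground geometry underlying this mapping can be sketched as follows: the lower photographing boundary line and the target tangent over the vehicle head boundary point each intersect the ground plane at a candidate road point, and the claim takes the candidate relatively far from the vehicle. The trigonometric model, the depression-angle parameterization, and all names are assumptions used only for illustration.

```python
import math

def road_visible_point(cam_height_m: float,
                       lower_boundary_depression_deg: float,
                       hood_forward_m: float,
                       hood_height_m: float) -> float:
    """Nearest visible road point under a flat-ground model (sketch).

    cam_height_m: camera height above the ground plane.
    lower_boundary_depression_deg: angle below horizontal of the lower
        photographing boundary line.
    hood_forward_m, hood_height_m: vehicle head boundary point, measured
        forward of the camera and above the ground (below the camera).
    """
    # Candidate 1: intersection of the lower boundary line with the
    # ground plane.
    d_lower = cam_height_m / math.tan(
        math.radians(lower_boundary_depression_deg))

    # Candidate 2: the target tangent from the camera grazing the vehicle
    # head boundary point, continued until it reaches the ground plane.
    d_tangent = cam_height_m * hood_forward_m / (cam_height_m
                                                 - hood_height_m)

    # The farther candidate road point bounds the road visible region.
    return max(d_lower, d_tangent)

# Example: 1.4 m camera, 10-degree lower boundary, hood edge 2 m ahead
# and 1.0 m high -> candidates ~7.94 m and 7.0 m, so ~7.94 m.
print(road_visible_point(1.4, 10.0, 2.0, 1.0))
```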
Regarding claim 17, Li, as modified, teaches wherein the component parameter of the photographing component comprises a vertical visible angle and a component location parameter; (Li: Paragraph 0029: “S101, acquiring an image of a current road within an area in which a target vehicle is currently located,”: Paragraph 0034: “Wherein, the image collection device may be a camera; the installation location of the image collection apparatus may be in front of the target vehicle.”)
the vertical visible angle being a photographing angle of the photographing component in a direction perpendicular to the ground plane; (Li: Paragraph 0035: “For example, in a case that the camera is set horizontally, that is, the camera horizontally shoots the road surface in front of the target vehicle, the collected image covers a larger range of the road surface, and correspondingly, the range of the area in which the target vehicle is currently located can also be larger.” Paragraph 0144: “ Wherein, the image collection module, for example, may be a camera, especially a camera disposed in front of the target vehicle; ”,
Supplemental Note: the camera can be placed horizontally in front of the vehicle, thus perpendicular to the ground plane)
the component location parameter referring to an installation location and an installation direction of the photographing component installed on the target vehicle; (Li: Paragraph 0034: “Wherein, the image collection device may be a camera; the installation location of the image collection apparatus may be in front of the target vehicle.”)
… and the at least one processor is further configured to perform:
determining a primary optical axis of the photographing component according to the installation location and the installation direction in the component location parameter; (Li: Paragraph 0053: “It should be pointed out that, in addition to the image indicating the diversion line in the road, the diversion line recognition result can also include the relative location of the diversion line in the road in the image, or the absolute location in the world coordinate system. The method for obtaining the absolute location in the world coordinate system is not limited in this embodiment. Here, the diversion line recognition result may be obtained at local, or may be sent by the cloud server. Local refers to the aforementioned apparatus with data processing function.”,
Supplemental Note: the world coordinate system is used for detecting the location of different items found in the image).
In sum, Li teaches wherein the component parameter of the photographing component comprises a vertical visible angle and a component location parameter; the vertical visible angle being a photographing angle of the photographing component in a direction perpendicular to the ground plane; the component location parameter referring to an installation location and an installation direction of the photographing component installed on the target vehicle; and the at least one processor is further configured to perform: determining a primary optical axis of the photographing component according to the installation location and the installation direction in the component location parameter. Li however does not teach the M photographing boundary lines further comprising an upper boundary line; evenly dividing the vertical visible angle to obtain an average vertical visible angle of the photographing component; and obtaining, along the primary optical axis, the lower boundary line and the upper boundary line that form the average vertical visible angle with the primary optical axis, wherein the primary optical axis, the upper boundary line, and the lower boundary line are located on a same plane, and a plane on which the primary optical axis, the upper boundary line, and the lower boundary line are located is perpendicular to the ground plane.
Eran teaches the M photographing boundary lines further comprising an upper boundary line; (Eran: as shown in Figure A above, the upper boundary is the upper limit of the image)
… evenly dividing the vertical visible angle to obtain an average vertical visible angle of the photographing component; and
obtaining, along the primary optical axis, the lower boundary line and the upper boundary line that form the average vertical visible angle with the primary optical axis, wherein the primary optical axis, the upper boundary line, and the lower boundary line are located on a same plane, and a plane on which the primary optical axis, the upper boundary line, and the lower boundary line are located is perpendicular to the ground plane (Eran: Please see Figure A above. The vertical visible angle can be evenly divided by 1, which is 90 degrees; thus, the image shown in Figure A incorporates the upper and lower boundaries within this image).
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Li with the teachings of Eran with a reasonable expectation of success. Please refer to the rejection of claim 3, as both claims recite the same limitations and are therefore rejected under the same rationale.
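A short sketch of the even division recited in claim 17: splitting the vertical visible angle evenly about the primary optical axis gives the average vertical visible angle on each side, and the upper and lower boundary lines lie in the same vertical plane as the axis. Parameter names are illustrative, with angles measured from horizontal.

```python
def boundary_line_angles(optical_axis_pitch_deg: float,
                         vertical_visible_angle_deg: float):
    """Upper/lower photographing boundary lines (angles from horizontal).

    Evenly dividing the vertical visible angle yields the average
    vertical visible angle (half the full angle) on each side of the
    primary optical axis; both boundary lines lie in the vertical plane
    containing the axis, perpendicular to the ground plane.
    """
    half = vertical_visible_angle_deg / 2.0    # average vertical angle
    return (optical_axis_pitch_deg + half,     # upper boundary line
            optical_axis_pitch_deg - half)     # lower boundary line

# Example: axis pitched 2 degrees down, 40-degree vertical visible angle.
print(boundary_line_angles(-2.0, 40.0))  # (18.0, -22.0)
```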
Regarding claim 19, Li teaches wherein the vehicle location status information further comprises a vehicle driving state of the target vehicle at the vehicle location point; and (Li: Paragraph 0148: “The locating module 420 may acquire the current location of the target vehicle. For example, based on multi-sensor fusion locating, including GPS, IMU, vehicle speed, steering wheel, etc., the location result can be sent to the ADAS map module 430 as the current location of the target vehicle;”,
Supplemental Note: based on the vehicle speed, it can be determined whether the vehicle is moving or stationary, thus interpreted as the vehicle location status information)
the at least one processor is further configured to perform:
performing first operation processing on the circular error probable and the road visible point distance to obtain the region lower limit corresponding to the target vehicle; and (Li: Paragraph 0040: “the current lane related information obtained from the ADAS map may be used as the current lane related information in the initial area range, and the current lane related information in the initial area range is further combined with the current location of the target vehicle, to determine the current lane related information of the target vehicle within the area in which the target vehicle is currently located. Alternatively, it may not be processed, and all current lane related information within the initial area range is used for subsequent processing, but only the current lane related information within the area in which the target vehicle is currently located may be used in processing.”,
Supplemental Note: the initial range is able to identify the upper and lower limits of where the vehicle is).
In sum, Li teaches wherein the vehicle location status information further comprises a vehicle driving state of the target vehicle at the vehicle location point; and the at least one processor is further configured to perform: performing first operation processing on the circular error probable and the road visible point distance to obtain the region lower limit corresponding to the target vehicle. Li however does not teach extending, by using the vehicle driving state, the road visible point distance along the driving direction to obtain an extended visible point distance, and performing second operation processing on the extended visible point distance and the circular error probable to obtain the region upper limit corresponding to the target vehicle.
Eran teaches extending, by using the vehicle driving state, the road visible point distance along the driving direction to obtain an extended visible point distance, and performing second operation processing on the extended visible point distance and the circular error probable to obtain the region upper limit corresponding to the target vehicle (Eran: Paragraph 0082: “Position sensor 130 may include any type of device suitable for determining a location associated with at least one component of system 100. In some embodiments, position sensor 130 may include a GPS receiver. Such receivers can determine a user position and velocity by processing signals broadcasted by global positioning system satellites. Position information from position sensor 130 may be made available to applications processor 180 and/or image processor 190.”; Paragraph 0166: “In another embodiment, processing unit 110 may compare the leading vehicle’s instantaneous position with the look-ahead point (associated with vehicle 200) over a specific period of time (e.g., 0.5 to 1.5 seconds). If the distance between the leading vehicle’s instantaneous position and the look-ahead point varies during the specific period of time, and the cumulative sum of variation exceeds a predetermined threshold (for example, 0.3 to 0.4 meters on a straight road, 0.7 to 0.8 meters on a moderately curvy road, and 1.3 to 1.7 meters on a road with sharp curves), processing unit 110 may determine that the leading vehicle is likely changing lanes.”).
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Li with the teachings of Eran with a reasonable expectation of success. Please refer to the rejection of claim 5, as both claims recite the same limitations and are therefore rejected under the same rationale.
Claim(s) 13 is rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (US 20220019817 A1) in view of Eran et al. (WO 2020240274 A1) as applied to claim 9 above, and further in view of Huang et al. (CN 115451992 B).
Regarding claim 13, Li, as modified, teaches wherein the S pieces of divided map data comprise divided map data Li, and i is a positive integer less than or equal to S; and (Li: Paragraph 0148: “For example, based on multi-sensor fusion locating, including GPS, IMU, vehicle speed, steering wheel, etc., the location result can be sent to the ADAS map module 430 as the current location of the target vehicle; the ADAS map module 430 sends the current lane related information within the area in which the target vehicle is currently located to the engine module 303 according to the current location. For example, the current lane related information may include road-surface-arrow-sequence information, intersection information, lane number change information, and so on. Here, the contents that the ADAS map module can provide can also include more, such as long solid lane lines, viaduct signals, main and auxiliary road signals, etc., which are not exhaustively listed in this embodiment.”: Paragraph 0136: “determining a distance between a target lane line and a road edge based on the target lane line and the road edge recognition result in a case that the current road recognition result contains a road edge recognition result; and determining the lane in which the target vehicle is currently located based on the distance between the target lane line and the road edge.”,
Supplemental Note: the fusion method fuses the sensor data with the ADAS map data to locate the vehicle. This can be used to evaluate a target lane line, which is interpreted as the divided map data Li)
determining the candidate lane corresponding to each piece of divided map data and determining the target lane to which the target vehicle belongs comprises: (Li: Paragraph 0073: “The target lane line may be a lane line(s) of the lane in which the target vehicle is currently located, or referred to as a lane boundary line. In the lane line detection results detected from the image of the current road, two lane lines closest to the target vehicle can be determined as the target lane line.”)
determining a maximum lane probability in a lane probability respectively corresponding to at least one lane of the divided map data Li as a candidate probability corresponding to the divided map data Li, and determining a lane with a maximum lane probability in the at least one lane of the divided map data Li as a candidate lane corresponding to the divided map data Li; (Li: Paragraphs 0131 – 0134: “a synchronous fusion module 304, configured for determining a lane in which the target vehicle is currently located based on at least one of the current road recognition result and the current lane related information, and taking the lane in which the target vehicle is currently located as the current lane location result for the target vehicle. In an embodiment, the synchronous fusion module 304 is configured for implementing at least one of: determining the lane in which the target vehicle is currently located based on a lane line detection result and a diversion line recognition result in a case that the current road recognition result contains the diversion line recognition result; and determining the lane in which the target vehicle is currently located based on the lane line detection result and an opposing-lanes-dividing line recognition result in a case that the current road recognition result contains the opposing-lanes-dividing line recognition result.”)
obtaining a longitudinal average distance between the target vehicle and each of the S pieces of divided map data, and determining, according to a nearest road visible point and S longitudinal average distances, region weights respectively corresponding to the S pieces of divided map data; (Li: Paragraph 0136: “determining a distance between a target lane line and a road edge based on the target lane line and the road edge recognition result in a case that the current road recognition result contains a road edge recognition result; and determining the lane in which the target vehicle is currently located based on the distance between the target lane line and the road edge.”,
Supplemental Note: the distance to the edge identified in the image and ADAS map can be used to determine a target lane).
In sum, Li teaches wherein the S pieces of divided map data comprise divided map data Li, and i is a positive integer less than or equal to S; and determining the candidate lane corresponding to each piece of divided map data and determining the target lane to which the target vehicle belongs comprises: determining a maximum lane probability in a lane probability respectively corresponding to at least one lane of the divided map data Li as a candidate probability corresponding to the divided map data Li, and determining a lane with a maximum lane probability in the at least one lane of the divided map data Li as a candidate lane corresponding to the divided map data Li; obtaining a longitudinal average distance between the target vehicle and each of the S pieces of divided map data, and determining, according to a nearest road visible point and S longitudinal average distances, region weights respectively corresponding to the S pieces of divided map data. Li however does not teach multiplying a candidate probability by a region weight that belongs to the same divided map data to obtain S trusted weights respectively corresponding to the divided map data; and determining a candidate lane corresponding to a maximum trusted weight of the S trusted weights as the target lane to which the target vehicle belongs.
Huang teaches multiplying a candidate probability by a region weight that belongs to the same divided map data to obtain S trusted weights respectively corresponding to the divided map data; and
determining a candidate lane corresponding to a maximum trusted weight of the S trusted weights as the target lane to which the target vehicle belongs (Huang: Abstract: “The embodiment of the invention claims a lane locating method, a system, an electronic device and a storage medium, wherein the terminal self-learning analysis is performed based on the obtained vehicle line data and vehicle track data to obtain a first self-learning analysis result of each data selection point in the vehicle track data;”; Claim 1: “A lane positioning method, wherein the method comprises: collecting the vehicle line data and the vehicle track data of each vehicle, wherein the vehicle line data comprises the vehicle line type and the vehicle line colour; performing the vehicle terminal self-learning analysis based on the obtained vehicle line data and the vehicle track data, obtaining the first self-learning analysis result of each data selection point in the vehicle track data; connecting the first self-learning analysis result of the data selection point in series to obtain the second self-learning analysis result of the corresponding length line segment; obtaining a first target vehicle line data based on the second self-learning analysis result; obtaining a first lane locating data according to the first target vehicle line data and the current vehicle track data; obtaining the vehicle line data of the current position of the vehicle; comparing whether the vehicle line data of the current position of the vehicle is consistent with the first target vehicle line data; if the two are consistent, taking the first lane locating data as the output result, if the two are not consistent, correcting the vehicle line data of the current position of the vehicle based on the second self-learning analysis result, and taking the first correction result as the output result; the vehicle terminal self-learning analysis based on the obtained vehicle line data and vehicle track data, obtaining the first self-learning analysis result of each data selection point in the vehicle track, specifically is: counting the first occurrence times of each vehicle line data of each data selection point in the vehicle track data; when judging that the first occurrence times of each vehicle line data satisfy the first threshold value, counting the first weight of each vehicle line data of each data selection point, wherein the first weight is the occurrence probability of each vehicle line data; judging whether the first weight meets the second threshold; if so, the data selection point is successfully learned, the vehicle line data corresponding to the data selection point is used as the first self-learning analysis result; if not, the data selection point fails to learn.”,
Supplemental Note: based on the image, the learning model is able to assign weights to the lane line data to determine the accuracy of the lane line and thereby locate the lane)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Li with the teachings of Huang with a reasonable expectation of success. One of ordinary skill in the art would find the combination of Huang’s self-learning model, which assigns different weights to lane locating data to properly identify a target vehicle’s location, with the lane locating AI models of Li to be the use of a known technique to improve similar devices in the same way. Huang teaches that comparing lane data with map data can lead to errors in determining the correct lane location identified in the image, because map data is not updated all at once. The self-learning model is able to utilize historical data and crowdsourced map data to further analyze the lane location in the image. This is an improvement to the vehicle system of Li, as Li teaches performing current road recognition based on the image by an apparatus with an AI model. The self-learning model of Huang, when combined with Li, would allow the AI model of Li to better determine the locations of the lane lines without the need to compare them with the map. The comparison with the ADAS map may still be performed as a double check; however, the combination mitigates situations in which the map data has not been properly updated and is therefore not useful for comparison.
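For illustration of the limitation that Huang is cited against, the following is a minimal sketch in Python of the trusted-weight selection, assuming the candidate probabilities, region weights, and candidate lanes for the S pieces of divided map data are held in parallel lists; the function name, data layout, and example values are illustrative assumptions only and do not represent the applicant’s or Huang’s actual implementation.

    # Illustrative sketch only: data layout and values are assumed for
    # exposition, not taken from the claims or the cited references.
    def select_target_lane(candidate_probs, region_weights, candidate_lanes):
        # candidate_probs[i], region_weights[i], and candidate_lanes[i] all
        # correspond to the same piece of divided map data L_i (i = 1..S).
        trusted_weights = [p * w for p, w in zip(candidate_probs, region_weights)]
        # The candidate lane with the maximum trusted weight is taken as
        # the target lane to which the target vehicle belongs.
        best_i = max(range(len(trusted_weights)), key=trusted_weights.__getitem__)
        return candidate_lanes[best_i]

    # Example with S = 3: trusted weights are 0.35, 0.27, 0.48 -> "lane 2"
    print(select_target_lane([0.7, 0.9, 0.6], [0.5, 0.3, 0.8],
                             ["lane 2", "lane 3", "lane 2"]))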
Claim(s) 14 is rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (US 20220019817 A1) in view of Eran et al. (WO 2020240274 A1) and Huang et al. (CN 115451992 B) as applied to claim 13 above, and further in view of Hirota et al. (US 20240375654 A1).
Regarding claim 14, Li, as modified, teaches obtaining the longitudinal average distance between the target vehicle and each of the S pieces of divided map data comprises: (Li: Paragraph 0136: “determining a distance between a target lane line and a road edge based on the target lane line and the road edge recognition result in a case that the current road recognition result contains a road edge recognition result; and determining the lane in which the target vehicle is currently located based on the distance between the target lane line and the road edge.”,
Supplemental Note: the distance to the edge identified in the image and ADAS map can be used to determine a target lane).
In sum, Li teaches the step of obtaining the longitudinal average distance between the target vehicle and each of the S pieces of divided map data. Li however does not teach wherein the divided map data Li comprises a region upper boundary and a region lower boundary; in the driving direction, a road location indicated by the region upper boundary is in front of a road location indicated by the region lower boundary.
Eran teaches wherein the divided map data Li comprises a region upper boundary and a region lower boundary; in the driving direction, a road location indicated by the region upper boundary is in front of a road location indicated by the region lower boundary; and (Eran: Paragraph 0349: “Fig. 27 is an illustration of an example image 2700 that may be captured by a host vehicle, consistent with the disclosed embodiments. For example, image 2700 may be captured from an environment of host vehicle 200 using image acquisition unit 120, as described in detail above. Image 2700 may include a road surface 2730 traveled by host vehicle 200.”,
Supplemental Note: as seen above in Figure A, the upper and lower limits of the image are shown while the lane lines are shown by reference markers 2734 and 2734)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Li with the teachings of Eran with a reasonable expectation of success. Please refer to the rejection of claim 2, as both recite the same functional language and are therefore rejected under the same rationale. Li in view of Eran however still does not teach determining an upper boundary distance between the target vehicle and the road location indicated by the region upper boundary of the divided map data Li, and determining a lower boundary distance between the target vehicle and the road location indicated by the region lower boundary of the divided map data Li; and determining an average value of the upper boundary distance corresponding to the divided map data Li and the lower boundary distance corresponding to the divided map data Li as a longitudinal average distance between the target vehicle and the divided map data Li.
Hirota teaches determining an upper boundary distance between the target vehicle and the road location indicated by the region upper boundary of the divided map data Li, and determining a lower boundary distance between the target vehicle and the road location indicated by the region lower boundary of the divided map data Li; and
determining an average value of the upper boundary distance corresponding to the divided map data Li and the lower boundary distance corresponding to the divided map data Li as a longitudinal average distance between the target vehicle and the divided map data Li (Hirota: Paragraph 0005: “ When lane markings are detected by image recognition by the in-vehicle camera as does the vehicle control device described in Patent Literature 1, the accuracy of recognition of white lines may decrease at a distant location or a curve.”; Paragraph 0029: “The control part 20 identifies, from a camera image, a rectangular vehicle region including a vehicle ahead, i.e., a bounding box. Specifically, images that are shot continuously with the camera 40 are obtained and are subjected to lens distortion correction, etc. In addition, the control part 20 determines whether features of a vehicle (e.g., a truck, a passenger car, or a motorcycle) are included in a camera image by performing an image recognition process that uses, for example, You Only Look Once (YOLO) or pattern matching, and detects an image of a vehicle located ahead of the host vehicle. ”; Paragraph 0031: “A process for identifying a driving lane will be described. The size and location of a bounding box B are represented by, for example, the coordinates of an upper left vertex and the coordinates of a lower right vertex of the bounding box B. The control part 20 obtains a height h (the number of pixels) of the bounding box B and representative coordinates Bo (x, y) of the bounding box B from the coordinates of the two diagonal vertices of the bounding box. The representative coordinates Bo are, for example, the coordinates of the center of the bounding box B (the midpoint in a width direction and in a height direction). The control part 20 identifies a relative orientation of a vehicle ahead as viewed from the host vehicle, based on the location of the representative coordinates Bo of the bounding box. ”; Paragraph 0032: “Specifically, each set of coordinates in the image I is associated with a relative orientation of an object shown at the set of coordinates with respect to the host vehicle, and information indicating the correspondence is recorded in the recording medium 30. Based on the correspondence, the control part 20 obtains a relative orientation of a vehicle ahead shown at the representative coordinates Bo.”,
Supplemental Note: a bounding box can be generated from the images, in which a set of coordinates pertaining to the box and the vehicle environment is evaluated for distances)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Li with the teachings of Hirota with a reasonable expectation of success. One of ordinary skill in the art would find the lane determination method of Hirota to be a simple substitution for the lane determination system of Li. Both Li and Hirota use images captured from the vehicle camera to acquire lane data and then verify this data against an acquired or stored map. Hirota differs in that it determines the lane for a preceding vehicle, or a vehicle in front, by the use of these sensors and map data, whereas Li utilizes its system to determine the lane for the host vehicle. Both obtain the predictable result of determining the location of a specified vehicle by the use of the same sensors and data; thus, the modification would merely be a simple substitution to one of ordinary skill in the art.
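For illustration of the limitation that Hirota is cited against, the following is a minimal sketch in Python of the longitudinal average distance computation, assuming the upper and lower boundary distances are already available as scalar values; the function name and example values are illustrative assumptions only and do not represent the applicant’s or Hirota’s actual method.

    # Illustrative sketch only: inputs are assumed for exposition.
    def longitudinal_average_distance(upper_boundary_distance_m: float,
                                      lower_boundary_distance_m: float) -> float:
        # The longitudinal average distance between the target vehicle and
        # divided map data L_i is the average of the distances to the region
        # upper boundary and the region lower boundary of L_i.
        return (upper_boundary_distance_m + lower_boundary_distance_m) / 2.0

    # Example: upper boundary 120 m ahead, lower boundary 40 m ahead -> 80.0 m
    print(longitudinal_average_distance(120.0, 40.0))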
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIVAM SHARMA whose telephone number is (703)756-1726. The examiner can normally be reached Monday-Friday 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Erin Bishop can be reached at 571-270-3713. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHIVAM SHARMA/ Examiner, Art Unit 3665
/Erin D Bishop/ Supervisory Patent Examiner, Art Unit 3665