DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This is a Final Action on the Merits. Claims 1-5 and 9-20 are currently pending and are addressed below.
Response to Amendments
The amendment filed on August 26, 2025 has been considered and entered. Claim 1 has been amended. Claims 6-8 have been cancelled.
Response to Arguments
The previous rejection of claims 1-15 and 17-20 under 35 U.S.C. 101 has been overcome by the applicant’s amendments.
The Applicant argues (Amend. 9-14) that Bo (CN 112212872 B) (“Bo”) (Translation Attached) in view of Son (KR 20180062504 A) (“Son”) (Translation Attached) in further view of Fan (CN 108959321 B) (“Fan”) (Translation Attached) fail to teach the limitations of amended independent claim 1. The examiner respectfully disagrees.
The applicant argues that the steps of claim 1 (steps 1-6, as numbered by the applicant) are not taught by Bo and Son individually; however, as stated in the rejection, Bo and Son are relied upon in combination to disclose the steps of claim 1, whereas the applicant addresses each reference individually. Such arguments are not persuasive because one cannot show nonobviousness “by attacking references individually” where the rejections are based on combinations of references. In re Merck & Co., Inc., 800 F.2d 1091, 1097 (Fed. Cir. 1986) (citing In re Keller, 642 F.2d 413, 425 (CCPA 1981)).
The applicant argues that Fan does not teach “Step 7 of claim 1”; however, step 7 of claim 1, which was previously presented as dependent claims 7-8, is taught by the combination of Bo in view of Son in further view of Fan, not by Fan alone. Such arguments are not persuasive because one cannot show nonobviousness “by attacking references individually” where the rejections are based on combinations of references. In re Merck & Co., Inc., 800 F.2d 1091, 1097 (Fed. Cir. 1986) (citing In re Keller, 642 F.2d 413, 425 (CCPA 1981)).
Claim Objections
Claims 9 and 16 are objected to because of the following informalities:
The status identifier of claim 9 reads “Previously Presented”; this is incorrect because the claim contains amendments.
The status identifier of claim 16 reads “Currently Amended”; this is incorrect because the claim contains no present amendments.
Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:
“a first acquisition module for acquiring” in at least claim 16
“a target top view acquisition module for calculating” in at least claim 16
“a partitioned image acquisition module for inputting” in at least claim 16
“a recognition module for scanning” in at least claim 16
“a path trajectory generation module for generating” in at least claim 16
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
The published specification provides corresponding structure for the claim limitations in paragraph 353.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 16 is rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.
In particular, claim 16 is directed to a judicial exception to patentability (i.e., a law of nature, a natural phenomenon, or an abstract idea) and does not include an inventive concept that is something “significantly more” than the judicial exception under the January 2019 Patent Eligibility Guidance (2019 PEG) analysis that follows.
Under the 2019 PEG step 1 analysis, it must first be determined whether the claims are directed to one of the four statutory categories of invention (i.e., process, machine, manufacture, or composition of matter). Applying step 1 of the analysis for patentable subject matter to claim 16, which recites a set of modules together with sensors, a processor, and a memory, it is determined that the claim is directed to the statutory category of a machine. Therefore, we proceed to step 2A, Prong 1.
Revised Guidance Step 2A – Prong 1
Under the 2019 PEG step 2A, Prong 1 analysis, it must be determined whether the claims recite an abstract idea that falls within one or more designated categories of patent ineligible subject matter (i.e., organizing human activity, mathematical concepts, and mental processes) that amount to a judicial exception to patentability.
Here, with respect to independent claim 16, the claim recites the abstract idea of generating a path for a vehicle based on travel information, including mentally determining “a path trajectory generation module for generating a path trajectory corresponding to the vehicle travel state information based on the travelable region, the path trajectory being one of the preset driving paths.” This limitation falls within one or more of the three enumerated 2019 PEG categories of patent-ineligible subject matter, specifically, a mental process, since each of the above steps could alternatively be performed in the human mind or with the aid of pen and paper. This conclusion follows from CyberSource Corp. v. Retail Decisions, Inc., where our reviewing court held that section 101 did not embrace a process defined simply as using a computer to perform a series of mental steps that people, aware of each step, can and regularly do perform in their heads. 654 F.3d 1366, 1373 (Fed. Cir. 2011); see also In re Grams, 888 F.2d 835, 840–41 (Fed. Cir. 1989); In re Meyer, 688 F.2d 789, 794–95 (CCPA 1982); Elec. Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354 (Fed. Cir. 2016) (“we have treated analyzing information by steps people go through in their minds, or by mathematical algorithms, without more, as essentially mental processes within the abstract-idea category”).
Additionally, mental processes remain unpatentable even when automated to reduce the burden of what once could have been done with pen and paper. See CyberSource, 654 F.3d at 1375 (“That purely mental processes can be unpatentable, even when performed by a computer, was precisely the holding of the Supreme Court in Gottschalk v. Benson.”). These limitations, as drafted, describe a simple process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind. For example, the claim limitation encompasses mentally generating a path for a vehicle based on the travel information provided by the car’s sensors while traveling, or alternatively, mentally generating a path for a vehicle based on travel information drawn from a human’s observations.
That is, a human could, mentally or with the aid of pen and paper, generate a path for a vehicle based on travel information.
In addition, the limitation “a target top view acquisition module for calculating a target top view corresponding to the initial image via a nonlinear difference correction algorithm according to the initial image” recites the abstract idea of a mathematical concept in addition to being a mental process, since the limitation invokes a “calculation” of the target top view. See October 2019 Update: Subject Matter Eligibility, pp. 3-4, “Mathematical Relationships” and “Mathematical Calculations” (“A mathematical relationship may be expressed in words or using mathematical symbols . . . [t]here is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word ‘calculating’ in order to be considered a mathematical calculation. For example, a step of ‘determining’ a variable or number using mathematical methods or ‘performing’ a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation.”), citing Diamond v. Diehr, Gottschalk v. Benson, Parker v. Flook, and Burnett v. Panasonic Corp. (“using a formula to convert geospatial coordinates into natural numbers”).
Revised Guidance Step 2A – Prong 2
Under the 2019 PEG step 2A, Prong 2 analysis, the identified abstract idea to which the claim is directed does not include limitations that integrate the abstract idea into a practical application, since the additional elements of vehicle sensors, a processor, and a memory are merely generic components used as a tool (“apply it”) to implement the abstract idea. (See, e.g., MPEP § 2106.05(f)). See Alice, 573 U.S. at 223 (“[T]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention.”).
In addition, the limitation “a first acquisition module for acquiring vehicle travel state information and an initial image of a surrounding environment of a preset driving path corresponding to the vehicle travel state information in real time when the vehicle is traveling on the preset driving path; a target top view acquisition module for calculating a target top view corresponding to the initial image via a nonlinear difference correction algorithm according to the initial image; a partitioned image acquisition module for inputting the target top view into a preset deep learning model, and classifying pixel points of the target top view input into the preset deep learning model to obtain a partitioned image, the partitioned image comprising a travelable region image and a non-travelable region image; a recognition module for scanning the partitioned image to recognize a travelable region of the vehicle” constitutes insignificant pre-solution activity that merely gathers data and, therefore, does not integrate the exception into a practical application. See In re Bilski, 545 F.3d 943, 963 (Fed. Cir. 2008) (en banc), aff’d on other grounds, 561 U.S. 593 (2010) (characterizing data gathering steps as insignificant extra-solution activity); see also CyberSource, 654 F.3d at 1371–72 (noting that even if some physical steps are required to obtain information from a database (e.g., entering a query via a keyboard, clicking a mouse), such data-gathering steps cannot alone confer patentability); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering). Accord Guidance, 84 Fed. Reg. at 55 (citing MPEP § 2106.05(g)).
In addition, merely “[u]sing a computer to accelerate an ineligible mental process does not make that process patent-eligible.” Bancorp Servs., L.L.C. v. Sun Life Assur. Co. of Canada (U.S.), 687 F.3d 1266, 1279 (Fed. Cir. 2012); see also CLS Bank Int’l v. Alice Corp. Pty. Ltd., 717 F.3d 1269, 1286 (Fed. Cir. 2013) (en banc) (“simply appending generic computer functionality to lend speed or efficiency to the performance of an otherwise abstract concept does not meaningfully limit claim scope for purposes of patent eligibility.”), aff’d, 573 U.S. 208 (2014). Accordingly, the additional element of a processor does not transform the abstract idea into a practical application of the abstract idea.
Revised Guidance Step 2B
Under the 2019 PEG step 2B analysis, the additional elements are evaluated to determine whether they amount to something “significantly more” than the recited abstract idea (i.e., an inventive concept). Here, the additional elements, such as a processor, a sensor, and a memory, do not amount to an inventive concept since, as stated above in the step 2A, Prong 2 analysis, the claims simply use the additional elements as a tool to carry out the abstract idea (i.e., “apply it”) on a computer or computing device and/or via software programming. (See, e.g., MPEP § 2106.05(f)). The additional elements are specified at a high level of generality to simply implement the abstract idea and are not themselves being technologically improved. (See, e.g., MPEP § 2106.05 I.A.). See Alice, 573 U.S. at 223 (“[T]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention.”). Thus, these elements, taken individually or together, do not amount to “significantly more” than the abstract idea itself.
The elements of the instant claimed invention, when taken in combination, do not offer substantially more than the sum of the functions of the elements when each is taken alone. The claims as a whole do not amount to significantly more than the abstract idea itself because the claims do not effect an improvement to another technology or technical field; the claims do not amount to an improvement to the functioning of an electronic device itself which implements the abstract idea (e.g., the general purpose computer and/or the computer system which implements the process are not made more efficient or technologically improved); the claims do not perform a transformation or reduction of a particular article to a different state or thing (i.e., the claims do not use the abstract idea in the claimed process to bring about a physical change; see, e.g., Diamond v. Diehr, 450 U.S. 175 (1981), where a physical change, and thus patentability, was imparted by the claimed process; contrast Parker v. Flook, 437 U.S. 584 (1978), where a physical change, and thus patentability, was not imparted by the claimed process); and the claims do not move beyond a general link of the use of the abstract idea to a particular technological environment.
Accordingly, claim 16 is rejected under 35 U.S.C. 101 as being drawn to an abstract idea without significantly more, and thus is ineligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 9-11, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bo (CN 112212872 B) (“Bo”) (Translation Attached) in view of Son (KR 20180062504 A) (“Son”) (Translation Attached) in further view of Fan (CN 108959321 B) (“Fan”) (Translation Attached).
With respect to claim 1, Bo teaches a path construction method comprising:
acquiring vehicle travel state information (See at least Bo Paragraph 6 “(1) Obtain a multi-line lidar top-view image of the road environment around the vehicle, a local navigation map, and historical vehicle movement information to form a data set;” | Paragraph 15 “The vehicle historical motion information in step (1) refers to the steering wheel angle and vehicle speed information at the past and current moments.”) and
an initial image of a surrounding environment of a preset driving path corresponding to the vehicle travel state information in real time when the vehicle is traveling on the preset driving path (See at least Bo Paragraph 6 “(1) Obtain a multi-line lidar top-view image of the road environment around the vehicle, a local navigation map, and historical vehicle movement information to form a data set;” | Paragraph 17 “(3a) Obtaining point cloud data of the vehicle's surrounding environment through a multi-line laser radar;” | Paragraph 21 “(4a) The user provides a starting point and an end point in advance and specifies a driving path. The gray line in the map represents the planned path, forming a global navigation map inside the end-to-end autonomous driving controller.” | Paragraph 23 “(4c) The vehicle’s real-time positioning information is matched with the global navigation map to obtain the vehicle’s location in the navigation map. The vehicle’s location is represented by a white dot in the map. A local navigation map with a pixel size of 50*50 is intercepted with the vehicle’s location as the center to obtain a local path planning map, which is fed into an end-to-end neural network to guide the vehicle to travel along the planned path.” | Paragraph 27 “Among them, and represent the set consisting of the steering angle and vehicle speed at time t-N+1, t-N+2, …, t-1, t respectively;”);
calculating a target top view corresponding to the initial image; inputting the target top view into a preset deep learning model, and classifying pixel points of the target top view input into the preset deep learning model to obtain a partitioned image, the partitioned image comprising a travelable region image and a non-travelable region image; scanning the partitioned image to recognize a travelable region of the vehicle (See at least Bo Paragraph 7 “(2) Construct an end-to-end neural network model including convolutional layers, fully connected layers, unfolding layers, and long short-term memory network layers (LSTM), and train it through the data set with the goal of minimizing the root mean square error (RMSE). This forms a mapping from the lidar top-view image, local navigation map, and vehicle historical motion information to the vehicle's expected steering wheel angle and speed at the next moment, completing the training of the end-to-end neural network model.” | Paragraph 16-19 “The method for obtaining the multi-line laser radar top view image in step (1) is as follows: (3a) Obtaining point cloud data of the vehicle's surrounding environment through a multi-line laser radar; (3b) According to the height information in the point cloud data, the obstacle points and ground points are identified, and the ground points are removed. The remaining point cloud data are projected into the specified image to achieve ground segmentation; (3c) Through the region generation method, the area where the obstacle point is located is generated as a non-drivable area and marked as a cross grid, and the non-obstacle area is generated as a drivable area and marked as white. At this time, the vehicle's surrounding environment can be divided into a drivable area and a non-drivable area, and sent to the end-to-end neural network at a rate of 10 frames/s as an input information for end-to-end autonomous driving.”); and
generating a path trajectory corresponding to the vehicle travel state information based on the travelable region, the path trajectory being one of the preset driving paths (See at least Bo Paragraph 21 “(4a) The user provides a starting point and an end point in advance and specifies a driving path. The gray line in the map represents the planned path, forming a global navigation map inside the end-to-end autonomous driving controller.” | Paragraph 23 “(4c) The vehicle’s real-time positioning information is matched with the global navigation map to obtain the vehicle’s location in the navigation map. The vehicle’s location is represented by a white dot in the map. A local navigation map with a pixel size of 50*50 is intercepted with the vehicle’s location as the center to obtain a local path planning map, which is fed into an end-to-end neural network to guide the vehicle to travel along the planned path.” | Paragraph 27 “Among them, and represent the set consisting of the steering angle and vehicle speed at time t-N+1, t-N+2, …, t-1, t respectively;”)
wherein the generating a path trajectory corresponding to the vehicle travel state information based on the travelable region further comprises:
when the vehicle is traveling on the preset driving path again, acquiring vehicle travel state information (See at least Bo Paragraph 6 “(1) Obtain a multi-line lidar top-view image of the road environment around the vehicle, a local navigation map, and historical vehicle movement information to form a data set;” | Paragraph 15 “The vehicle historical motion information in step (1) refers to the steering wheel angle and vehicle speed information at the past and current moments.”) and an initial image of a surrounding environment of a preset driving path corresponding to the vehicle travel state information in real time (See at least Bo Paragraph 6 “(1) Obtain a multi-line lidar top-view image of the road environment around the vehicle, a local navigation map, and historical vehicle movement information to form a data set;” | Paragraph 17 “(3a) Obtaining point cloud data of the vehicle's surrounding environment through a multi-line laser radar;” | Paragraph 21 “(4a) The user provides a starting point and an end point in advance and specifies a driving path. The gray line in the map represents the planned path, forming a global navigation map inside the end-to-end autonomous driving controller.” | Paragraph 23 “(4c) The vehicle’s real-time positioning information is matched with the global navigation map to obtain the vehicle’s location in the navigation map. The vehicle’s location is represented by a white dot in the map. A local navigation map with a pixel size of 50*50 is intercepted with the vehicle’s location as the center to obtain a local path planning map, which is fed into an end-to-end neural network to guide the vehicle to travel along the planned path.” | Paragraph 27 “Among them, and represent the set consisting of the steering angle and vehicle speed at time t-N+1, t-N+2, …, t-1, t respectively;”);
inputting the target top view into a preset deep learning model, and classifying pixel points of the target top view input into the preset deep learning model to obtain a partitioned image, the partitioned image comprising a travelable region image and a non-travelable region image; scanning the partitioned image to recognize a travelable region of the vehicle (See at least Bo Paragraph 7 “(2) Construct an end-to-end neural network model including convolutional layers, fully connected layers, unfolding layers, and long short-term memory network layers (LSTM), and train it through the data set with the goal of minimizing the root mean square error (RMSE). This forms a mapping from the lidar top-view image, local navigation map, and vehicle historical motion information to the vehicle's expected steering wheel angle and speed at the next moment, completing the training of the end-to-end neural network model.” | Paragraph 16-19 “The method for obtaining the multi-line laser radar top view image in step (1) is as follows: (3a) Obtaining point cloud data of the vehicle's surrounding environment through a multi-line laser radar; (3b) According to the height information in the point cloud data, the obstacle points and ground points are identified, and the ground points are removed. The remaining point cloud data are projected into the specified image to achieve ground segmentation; (3c) Through the region generation method, the area where the obstacle point is located is generated as a non-drivable area and marked as a cross grid, and the non-obstacle area is generated as a drivable area and marked as white. At this time, the vehicle's surrounding environment can be divided into a drivable area and a non-drivable area, and sent to the end-to-end neural network at a rate of 10 frames/s as an input information for end-to-end autonomous driving.”); and
generating a current path trajectory corresponding to the vehicle travel state information based on the travelable region, the path trajectory being one of the preset driving paths (See at least Bo Paragraph 21 “(4a) The user provides a starting point and an end point in advance and specifies a driving path. The gray line in the map represents the planned path, forming a global navigation map inside the end-to-end autonomous driving controller.” | Paragraph 23 “(4c) The vehicle’s real-time positioning information is matched with the global navigation map to obtain the vehicle’s location in the navigation map. The vehicle’s location is represented by a white dot in the map. A local navigation map with a pixel size of 50*50 is intercepted with the vehicle’s location as the center to obtain a local path planning map, which is fed into an end-to-end neural network to guide the vehicle to travel along the planned path.” | Paragraph 27 “Among them, and represent the set consisting of the steering angle and vehicle speed at time t-N+1, t-N+2, …, t-1, t respectively;”).
performing multi-trajectory fusion on the current path trajectory and the path trajectory obtained the last time, and reconstructing a map corresponding to the preset driving path (See at least Bo Paragraph 15 “The vehicle historical motion information in step (1) refers to the steering wheel angle and vehicle speed information at the past and current moments.” | Paragraphs 20-23 “The method for obtaining the local navigation map in step (1) is as follows: (4a) The user provides a starting point and an end point in advance and specifies a driving path. The gray line in the map represents the planned path, forming a global navigation map inside the end-to-end autonomous driving controller. (4b) Differential GPS and inertial measurement unit (IMU) perform information fusion through Kalman filter algorithm to achieve accurate positioning of the vehicle and obtain real-time positioning information of the vehicle; (4c) The vehicle’s real-time positioning information is matched with the global navigation map to obtain the vehicle’s location in the navigation map. The vehicle’s location is represented by a white dot in the map. A local navigation map with a pixel size of 50*50 is intercepted with the vehicle’s location as the center to obtain a local path planning map, which is fed into an end-to-end neural network to guide the vehicle to travel along the planned path.”).
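For orientation only, and not as part of the claim mapping, the ground-segmentation technique quoted from Bo at paragraphs 16-19 may be sketched as follows. This is a minimal sketch assuming a height-thresholded point cloud and a fixed-size occupancy grid; the function name, grid size, cell size, and height threshold are hypothetical and are not taken from Bo.

```python
# Minimal sketch of Bo's quoted steps (3a)-(3c): drop ground points by height,
# project the remaining obstacle points into a top-view grid, and mark the
# occupied cells as non-drivable. All names and thresholds are hypothetical.
import numpy as np

def topview_from_pointcloud(points, grid_size=50, cell_m=0.5, ground_z=0.2):
    """points: (N, 3) lidar returns (x, y, z) in the vehicle frame."""
    obstacles = points[points[:, 2] > ground_z]   # (3b) remove ground points
    grid = np.zeros((grid_size, grid_size), dtype=np.uint8)  # 0 = drivable
    half = grid_size * cell_m / 2.0
    for x, y, _ in obstacles:                     # (3b) project into the image
        if -half <= x < half and -half <= y < half:
            grid[int((y + half) / cell_m), int((x + half) / cell_m)] = 1  # (3c) non-drivable
    return grid

# Example: one obstacle return 5 m ahead of the vehicle and one ground return
demo = np.array([[0.0, 5.0, 1.5], [1.0, 2.0, 0.05]])
print(topview_from_pointcloud(demo).sum())        # -> 1 occupied cell
```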
Bo, however, fails to explicitly disclose that the target top view corresponding to the initial image is calculated via a nonlinear difference correction algorithm according to the initial image; calculating a target top view corresponding to the initial image via a nonlinear difference correction algorithm according to the initial image; before performing multi-trajectory fusion on the current path trajectory and the path trajectory obtained the last time, and reconstructing a map corresponding to the preset driving path, further comprising: judging whether the coincidence degree between the current path trajectory and the path trajectory obtained the last time is greater than or equal to a preset first threshold value; if the coincidence degree between the current path trajectory and the path trajectory obtained the last time is greater than or equal to the preset first threshold value, fusing the current path trajectory and the path trajectory obtained the last time.
Son teaches that the target top view corresponding to the initial image is calculated via a nonlinear difference correction algorithm according to the initial image (See at least Son Paragraph 66 “As the photographing unit 31, an omnidirectional camera using a fish-eye lens, for example, a closed circuit television (CCTV), or the like may be employed. That is, as shown in FIG. 5, the photographing unit 31 may be installed above the parking space in the indoor parking lot, the upper part of the passage, or the like” | Paragraph 72 “The image correcting unit 321 generates a top view image as shown in FIG. 8 by using the distortion correction algorithm when the images photographed by the photographing unit 31 are respectively input. The image correcting unit 321 performs all of the above-described processes on the image input from each of the photographing units 31”)
calculating a target top view corresponding to the initial image via a nonlinear difference correction algorithm according to the initial image (See at least Son Paragraph 66 “As the photographing unit 31, an omnidirectional camera using a fish-eye lens, for example, a closed circuit television (CCTV), or the like may be employed. That is, as shown in FIG. 5, the photographing unit 31 may be installed above the parking space in the indoor parking lot, the upper part of the passage, or the like” | Paragraph 72 “The image correcting unit 321 generates a top view image as shown in FIG. 8 by using the distortion correction algorithm when the images photographed by the photographing unit 31 are respectively input. The image correcting unit 321 performs all of the above-described processes on the image input from each of the photographing units 31”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Bo to include that the target top view corresponding to the initial image is calculated via a nonlinear difference correction algorithm according to the initial image and calculating a target top view corresponding to the initial image via a nonlinear difference correction algorithm according to the initial image, as taught by Son as disclosed above, in order to ensure accurate images are calculated (Son Paragraph 4 “Accordingly, it is possible to provide the vehicle driver with information on the parking lot adjacent to the current position or the destination of the vehicle so that the driver can recognize the current position or the information on the parking lots adjacent to the destination in advance”).
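For orientation only, the distortion-correction-to-top-view step that Son describes at paragraph 72 may be sketched as follows. This is a generic stand-in built from standard OpenCV calls, not Son’s algorithm; the fisheye intrinsics K, distortion coefficients D, and the four ground-plane correspondences are hypothetical calibration inputs.

```python
# Generic sketch: undistort a fisheye frame (the nonlinear correction step),
# then warp the result to a bird's-eye (top) view with a ground-plane homography.
import cv2
import numpy as np

def correct_to_top_view(frame, K, D, src_pts, dst_pts, out_size=(400, 400)):
    """K (3x3), D (4x1): assumed fisheye calibration; src_pts/dst_pts: four
    corresponding ground-plane points in the undistorted image and top view."""
    undistorted = cv2.fisheye.undistortImage(frame, K, D, Knew=K)
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(undistorted, H, out_size)  # target top view
```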
Bo in view of Son fail to explicitly disclose that before performing multi-trajectory fusion on the current path trajectory and the path trajectory obtained the last time, and reconstructing a map corresponding to the preset driving path, further comprising: judging whether the coincidence degree between the current path trajectory and the path trajectory obtained the last time is greater than or equal to a preset first threshold value; if the coincidence degree between the current path trajectory and the path trajectory obtained the last time is greater than or equal to the preset first threshold value, fusing the current path trajectory and the path trajectory obtained the last time.
Fan, however, teaches before performing multi-trajectory fusion on the current path trajectory and the path trajectory obtained the last time, and reconstructing a map corresponding to the preset driving path, further comprising: judging whether the coincidence degree between the current path trajectory and the path trajectory obtained the last time is greater than or equal to a preset first threshold value; if the coincidence degree between the current path trajectory and the path trajectory obtained the last time is greater than or equal to the preset first threshold value, fusing the current path trajectory and the path trajectory obtained the last time (See at least Fan FIGS. 4-5 and Paragraphs 69-75 “For SLAM maps constructed in batches, you can also compare their similarities and differences, and stitch them together after successful matching to obtain a more complete map. Specifically, in this embodiment, the local parking lot maps constructed each time are compared and matched and then spliced until the global parking lot map is obtained. In this embodiment, as shown in FIG. 4 , the parking lot map construction method further includes step S150 , performing loop detection on landmark information. Since landmark information in the parking lot is repeated, loop detection based on landmark information is prone to matching errors. As shown in FIG5 , the loop detection in this embodiment specifically includes: Step S151, using the grid map to narrow the detection range of landmark information by scanning and matching. Loop closures are detected in large-scale space by scanning matching based on the network map. Step S152: further detect the landmark information using the landmark map. Narrow the search scope of landmark information, and then optimize the global map through landmark information loop detection. In this embodiment, the method further includes comparing the parking lot map with the vehicle map, and when the parking lot map is updated, sending the updated content to the cloud server of the vehicle map, so that the parking lot map is updated and displayed in the vehicle map. Therefore, the parking lot map construction method of this embodiment can also use SLAM to construct the parking lot map when there is a known map for positioning. Comparing this map to the known onboard map can reveal whether the current environment has changed since the map was built. The updated results can be uploaded to the cloud for other users to download. It can be seen from the above that the parking lot map construction method of this embodiment can realize real-time parking lot map construction.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Bo in view of Son to include that before performing multi-trajectory fusion on the current path trajectory and the path trajectory obtained the last time, and reconstructing a map corresponding to the preset driving path, further comprising: judging whether the coincidence degree between the current path trajectory and the path trajectory obtained the last time is greater than or equal to a preset first threshold value; if the coincidence degree between the current path trajectory and the path trajectory obtained the last time is greater than or equal to the preset first threshold value, fusing the current path trajectory and the path trajectory obtained the last time, as taught by Fan as disclosed above, in order to ensure accurate trajectory generation (Fan Paragraph 22 “The present invention provides an effective means for constructing a parking lot map and has high application value and market prospects”).
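For orientation only, the coincidence-degree judgment and fusion recited in claim 1 (and mapped to Fan above) may be sketched as follows. The polyline trajectory representation, the 0.5 m tolerance, and the 0.8 first threshold value are hypothetical choices made solely for illustration.

```python
# Sketch: coincidence degree as the fraction of corresponding trajectory
# points lying within a tolerance; fuse point-wise only when the degree is
# greater than or equal to the preset first threshold value.
import numpy as np

def coincidence_degree(traj_now, traj_last, tol_m=0.5):
    """traj_now/traj_last: (N, 2) polylines sampled at the same stations."""
    d = np.linalg.norm(traj_now - traj_last, axis=1)
    return float(np.mean(d <= tol_m))

def fuse_if_coincident(traj_now, traj_last, first_threshold=0.8):
    if coincidence_degree(traj_now, traj_last) >= first_threshold:
        return 0.5 * (traj_now + traj_last)  # simple point-wise fusion
    return None  # below the threshold: handled separately (cf. claim 9)

a = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.0]])
b = np.array([[0.0, 0.1], [1.0, 0.0], [2.0, 0.1]])
print(fuse_if_coincident(a, b))  # fused trajectory; the degree here is 1.0
```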
With respect to claim 2, Bo in view of Son in view of Fan teaches constructing a map corresponding to the preset driving path based on the path trajectory (See at least Bo Paragraphs 20-23 “The method for obtaining the local navigation map in step (1) is as follows: (4a) The user provides a starting point and an end point in advance and specifies a driving path. The gray line in the map represents the planned path, forming a global navigation map inside the end-to-end autonomous driving controller. (4b) Differential GPS and inertial measurement unit (IMU) perform information fusion through Kalman filter algorithm to achieve accurate positioning of the vehicle and obtain real-time positioning information of the vehicle; (4c) The vehicle’s real-time positioning information is matched with the global navigation map to obtain the vehicle’s location in the navigation map. The vehicle’s location is represented by a white dot in the map. A local navigation map with a pixel size of 50*50 is intercepted with the vehicle’s location as the center to obtain a local path planning map, which is fed into an end-to-end neural network to guide the vehicle to travel along the planned path.”).
With respect to claim 3, Bo in view of Son in view of Fan teaches acquiring the vehicle travel state information when the vehicle is traveling on the preset driving path in real time, the vehicle travel state information including a travel strategy when the vehicle is traveling on the preset driving path and a driving habit of a driver (See at least Bo Paragraph 15 “The vehicle historical motion information in step (1) refers to the steering wheel angle and vehicle speed information at the past and current moments.”); and acquiring the initial image of the surrounding environment of the preset driving path when the vehicle is traveling on the preset driving path in real time according to the travel strategy and the driving habit of the driver (See at least Bo Paragraph 6 “(1) Obtain a multi-line lidar top-view image of the road environment around the vehicle, a local navigation map, and historical vehicle movement information to form a data set;”).
With respect to claim 9, Bo in view of Son in view of Fan teaches that if the coincidence degree between the current path trajectory and the path trajectory obtained the last time is less than a preset first threshold value, determining whether a matching degree between the current path trajectory and the preset driving path is less than a matching degree between the path trajectory obtained the last time and the preset driving path; and if so, regenerating the current path trajectory (See at least Fan FIGS. 4-5 and Paragraphs 69-75 “For SLAM maps constructed in batches, you can also compare their similarities and differences, and stitch them together after successful matching to obtain a more complete map. Specifically, in this embodiment, the local parking lot maps constructed each time are compared and matched and then spliced until the global parking lot map is obtained. In this embodiment, as shown in FIG. 4 , the parking lot map construction method further includes step S150 , performing loop detection on landmark information. Since landmark information in the parking lot is repeated, loop detection based on landmark information is prone to matching errors. As shown in FIG5 , the loop detection in this embodiment specifically includes: Step S151, using the grid map to narrow the detection range of landmark information by scanning and matching. Loop closures are detected in large-scale space by scanning matching based on the network map. Step S152: further detect the landmark information using the landmark map. Narrow the search scope of landmark information, and then optimize the global map through landmark information loop detection. In this embodiment, the method further includes comparing the parking lot map with the vehicle map, and when the parking lot map is updated, sending the updated content to the cloud server of the vehicle map, so that the parking lot map is updated and displayed in the vehicle map. Therefore, the parking lot map construction method of this embodiment can also use SLAM to construct the parking lot map when there is a known map for positioning. Comparing this map to the known onboard map can reveal whether the current environment has changed since the map was built. The updated results can be uploaded to the cloud for other users to download. It can be seen from the above that the parking lot map construction method of this embodiment can realize real-time parking lot map construction.”).
With respect to claim 10, Bo in view of Son teaches that calculating a target top view corresponding to the initial image via a nonlinear difference correction algorithm according to the initial image comprises: obtaining a target image based on the initial image, the target image comprising a top view of a region image coinciding with a region in which the target top view is located; acquiring a number of times the target image appears; judging whether the number of times the target image appears is greater than or equal to a preset second threshold value; if so, extracting a feature point of the region image in each of the target images; and matching the feature points of each of the region images to reconstruct the target top view (See at least Son Paragraphs 70-80 “The indoor parking lot drawing storage unit 322 stores the indoor parking lot drawings. The image correcting unit 321 generates a top view image of the whole indoor parking lot by using the image photographed by the photographing unit 31. he image correcting unit 321 generates a top view image as shown in FIG. 8 by using the distortion correction algorithm when the images photographed by the photographing unit 31 are respectively input. The image correcting unit 321 performs all of the above-described processes on the image input from each of the photographing units 31. That is, the image correcting unit 321 corrects the distortion of the image photographed through each of the photographing units 31 and combines the corrected images to generate a top view image of the entire indoor car park as shown in FIG. 9. The top view image is combined with the indoor parking lot drawing shown in Fig. 10, so that the position of the vehicle can be detected. This will be described later. On the other hand, when the vehicle is photographed, the image appears as shown in Fig. 7 by a fish-eye lens. In this case, it can be seen that the size and position are changed based on the recognized pattern of the vehicle. Thus, the image processing unit 32 corrects the distortion of the image as shown in FIG. 11 using the distortion correction algorithm as described above. The position detection unit 323 recognizes the vehicle information in the image corrected by the image correction unit 321. The position detection unit 323 generates the indoor parking map by synthesizing the top view image corrected by the image processing unit 32 and the indoor parking lot drawings stored in the indoor parking lot drawing storage unit 322 as described above, And detects the position of the vehicle. Here, the corrected top view image and the indoor parking map are matched with each other so that coordinates of each pixel of the top view image can be set. In this case, since the coordinates are set in advance in each pixel of the top view image as described above, the position detection unit 323 detects the position of the vehicle in the combined top view image, that is, By converting image pixels corresponding to the center position of the vehicle pattern (rectangular box) into coordinates, it is possible to detect the position of the vehicle with coordinates.”).
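For orientation only, the claim 10 sequence (count the appearances of the target image, judge against the preset second threshold value, then extract and match feature points of each region image) may be sketched as follows. The ORB/brute-force matching pipeline is a generic stand-in and is not asserted to be Son’s technique; the threshold value is hypothetical.

```python
# Sketch: only once enough appearances of the target image have been
# collected, extract feature points from each one and match them against the
# first appearance; the matches would then drive top-view reconstruction.
import cv2

def match_region_images(images, second_threshold=3):
    """images: list of grayscale region images (appearances of the target)."""
    if len(images) < second_threshold:     # judge the number of appearances
        return None
    orb = cv2.ORB_create()
    base_kp, base_des = orb.detectAndCompute(images[0], None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches_per_view = []
    for img in images[1:]:
        kp, des = orb.detectAndCompute(img, None)   # extract feature points
        matches_per_view.append(matcher.match(base_des, des))  # match them
    return base_kp, matches_per_view       # inputs for reconstructing the view
```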
With respect to claim 11, Bo in view of Son teaches that calculating a target top view corresponding to the initial image via a nonlinear difference correction algorithm according to the initial image further comprises: acquiring a corresponding relationship between a top view of the initial image and the initial image based on the non-linear difference correction algorithm, the corresponding relationship comprising corresponding coordinate points between the top view of the initial image and the initial image; acquiring a target coordinate point from the initial image based on the corresponding relationship; and constructing a target top view corresponding to the initial image based on the target coordinate point (See at least Son Paragraphs 70-80 “The indoor parking lot drawing storage unit 322 stores the indoor parking lot drawings. The image correcting unit 321 generates a top view image of the whole indoor parking lot by using the image photographed by the photographing unit 31. he image correcting unit 321 generates a top view image as shown in FIG. 8 by using the distortion correction algorithm when the images photographed by the photographing unit 31 are respectively input. The image correcting unit 321 performs all of the above-described processes on the image input from each of the photographing units 31. That is, the image correcting unit 321 corrects the distortion of the image photographed through each of the photographing units 31 and combines the corrected images to generate a top view image of the entire indoor car park as shown in FIG. 9. The top view image is combined with the indoor parking lot drawing shown in Fig. 10, so that the position of the vehicle can be detected. This will be described later. On the other hand, when the vehicle is photographed, the image appears as shown in Fig. 7 by a fish-eye lens. In this case, it can be seen that the size and position are changed based on the recognized pattern of the vehicle. Thus, the image processing unit 32 corrects the distortion of the image as shown in FIG. 11 using the distortion correction algorithm as described above. The position detection unit 323 recognizes the vehicle information in the image corrected by the image correction unit 321. The position detection unit 323 generates the indoor parking map by synthesizing the top view image corrected by the image processing unit 32 and the indoor parking lot drawings stored in the indoor parking lot drawing storage unit 322 as described above, And detects the position of the vehicle. Here, the corrected top view image and the indoor parking map are matched with each other so that coordinates of each pixel of the top view image can be set. In this case, since the coordinates are set in advance in each pixel of the top view image as described above, the position detection unit 323 detects the position of the vehicle in the combined top view image, that is, By converting image pixels corresponding to the center position of the vehicle pattern (rectangular box) into coordinates, it is possible to detect the position of the vehicle with coordinates.”).
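For orientation only, the claim 11 sequence (a stored corresponding relationship of coordinate points between the top view and the initial image, used to construct the target top view) may be sketched as follows. The per-pixel lookup-table form of the correspondence is a hypothetical representation of the output of a nonlinear correction model.

```python
# Sketch: given a precomputed correspondence (top-view pixel -> coordinate
# point in the initial image), construct the target top view by sampling the
# initial image at each corresponding target coordinate point.
import numpy as np

def build_top_view(initial_image, map_x, map_y):
    """map_x/map_y: (H, W) arrays; pixel (r, c) of the top view is supplied by
    initial_image at (map_y[r, c], map_x[r, c])."""
    ys = np.clip(np.rint(map_y).astype(int), 0, initial_image.shape[0] - 1)
    xs = np.clip(np.rint(map_x).astype(int), 0, initial_image.shape[1] - 1)
    return initial_image[ys, xs]  # nearest-neighbour construction of the view
```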