Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The current application is a divisional of application No. 17/096279, now Pat. No. 12,091,041, which is a continuation of application PCT/US2019/032429 and claims the benefit of provisional application No. 62/671779, filed on May 15, 2018.
A preliminary amendment was filed on 08/27/2024. Claims 17-27 are pending for examination; claims 1-16 & 28-253 are canceled.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 17-27 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The independent claims are reproduced below:
Claim 17. A system for autonomously navigating a host vehicle along a road segment, the system comprising: at least one processor programmed to:
receive from a server-based system an autonomous vehicle road navigation model, wherein the autonomous vehicle road navigation model includes a target trajectory for the host vehicle along the road segment and two or more location identifiers associated with at least one lane mark associated with the road segment;
receive from an image capture device at least one image representative of an environment of the vehicle;
determine a longitudinal position of the host vehicle along the target trajectory;
determine an expected lateral distance to the at least one lane mark based on the determined longitudinal position of the host vehicle along the target trajectory and based on the two or more location identifiers associated with the at least one lane mark;
analyze the at least one image to identify the at least one lane mark; determine an actual lateral distance to the at least one lane mark based on analysis of the at least one image; and
determine an autonomous steering action for the host vehicle based on a difference between the expected lateral distance to the at least one lane mark and the determined actual lateral distance to the at least one lane mark.
Claim 26. A method for autonomously navigating a host vehicle along a road segment, the method comprising:
receiving from a server-based system an autonomous vehicle road navigation model, wherein the autonomous vehicle road navigation model includes a target trajectory for the host vehicle along the road segment and two or more location identifiers associated with at least one lane mark associated with the road segment;
receiving from an image capture device at least one image representative of an environment of the vehicle;
determining a longitudinal position of the host vehicle along the target trajectory;
determining an expected lateral distance to the at least one lane mark based on the determined longitudinal position of the host vehicle along the target trajectory and based on the two or more location identifiers associated with the at least one lane mark;
analyzing the at least one image to identify the at least one lane mark;
determining an actual lateral distance to the at least one lane mark based on analysis of the at least one image; and determining an autonomous steering action for the host vehicle based on a difference between the expected lateral distance to the at least one lane mark and the determined actual lateral distance to the at least one lane mark.
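For context only, the sequence of steps recited in claims 17 and 26 can be rendered as a minimal sketch. This is a hypothetical Python illustration of the recited control flow; the function names, the linear interpolation between location identifiers, and the proportional steering correction are assumptions of this illustration, not Applicant's disclosed implementation or any cited reference's teaching.

```python
# Illustrative sketch only. All names, the interpolation scheme, and the
# proportional-gain steering correction are assumptions of this example.

def expected_lateral_distance(longitudinal_pos, lane_mark_points):
    """Interpolate the lane mark's lateral offset at the host vehicle's
    longitudinal position from two or more (longitudinal, lateral)
    location identifiers in the road navigation model."""
    pts = sorted(lane_mark_points)
    for (s0, d0), (s1, d1) in zip(pts, pts[1:]):
        if s0 <= longitudinal_pos <= s1:
            t = (longitudinal_pos - s0) / (s1 - s0)
            return d0 + t * (d1 - d0)
    # Outside the modeled span: clamp to the nearest identifier.
    return pts[0][1] if longitudinal_pos < pts[0][0] else pts[-1][1]

def steering_action(expected, actual, gain=0.5):
    """Steering correction proportional to the difference between the
    expected and the image-measured (actual) lateral distance.
    The sign convention and gain value are assumptions."""
    return gain * (expected - actual)

# Example: lane mark modeled by identifiers at s=0 m (d=1.8 m) and
# s=10 m (d=2.0 m); vehicle localized at s=5 m along the target
# trajectory; image analysis measures an actual lateral distance of 1.7 m.
exp = expected_lateral_distance(5.0, [(0.0, 1.8), (10.0, 2.0)])  # ≈ 1.9 m
correction = steering_action(exp, 1.7)                           # ≈ 0.1
```

The sketch mirrors the claim's three-stage structure: interpolate an expected lateral distance from the model, measure an actual lateral distance, and steer on the difference.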
101 Analysis - Step 1: Statutory category – Yes
The claims recite a system and a method including at least one step. The claims therefore fall within one of the four statutory categories. See MPEP 2106.03.
101 Analysis - Step 2A Prong one evaluation: Judicial Exception – Yes – Mental processes.
In Step 2A, Prong one of the 2019 Patent Eligibility Guidance (PEG), a claim is to be analyzed to determine whether it recites subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) mental processes, and/or c) certain methods of organizing human activity.
The Office submits that the claim limitations identified below constitute judicial exceptions in terms of “mental processes” because, under their broadest reasonable interpretation, the limitations can be “performed in the human mind, or by a human using a pen and paper”. See MPEP 2106.04(a)(2)(III).
The claims recite the limitations of “receive from a server-based system an autonomous vehicle road navigation model …; receive from an image capture device at least one image representative of an environment of the vehicle; determine a longitudinal position of the host vehicle along the target trajectory; determine an expected lateral distance to the at least one lane mark …; analyze the at least one image to identify the at least one lane mark; determine an actual lateral distance to the at least one lane mark …; and determine an autonomous steering action for the host vehicle …”. Those limitations, as drafted, are simple processing steps that, under their broadest reasonable interpretation, cover performance in the mind but for the recitation of “at least one processor programmed”. That is, other than reciting “at least one processor programmed”, nothing in the claim elements precludes the steps from practically being performed in the mind. For example, but for the “at least one processor programmed” language, the claim encompasses a person looking at collected data and forming simple judgments. The mere nominal recitation of “at least one processor programmed” does not take the claim limitations out of the mental-processes grouping.
Thus, the claim recites a mental process.
101 Analysis - Step 2A Prong two evaluation: Practical Application – No
In Step 2A, Prong two of the 2019 PEG, a claim is to be evaluated whether, as a whole, it integrates the recited judicial exception into a practical application. As noted in MPEP 2106.04(d), it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. The courts have indicated that additional elements such as: merely using a computer to implement an abstract idea, adding insignificant extra solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
The Office submits that the claims recite additional elements that do not integrate the recited judicial exception into a practical application.
Claims 17 and 26 recite additional elements: the steps of receiving the road navigation model from the server and receiving the image of the environment from the image capture device, and the steps of determining the longitudinal position of the host vehicle along the trajectory and the lateral distances to the lane mark. These steps are recited at a high level of generality and amount to mere data gathering about road conditions. The step of analyzing the image to identify the lane mark and lateral distance, and the step of determining an autonomous steering action for the host vehicle, amount to insignificant extra-solution activity. The “processor”, “server”, and “image capture device” merely describe how to apply the “receiving”, “determining”, and “analyzing” steps (otherwise mental judgments) using a generic computer and well-known components (such as a “server” and an “image capture device”), or generally link the exception to a vehicle's action on the road.
Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
101 Analysis - Step 2B evaluation: Inventive concept – No
In Step 2B of the 2019 PEG, a claim is to be evaluated as to whether the claim, as a whole, amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05.
As discussed with respect to Step 2A Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer component. The same analysis applies here in 2B, i.e., mere instructions to apply an exception on a generic computer cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B. Here, the receiving, analyzing, and determining steps were considered insignificant extra-solution activity in Step 2A, and thus they are re-evaluated in Step 2B to determine whether they are more than well-understood, routine, conventional activity in the field. The background describes a well-known autonomous vehicle that identifies its location within a particular roadway and accesses data (e.g., captured image data, map data, and sensor data). The specification does not provide any indication that the autonomous vehicle's system is anything other than a conventional processor (generic computer) within a vehicle. MPEP 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as it is here). The Federal Circuit's decisions in Trading Techs. Int'l v. IBG LLC, 921 F.3d 1084, 1093 (Fed. Cir. 2019), and Intellectual Ventures I LLC v. Erie Indemnity Co., 850 F.3d 1315, 1331 (Fed. Cir. 2017), further support this conclusion. Accordingly, the conclusion that the processing steps in the independent claims are well-understood, routine, and conventional activity is supported under Berkheimer.
Thus, the claim is ineligible.
Dependent Claims
Dependent claims 18-25 and 27 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application. Therefore, the dependent claims are not patent eligible under the same rationale as provided for the rejection of independent claims 17 and 26.
Therefore, claims 17-27 are ineligible under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 17-27 are rejected under 35 U.S.C. 103 as being unpatentable over Shashua (US 2017/0010618) in view of Nagae (US 2018/0105170).
With regard to claim 17, Shashua discloses a system for autonomously navigating a host vehicle along a road segment (an autonomous vehicle 100, see Fig. 1, [0262]-[0264]+), the system comprising: at least one processor programmed to:
receive from a server-based system an autonomous vehicle road navigation model, wherein the autonomous vehicle road navigation model includes a target trajectory for the host vehicle along the road segment and two or more location identifiers associated with at least one lane mark associated with the road segment (the vehicle's transceiver 172 receives a target trajectory for guiding navigation of the autonomous vehicle that will travel along the road segment, see [0404]+; the navigation model includes a plurality of target trajectories and lane marks associated with the road segment, see [0017]+);
receive from an image capture device at least one image representative of an environment of the vehicle (receives from a camera images of an environment of the vehicle, see [0330]+ and [0869]+);
determine a longitudinal position of the host vehicle along the target trajectory (vehicle 200 travels via previous locations 5922, 5924, 5926, … to current location 5934, see [0777]+);
determine an expected lateral distance to the at least one lane mark based on the determined longitudinal position of the host vehicle along the target trajectory and based on the two or more location identifiers associated with the at least one lane mark (radii of curvature C sub 1 through C sub n of segments of the predetermined road model trajectory, see [0779]+);
analyze the at least one image to identify the at least one lane mark (process 6000 determines a local feature based on the information received from the imaging unit to identify a lane mark and determines the current distance between the vehicle and the landmark, see [0783]-[0784]+);
determine an actual lateral distance to the at least one lane mark based on analysis of the at least one image (the processing unit 110 determines current location 5934 of the vehicle 200 as location 5970 when radius of curvature R sub t matches radius of curvature R sub p, see [0779]+, or a lateral distance D, see [0655]+); and
determine an autonomous steering action for the host vehicle (the processing unit 110 determines an autonomous steering action for vehicle 200 based on a rotational angle between heading directions, see [0788]+).
Shashua fails to teach determine an autonomous steering action for the host vehicle based on a difference between the expected lateral distance to the at least one lane mark and the determined actual lateral distance to the at least one lane mark.
Nagae discloses a vehicle with a lane keep assist device (see the abstract). The vehicle comprises an LDA control unit 172 that calculates the current lateral position (lateral distance), an expected lateral velocity, and a departure angle, and sets an allowable departure distance of the vehicle from the traveling lane, see [0047]-[0048]+, which meets the scope of “determine an autonomous steering action for the host vehicle based on a difference between the expected lateral distance to the at least one lane mark and the determined actual lateral distance to the at least one lane mark”.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Shashua to determine an autonomous steering action for the host vehicle based on a difference between the expected lateral distance to the at least one lane mark and the determined actual lateral distance to the at least one lane mark, as taught by Nagae. The combination of Shashua and Nagae combines known elements to achieve the predictable result of navigating the vehicle accurately.
With regard to claim 19, Shashua teaches that the system of claim 17, wherein the two or more location identifiers include locations in real world coordinates of points associated with the at least one lane mark (the processing unit 110 creates a projection of the detected segments from the image plane onto the real-world plane, see [0343]+).
With regard to claim 20, Shashua teaches that the system of claim 19, wherein the at least one lane mark is part of a dashed line marking a lane boundary, and the points associated with the at least one lane mark correspond to detected corners of the at least one lane mark (road segment 1130 includes a two-lane road, each lane being designated for a different direction of travel. The region 1111 includes other road features, such as a stop line 1132, a stop sign 1134 at the corner, see [0405]+).
With regard to claim 21, Shashua teaches the system of claim 19, wherein the at least one lane mark is part of a continuous line marking a lane boundary, and the points associated with the at least one lane mark correspond to a detected edge of the at least one lane mark (the processing unit 110 considers the position and motion of the other vehicles, the detected road edges and barriers, and general road shape extracted from map data, see [0345]+).
With regard to claims 22-23, Shashua teaches that the processing unit determines a local feature based on coefficients of curve fits to the dashed-line spacing profile and dashed-line length profile, wherein the dashed-line length may be one meter or five meters of the detected edge, see [0773]-[0776]+, which meets the scope of the claims.
With regard to claim 24, Shashua teaches the system of claim 19, wherein the points associated with the at least one lane mark correspond to a centerline associated with the at least one lane mark (distances c sub 1 through c sub n represent the middle of roadway 7900 dividing lanes 7910/7920, see [0934]+).
With regard to claim 25, Shashua teaches the system of claim 19, wherein the points associated with the at least one lane mark correspond to a vertex between two intersecting lane marks and at least two other points associated with the intersecting lane marks (processing unit 110 determines directional indicators of landmarks, see [0670]+).
With regard to claim 26, Shashua discloses a method for autonomously navigating a host vehicle along a road segment, the method comprising:
receiving from a server-based system an autonomous vehicle road navigation model, wherein the autonomous vehicle road navigation model includes a target trajectory for the host vehicle along the road segment and two or more location identifiers associated with at least one lane mark associated with the road segment (the vehicle's transceiver 172 receives a target trajectory for guiding navigation of the autonomous vehicle that will travel along the road segment, see [0404]+; the navigation model includes a plurality of target trajectories and lane marks associated with the road segment, see [0017]+);
receiving from an image capture device at least one image representative of an environment of the vehicle (receives from a camera images of an environment of the vehicle, see [0330]+ and [0869]+);
determining a longitudinal position of the host vehicle along the target trajectory (vehicle 200 travels via previous locations 5922, 5924, 5926, … to current location 5934, see [0777]+);
determining an expected lateral distance to the at least one lane mark based on the determined longitudinal position of the host vehicle along the target trajectory and based on the two or more location identifiers associated with the at least one lane mark (radii of curvature C sub 1 through C sub n of segments of the predetermined road model trajectory, see [0779]+);
analyzing the at least one image to identify the at least one lane mark (process 6000 determines a local feature based on the information received from the imaging unit to identify a lane mark and determines the current distance between the vehicle and the landmark, see [0783]-[0784]+);
determining an actual lateral distance to the at least one lane mark based on analysis of the at least one image (the processing unit 110 determines current location 5934 of the vehicle 200 as location 5970 when radius of curvature R sub t matches radius of curvature R sub p, see [0779]+, or a lateral distance D, see [0655]+); and
determining an autonomous steering action for the host vehicle (the processing unit 110 determines an autonomous steering action for vehicle 200 based on a rotational angle between heading directions, see [0788]+).
Shashua fails to teach determine an autonomous steering action for the host vehicle based on a difference between the expected lateral distance to the at least one lane mark and the determined actual lateral distance to the at least one lane mark.
Nagae discloses a vehicle with a lane keep assist device (see the abstract). The vehicle comprises an LDA control unit 172 that calculates the current lateral position (lateral distance), an expected lateral velocity, and a departure angle, and sets an allowable departure distance of the vehicle from the traveling lane, see [0047]-[0048]+, which meets the scope of “determining an autonomous steering action for the host vehicle based on a difference between the expected lateral distance to the at least one lane mark and the determined actual lateral distance to the at least one lane mark”.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Shashua to determine an autonomous steering action for the host vehicle based on a difference between the expected lateral distance to the at least one lane mark and the determined actual lateral distance to the at least one lane mark, as taught by Nagae. The combination of Shashua and Nagae combines known elements to achieve the predictable result of navigating the vehicle accurately.
With regard to claim 27, Shashua teaches that the method of claim 26, wherein the two or more location identifiers include locations in real world coordinates of points associated with the at least one lane mark (the processing unit 110 creates a projection of the detected segments from the image plane onto the real-world plane, see [0343]+).
Prior Art Made of Record
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Zhang (US 2019/0156128) discloses a system for generating geometries for stripe-shaped objects, including lane lines for road edges or lanes of the roadway (see the abstract).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NGA X NGUYEN whose telephone number is (571)272-5217. The examiner can normally be reached M-F 5:30AM - 2:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JELANI SMITH can be reached at 571-270-3969. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
NGA X. NGUYEN
Examiner
Art Unit 3662
/NGA X NGUYEN/Primary Examiner, Art Unit 3662