DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Pending: 1-25
Rejected under 35 U.S.C. 112(b): 1-25
Rejected under 35 U.S.C. 101: 1-25
Rejected under 35 U.S.C. 102: 1-5, 8-17, 19-23
Rejected under 35 U.S.C. 103: 6-7, 18, 24-25
Priority
Applicant’s indication of Domestic Benefit/National Stage information based on provisional application 63/424,024 filed 11/09/2022 is acknowledged.
Information Disclosure Statement
The information disclosure statement(s) (IDS(s)) submitted on 06/13/2024 is/are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement(s) is/are being considered by the examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The term “at least generally similar” in claims 1, 9-13, 19, and 23 is a relative term which renders the claims indefinite. The term “at least generally similar” is not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Claims 2-8, 14-18, 20-22, and 24-25 similarly do not further define “at least generally similar”, nor do they further provide a standard for ascertaining the requisite degree of this term. As such, these claims are rejected for the inclusion of a relative term that renders the claims indefinite by virtue of their dependency on claims that recite a relative term.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim recites:
“A non-transitory computer-readable medium comprising executable instructions, the executable instructions being executable by one or more processors to perform a method, the method comprising:
receiving multiple scenarios, each scenario of the multiple scenarios including one or more scenario road portions and one or more scenario hazards;
for each scenario of the multiple scenarios:
receiving human driver performance metrics, the human driver performance metrics based on driving performance of one or more human drivers on one or more road portions at least generally similar to the one or more scenario road portions when the one or more human drivers encountered one or more hazards at least generally similar to the one or more scenario hazards;
receiving simulation results data for one or more simulated autonomous vehicles driving on one or more simulated road portions at least generally similar to the one or more scenario road portions and encountering one or more simulated hazards at least generally similar to the one or more scenario hazards;
determining one or more autonomous vehicle performance metrics based on the simulation results data; and
generating one or more scenario autonomous vehicle performance assessments based on the human driver performance metrics and the one or more autonomous vehicle performance metrics;
generating one or more composite autonomous vehicle performance assessments based on the one or more scenario autonomous vehicle performance assessments generated for each scenario of the multiple scenarios; and
providing the one or more composite autonomous vehicle performance assessments.”
These limitations, as drafted, are simple processes that, under their broadest reasonable interpretation, cover performance in the mind but for the recitation of “a non-transitory computer-readable medium comprising executable instructions, the executable instructions being executable by one or more processors to; receiving multiple scenarios, each scenario of the multiple scenarios including one or more scenario road portions and one or more scenario hazards; for each scenario of the multiple scenarios: receiving human driver performance metrics, the human driver performance metrics based on driving performance of one or more human drivers on one or more road portions at least generally similar to the one or more scenario road portions when the one or more human drivers encountered one or more hazards at least generally similar to the one or more scenario hazards; receiving simulation results data for one or more simulated autonomous vehicles driving on one or more simulated road portions at least generally similar to the one or more scenario road portions and encountering one or more simulated hazards at least generally similar to the one or more scenario hazards; providing the one or more composite autonomous vehicle performance assessments”. That is, other than reciting the underlined and italicized limitations above, nothing in the claim elements precludes the steps from being performed in the mind.
For example, a human can, in their mind, perform a method comprising: determining autonomous vehicle performance metrics based on the simulation results data, generating scenario autonomous vehicle performance assessments based on the human driver performance metrics and the autonomous vehicle performance metrics, and generating composite autonomous vehicle performance assessments based on the scenario autonomous vehicle performance assessments generated for each scenario.
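The simplicity of the recited determining, generating, and compositing steps can be seen in a minimal sketch (Python is used purely for illustration; the metric name, ratio comparison, and averaging scheme are hypothetical, as the claim recites no particular computation):

```python
# Hypothetical illustration of the recited steps: compare AV metrics
# against human-driver metrics per scenario, then aggregate the
# per-scenario assessments into a composite assessment.

def scenario_assessment(human_metrics: dict, av_metrics: dict) -> dict:
    """Per-scenario assessment as a ratio of AV to human performance."""
    return {name: av_metrics[name] / human_metrics[name] for name in human_metrics}

def composite_assessment(assessments: list) -> dict:
    """Composite assessment as the mean of the per-scenario assessments."""
    names = assessments[0].keys()
    return {name: sum(a[name] for a in assessments) / len(assessments) for name in names}

# Two hypothetical scenarios, one metric each (braking latency, seconds).
scenarios = [
    {"human": {"braking_latency_s": 0.9}, "av": {"braking_latency_s": 0.7}},
    {"human": {"braking_latency_s": 1.1}, "av": {"braking_latency_s": 0.8}},
]
per_scenario = [scenario_assessment(s["human"], s["av"]) for s in scenarios]
composite = composite_assessment(per_scenario)  # a ratio below 1.0 favors the AV
```

Each operation in the sketch is a division or an average, i.e., a calculation practically performable in the human mind or with pen and paper.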
This judicial exception is not integrated into a practical application. The claim recites the additional elements underlined and italicized above. The non-transitory computer-readable medium comprising executable instructions and one or more processors is/are recited at a high level of generality and merely link(s) the use of the abstract idea to a particular technological environment (see MPEP 2106.05(h)).
The receiving multiple scenarios, […], for each scenario […]: receiving human driver performance metrics, […]; receiving simulation results data […]; providing […] assessments is/are recited at a high level of generality and amounts to mere data gathering, manipulation, and transmission, which is a form of insignificant extra-solution activity (see MPEP 2106.05(g)). Accordingly, even in combination, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional element of non-transitory computer-readable medium comprising executable instructions and one or more processors is/are no more than mere generic linking of the abstract idea to a technological environment, which cannot provide an inventive concept.
The additional element of receiving multiple scenarios, […], for each scenario […]: receiving human driver performance metrics, […]; receiving simulation results data […]; providing […] assessments is/are mere data gathering, manipulation, and transmission, and is a well-understood, routine, and conventional function (see MPEP 2106.05(d) and see Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93), and thus is/are no more than insignificant extra-solution activity (see MPEP 2106.05(g) and see OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93). Thus, the limitations do not provide an inventive concept, and the claim contains ineligible subject matter.
Claim 9 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim recites:
“A method comprising:
receiving a scenario, the scenario including one or more scenario road portions;
receiving one or more human driver performance metrics, the one or more human driver performance metrics based on driving performance of one or more human drivers on one or more first road portions at least generally similar to the one or more scenario road portions;
receiving one or more autonomous vehicle performance metrics, the one or more autonomous vehicle performance metrics based on one or more autonomous vehicles driving on one or more second road portions at least generally similar to the one or more scenario road portions;
generating one or more scenario autonomous vehicle performance assessments based on the one or more human driver performance metrics and the one or more autonomous vehicle performance metrics; and
providing the one or more scenario autonomous vehicle performance assessments.”
These limitations, as drafted, are simple processes that, under their broadest reasonable interpretation, cover performance in the mind but for the recitation of “receiving a scenario, the scenario including one or more scenario road portions; receiving one or more human driver performance metrics, the one or more human driver performance metrics based on driving performance of one or more human drivers on one or more first road portions at least generally similar to the one or more scenario road portions; receiving one or more autonomous vehicle performance metrics, the one or more autonomous vehicle performance metrics based on one or more autonomous vehicles driving on one or more second road portions at least generally similar to the one or more scenario road portions; providing the one or more scenario autonomous vehicle performance assessments”. That is, other than reciting the underlined limitations above, nothing in the claim elements precludes the steps from being performed in the mind.
For example, a human can, in their mind, perform a method comprising: generating scenario autonomous vehicle performance assessments based on the human driver performance metrics and the autonomous vehicle performance metrics.
This judicial exception is not integrated into a practical application. The claim recites the additional elements underlined above. The receiving a scenario, […]; receiving […] human driver performance metrics, […]; receiving […] autonomous vehicle performance metrics, […]; providing […] assessments is/are recited at a high level of generality and amounts to mere data gathering, manipulation, and transmission, which is a form of insignificant extra-solution activity (see MPEP 2106.05(g)). Accordingly, even in combination, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional element of receiving a scenario, […]; receiving […] human driver performance metrics, […]; receiving […] autonomous vehicle performance metrics, […]; providing […] assessments is/are mere data gathering, manipulation, and transmission, and is a well-understood, routine, and conventional function (see MPEP 2106.05(d) and see Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93), and thus is/are no more than insignificant extra-solution activity (see MPEP 2106.05(g) and see OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93). Thus, the limitations do not provide an inventive concept, and the claim contains ineligible subject matter.
Claim 19 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim recites:
“A system comprising at least one processor and memory containing executable instructions, the executable instructions being executable by the at least one processor to:
receive multiple scenarios, a scenario including one or more scenario road portions;
receive at least one human driver performance metric for each of the multiple scenarios, the at least one human driver performance metric based on driving performance of one or more human drivers on one or more first road portions at least generally similar to the one or more scenario road portions;
receive at least one autonomous vehicle performance metric for each of the multiple scenarios, the at least one autonomous vehicle performance metric based on one or more autonomous vehicles driving on one or more second road portions at least generally similar to the one or more scenario road portions;
generate at least one composite autonomous vehicle performance assessment for the multiple scenarios based on the at least one human driver performance metric for each of the multiple scenarios and the at least one autonomous vehicle performance metric for each of the multiple scenarios; and
provide the at least one composite autonomous vehicle performance assessment for the multiple scenarios.”
These limitations, as drafted, are simple processes that, under their broadest reasonable interpretation, cover performance in the mind but for the recitation of “a system comprising at least one processor and memory containing executable instructions, the executable instructions being executable by the at least one processor to: receive multiple scenarios, a scenario including one or more scenario road portions; receive at least one human driver performance metric for each of the multiple scenarios, the at least one human driver performance metric based on driving performance of one or more human drivers on one or more first road portions at least generally similar to the one or more scenario road portions; receive at least one autonomous vehicle performance metric for each of the multiple scenarios, the at least one autonomous vehicle performance metric based on one or more autonomous vehicles driving on one or more second road portions at least generally similar to the one or more scenario road portions; provide the at least one composite autonomous vehicle performance assessment for the multiple scenarios”. That is, other than reciting the underlined and italicized limitations above, nothing in the claim elements precludes the steps from being performed in the mind.
For example, a human can, in their mind, generate a composite autonomous vehicle performance assessment for the multiple scenarios based on the human driver performance metric for each of the multiple scenarios and the autonomous vehicle performance metric for each of the multiple scenarios.
This judicial exception is not integrated into a practical application. The claim recites the additional elements underlined and italicized above. The system, at least one processor, memory with executable instructions, and instructions being executable by the processor is/are recited at a high level of generality and merely link(s) the use of the abstract idea to a particular technological environment (see MPEP 2106.05(h)).
The receive multiple scenarios, […]; receive […] human driver performance metric for […] multiple scenarios, […]; receive […] autonomous vehicle performance metric for […] multiple scenarios, […]; provide […] assessment for the multiple scenarios is/are recited at a high level of generality and amounts to mere data gathering, manipulation, and transmission, which is a form of insignificant extra-solution activity (see MPEP 2106.05(g)). Accordingly, even in combination, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional element of a system, at least one processor, memory with executable instructions, and instructions being executable by the processor is/are no more than mere generic linking of the abstract idea to a technological environment, which cannot provide an inventive concept.
The additional element of receive multiple scenarios, […]; receive […] human driver performance metric for […] multiple scenarios, […]; receive […] autonomous vehicle performance metric for […] multiple scenarios, […]; provide […] assessment for the multiple scenarios is/are mere data gathering, manipulation, and transmission, and is a well-understood, routine, and conventional function (see MPEP 2106.05(d) and see Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93), and thus is/are no more than insignificant extra-solution activity (see MPEP 2106.05(g) and see OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93). Thus, the limitations do not provide an inventive concept, and the claim contains ineligible subject matter.
Claim(s) 2, 5, 6, 8, 12, 13, 14 recite(s) limitations that are no more than the abstract idea recited in claim(s) 1 and 9. The claim(s) recite(s) identifying, determining, updating, generating, and selecting steps which can reasonably be performed in the human mind. The claim(s) recite(s) defining scenarios, receiving scenarios, tags, and objectives, storing tags, defining road portions and hazards, receiving driving data, providing assessments, receiving results data, defining performance metrics, and receiving metrics data steps which is/are mere data gathering, manipulation, and transmission, and is/are a well-understood, routine, and conventional function, and thus is/are no more than insignificant extra-solution activity. See MPEP 2106.05(g). Thus, the claim(s) contain(s) ineligible subject matter.
Claim(s) 3, 4, 10, 11, 15, 16, 21, 22 recite(s) limitations that are no more than the abstract idea recited in claim(s) 1, 9, and 19. The claim(s) recite(s) defining objectives, defining performance metrics, defining scenarios, and defining road portions steps which is/are mere data gathering, manipulation, and transmission, and is/are a well-understood, routine, and conventional function, and thus is/are no more than insignificant extra-solution activity. See MPEP 2106.05(g). Thus, the claim(s) contain(s) ineligible subject matter.
Claim(s) 7, 17, 18 recite(s) limitations that are no more than the abstract idea recited in claim(s) 1 and 9. The claim(s) recite(s) generating and determining steps which can reasonably be performed in the human mind. Thus, the claim(s) contain(s) ineligible subject matter.
Claim(s) 20, 23 recite(s) limitations that are no more than the abstract idea recited in claim(s) 19. The claim(s) recite(s) selecting and determining steps which can reasonably be performed in the human mind. The claim(s) recite(s) executable instructions being executable by the at least one processor at a high level of generality to generically link the use of the abstract idea in a particular technological environment. The claim(s) recite(s) receiving objectives and defining hazards, metrics, and road portions steps which is/are mere data gathering, manipulation, and transmission, and is/are a well-understood, routine, and conventional function, and thus is/are no more than insignificant extra-solution activity. See MPEP 2106.05(g). Thus, the claim(s) contain(s) ineligible subject matter.
Claim(s) 24, 25 recite(s) limitations that are no more than the abstract idea recited in claim(s) 19. The claim(s) recite(s) determining and generating steps which can reasonably be performed in the human mind. The claim(s) recite(s) executable instructions being executable by the at least one processor at a high level of generality to generically link the use of the abstract idea in a particular technological environment. Thus, the claim(s) contain(s) ineligible subject matter.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-5, 8-17, 19-23 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ross et al. (US 9,884,630 B1, hereinafter “Ross”).
Regarding claim 1: Ross teaches: A non-transitory computer-readable medium comprising executable instructions, the executable instructions being executable by one or more processors to perform a method, the method comprising (Col. 4, lines 4-12: methods, techniques, actions, computing device, method; computer executable instructions stored in memory; Col. 17, lines 35-50: processing resources, RAM, ROM, processor, instructions, storage):
receiving multiple scenarios, each scenario of the multiple scenarios including one or more scenario road portions and one or more scenario hazards (Col. 12, lines 3-18: detected event includes roadway condition, obstacle that is potential hazard, collision threat to vehicle, object in road, heavy traffic ahead, wetness, environmental conditions on road, potholes, debris, objects on collision path; detection allows control system to make evasive actions, plan for potential hazards; Col. 10, lines 50-59: sensors obtain view of vehicle, situational info and potential hazards proximate to vehicle);
for each scenario of the multiple scenarios (Col. 3, lines 24-51: snowy, nighttime conditions; Col. 6, lines 29-40: human metrics make safety, comfort ranges that SDVs must meet in multiple different conditions, environments, over various mileage):
receiving human driver performance metrics, the human driver performance metrics based on driving performance of one or more human drivers on one or more road portions at least generally similar to the one or more scenario road portions when the one or more human drivers encountered one or more hazards at least generally similar to the one or more scenario hazards (Col. 5, lines 5-21: measure various driver performance parameters; use human data to make set of metrics to compare with SDV data; metrics are subjective, objective values, ranges for comparison, safety, performance, comfort metrics; Col. 5, line 57-Col. 6, line 7: observational data, LIDAR, camera, GPS, IMU, store performance data (“data”) as data (human or SDV); collective human data analyzed to establish set of human metrics (“metrics”) with which SDV data is compared; Col. 6, lines 16-28: system interface transmits human data to performance optimization system; human data includes location, observational, IMU data (forces, data correlations) corresponding to any driving session for drivers in virtually any conditions and scenarios; human data to make metrics to compare SDV data; Col. 6, lines 29-40: human metrics make safety, comfort ranges that SDVs must meet in multiple different conditions, environments, over various mileage; Col. 13, lines 30-67: collect human control data from human drivers to determine set of human metrics with which to compare SDV data; metrics include acceleration, braking latency or intensity, compliance with stop signs, lights, traffic signs, regulations, condition-based performance (snow, rain), traffic law compliance, steering latencies (response time); use human data to build quantitative models to run subsequently acquired data through to determine SDV control vs idealized human driver);
receiving simulation results data for one or more simulated autonomous vehicles driving on one or more simulated road portions at least generally similar to the one or more scenario road portions and encountering one or more simulated hazards at least generally similar to the one or more scenario hazards (Col. 7, lines 1-26: data correlated with location, observational, IMU data; stored as SDV data, compiled over single and multiple autonomous driving sessions; before public use, SDV must meet safety, comfort standards that are set of “green ranges” of human metrics; SDV may perform better than 95% of human drivers across each human metric in multiple types of conditions, scenarios, over mileage; Col. 7, line 58 - Col. 8, line 7: location data (highway, in traffic, narrow roads); IMU data g-force signature; analysis of data; Col. 8, lines 8-25: find cause for poor performance under human metrics for each SDV metric outside proper range; analyze control data from SDV (commands of acceleration, braking, steering) in view of other data in SDV performance; make changes to control operations in SDV to fix metrics outside range; Col. 8, lines 26-37: transmit and process SDV system adjustments to fix deficiencies; SDV adjusts controls to resolve braking issues, increase following distance to increase reaction time, allow braking inputs to be tempered without affecting safety; Col. 8, lines 38-52: additional iterations of data gathering and processing; old SDV data saved, new SDV data collected, submitted for comparison against human metrics; more iterations completed until SDV exceeds established safety, comfort standards);
determining one or more autonomous vehicle performance metrics based on the simulation results data (Col. 2, lines 1-15: utilize data, set of quantitative metrics to determine overall safety, performance of SDV; Col. 3, lines 52-58: provide standardized performance, safety measurement, performance optimization system for AVs; AV data analyzed against ideal human driving standards, ensure safety, comfort; Col. 5, lines 5-21: utilize data of human drivers to establish set of metrics upon which to subsequently gauge SDV performance; metrics comprise subjective, objective values, ranges which SDV performance is compared, safety, performance, comfort metrics); and
generating one or more scenario autonomous vehicle performance assessments based on the human driver performance metrics and the one or more autonomous vehicle performance metrics (Col. 5, lines 5-21: measure various driver performance parameters; use human data to make set of metrics to compare with SDV data; metrics are subjective, objective values, ranges for comparison, safety, performance, comfort metrics; Col. 5, line 57-Col. 6, line 7: observational data, LIDAR, camera, GPS, IMU, store data as data (human or SDV); collective human data analyzed to establish set of human metrics with which SDV data is compared; Col. 7, lines 27-43: determine set of scores for SDV based on SDV data for each human metric; show rankings, percentiles of SDV compared to human drivers for each metric; SDV performs better than 97% of human drivers in ride comfort metrics; SDV performs within 90th percentile of human drivers across all traffic law compliance metrics);
generating one or more composite autonomous vehicle performance assessments based on the one or more scenario autonomous vehicle performance assessments generated for each scenario of the multiple scenarios (Col. 3, lines 3-23: get SDV data operating in region, analyze vs human metrics (wheel jerks, steering response latency, tailgating, etc.; for each metric, use data to make improvement plan for SDV targeting specific metrics to improve performance, safety, comfort of SDV in those metrics (traffic law compliance, braking latency, intensity, acceleration rate, steering latency); Col. 7, lines 27-43: determine set of scores for SDV based on SDV data for each human metric; show rankings, percentiles of SDV compared to human drivers for each metric; SDV performs better than 97% of human drivers in ride comfort metrics; SDV performs within 90th percentile of human drivers across all traffic law compliance metrics); and
providing the one or more composite autonomous vehicle performance assessments (Col. 3, lines 24-51: weigh metrics that are predictive of overall safety, comfort than others; make overall safety, performance score for SDV based on data; indicate whether SDV safe for public use, determine adjustments to fix any weaknesses; data for SDV operating in snow, nighttime conditions weighted specifically for those conditions; configure control parameters to improve safety of SDV for metrics; Col. 7, lines 44-57: SDV scores within 65th percentile for ride comfort metrics; Col. 13, lines 1-16: process data based on human metrics to determine overall performance of SDV; score, percentile for SDV for each metric to determine how well SDV is performing in relation to average, idealized human driver; Col. 15, lines 1-8: indicate whether SDV passed, failed for each metric).
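Ross's percentile-style comparison cited above (e.g., an SDV performing better than 97% of human drivers on a metric) amounts to ranking one value against a pool of human values, as in this hypothetical sketch (the data and the lower-is-better convention are assumptions for illustration, not taken from Ross):

```python
# Hypothetical illustration of percentile-style scoring: rank an SDV's
# value for one metric against a pool of human-driver values, where a
# lower value (e.g., braking latency) is assumed to be better.

def percentile_vs_humans(sdv_value: float, human_values: list) -> float:
    """Fraction of human drivers the SDV outperforms on this metric."""
    worse = sum(1 for h in human_values if h > sdv_value)
    return worse / len(human_values)

# Five hypothetical human braking latencies (seconds) and one SDV value.
human_latencies = [0.8, 0.9, 1.0, 1.1, 1.2]
score = percentile_vs_humans(0.85, human_latencies)  # SDV beats 4 of 5 humans
```

A score of this kind, computed per metric and optionally weighted, yields the rankings and overall pass/fail indications Ross describes.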
Regarding claim 2: Ross further teaches: The non-transitory computer-readable medium of claim 1 wherein the multiple scenarios are multiple first scenarios, the one or more scenario road portions are one or more first scenario road portions, the one or more scenario hazards are one or more first scenario hazards, and wherein the method further comprises (Col. 7, line 58 - Col. 8, line 7: location data shows where braking events occur (highway, in city traffic, narrow roads), IMU shows g-force for events; analyze data to find cause for poor ride comfort; find overall issue in braking latency; while within safety compliance, ride comfort is bad; work to fix braking latency without affecting SDV safety performance; Col. 12, lines 3-18: detected event includes roadway condition, obstacle that is potential hazard, collision threat to vehicle, object in road, heavy traffic ahead, wetness, environmental conditions on road, potholes, debris, objects on collision path; detection allows control system to make evasive actions, plan for potential hazards; Col. 10, lines 50-59: sensors obtain view of vehicle, situational info and potential hazards proximate to vehicle):
receiving multiple second scenarios, each second scenario of the multiple second scenarios including one or more second scenario road portions and one or more second scenario hazards (Col. 7, lines 1-26: data correlated with location, observational, IMU data; stored as SDV data, compiled over single and multiple autonomous driving sessions; before public use, SDV must meet safety, comfort standards that are set of “green ranges” of human metrics; SDV may perform better than 95% of human drivers across each human metric in multiple types of conditions, scenarios, over mileage; Col. 15, lines 1-8: scores show SDV's performance compared to metrics; indicate if SDV passed or failed for each metric; scores are set of charts showing passable range for each metric, indicator of SDV's performance on each chart vs each metric);
receiving one or more tags, a tag including information usable for selection of a second scenario of the multiple second scenarios (Col. 8, lines 38-52: additional iterations of data gathering and processing; old SDV data saved, new SDV data collected, submitted for comparison against human metrics; more iterations completed until SDV exceeds established safety, comfort standards; Col. 11, lines 37-50: localization maps compared with sensor data to identify potential hazards while operating through region);
storing the one or more tags in association with one or more second scenarios of the multiple second scenarios (Col. 7, lines 1-26: data correlated with location, observational, IMU data; stored as SDV data, compiled over single and multiple autonomous driving sessions; before public use, SDV must meet safety, comfort standards that are set of “green ranges” of human metrics; SDV may perform better than 95% of human drivers across each human metric in multiple types of conditions, scenarios, over mileage; Col. 11, lines 37-50: localization maps compared with sensor data to identify potential hazards while operating through region);
receiving an objective, the objective indicating a purpose for the one or more composite autonomous vehicle performance assessments (Col. 8, lines 38-52: additional iterations of data gathering and processing; old SDV data saved, new SDV data collected, submitted for comparison against human metrics; more iterations completed until SDV exceeds established safety, comfort standards; Col. 9, lines 17-31: hard braking event (detect dog running across road); determine reaction and event are anomalous, conclude data from event not indicative of fundamental operation issue with SDV; Col. 16, lines 51-65: performance models for scoring SDV performance vs each metric; weight each metric; analyze data using models, generate score for SDV for each metric); and
identifying a subset of second scenarios of the multiple second scenarios based on the objective and the one or more tags associated with the one or more second scenarios to obtain the multiple first scenarios (Col. 8, lines 38-52: additional iterations of data gathering and processing; old SDV data saved, new SDV data collected, submitted for comparison against human metrics; more iterations completed until SDV exceeds established safety, comfort standards; Col. 9, lines 17-31: hard braking event (detect dog running across road); determine reaction and event are anomalous, conclude data from event not indicative of fundamental operation issue with SDV; Col. 11, lines 37-50: localization maps compared with sensor data to identify potential hazards while operating through region).
Regarding claim 3: Ross further teaches: The non-transitory computer-readable medium of claim 2 wherein the objective includes compliance with one or more laws or regulations and the purpose includes a demonstration of compliance with the one or more laws or regulations (Col. 2, lines 32-56: metrics include braking, acceleration, steering, traffic law compliance, running red light, double lane changes, failure to yield, illegal passing, turns, U-turns, driving without headlights, failure to signal; Col. 5, lines 22-37: each metric classified (performance, safety, comfort, traffic law compliance); Col. 7, lines 27-43: determine set of scores for SDV based on SDV data for each human metric; SDV performs within 90th percentile of human drivers across all traffic law compliance metrics).
Regarding claim 4: Ross further teaches: The non-transitory computer-readable medium of claim 1 wherein the human driver performance metrics include human perception and reaction times and the one or more autonomous vehicle performance metrics include one or more autonomous vehicle perception and reaction times (Col. 8, lines 26-37: SDV adjusts controls to resolve braking issues, increase following distance to increase reaction time, allow braking inputs to be tempered without affecting safety; Col. 15, lines 9-19: indicate settings of SDV that map SDV's behavior (braking strength, reaction time related to SDV's decision-making process, general acceleration settings); Col. 8, lines 53-67: any detected metrics deficiency fixed by corrective action by SDV; Col. 13, lines 30-46: collect human control data from human drivers to determine set of human metrics with which to compare SDV data; metrics include acceleration, braking latency or intensity; Col. 7, lines 44-57: SDV only scores within 65th percentile for ride comfort metrics associated with braking; Col. 13, lines 47-67: human metrics include overall braking comfort, overall acceleration, condition-based performance (snow, rain), speed control, traffic law compliance, steering latencies (response time); use human data to build quantitative models to run subsequently acquired data through to determine SDV control vs idealized human driver; Col. 14, lines 1-21: data from specific driver processed to find driver’s performance vs all other drivers for each metric; driver’s rating includes series of scores, percentiles showing driver is in certain percentile in overall traffic law compliance, ride comfort).
Regarding claim 5: Ross further teaches: The non-transitory computer-readable medium of claim 1, the method further comprising for each scenario of the multiple scenarios (Col. 3, lines 24-51: snowy conditions, nighttime conditions; Col. 6, lines 29-40: safety ranges SDVs must meet in multiple different conditions, environments over amount of mileage to be road-worthy):
determining human collision rates or collision risks based on the human driver performance metrics (Col. 8, lines 53-67: any detected metrics deficiency fixed by corrective action by SDV; Col. 13, lines 30-67: collect human control data from human drivers to determine set of human metrics with which to compare SDV data; metrics include acceleration, braking latency or intensity, lane positioning, maintaining distance from others, compliance with stop signs, lights, traffic signs, regulations, speed, steering, consistency, overall braking comfort, overall acceleration, condition-based performance (snow, rain), speed control, traffic law compliance, steering latencies (response time); use human data to build quantitative models to run subsequently acquired data through to determine SDV control vs idealized human driver; Col. 14, lines 1-21: run SDV data through quantitative model yields performance results, score for SDV in each metric; human data used to set ranges for each metric; data from specific driver processed to find driver’s performance vs all other drivers for each metric; driver’s rating includes series of scores, percentiles showing driver is in certain percentile in overall traffic law compliance, ride comfort); and
determining one or more autonomous vehicle collision rates or collision risks based on the one or more autonomous vehicle performance metrics (Col. 3, lines 24-51: weigh metrics that are predictive of overall safety, comfort more than others; make overall safety, performance score for SDV based on data; indicate whether SDV safe for public use; Col. 10, lines 50-59: sensors obtain view of vehicle, situational info and potential hazards; Col. 7, lines 1-26: before public use, SDV must meet safety, comfort standards that are set of “green ranges” of human metrics; SDV may perform better than 95% of human drivers across each human metric in multiple types of conditions, scenarios, over mileage; Col. 7, lines 27-43: determine set of scores for SDV based on SDV data for each human metric; show rankings, percentiles of SDV compared to human drivers for each metric; SDV performs better than 97% of human drivers in ride comfort metrics; SDV performs within 90th percentile of human drivers across all traffic law compliance metrics; Col. 7, lines 44-57: identify causes of weaknesses in SDV performance using SDV data; SDV only scores within 65th percentile for ride comfort metrics associated with braking; Col. 11, lines 37-50: identify potential hazards while operating through region; Col. 12, lines 3-18: detected event includes roadway condition, obstacle that is potential hazard, collision threat to vehicle, object in road, heavy traffic ahead, wetness, environmental conditions on road, potholes, debris, objects on collision path; detection allows control system to make evasive actions, plan for potential hazards; Col. 13, lines 1-16: data from various sensors stored in database; process data based on human metrics to find overall performance of SDV; determine score, percentile for SDV for each metric to determine how well SDV performs compared to average, idealized human driver).
Regarding claim 8: Ross further teaches: The non-transitory computer-readable medium of claim 1 wherein the one or more road portions are one or more first road portions, the one or more hazards are one or more first hazards, and wherein the method further comprises (Col. 7, line 58 - Col. 8, line 7: location data shows where braking events occur (highway, in city traffic, narrow roads), IMU shows g-force for events; analyze data to find cause for poor ride comfort; find overall issue in braking latency; while within safety compliance, ride comfort is bad; work to fix braking latency without affecting SDV safety performance; Col. 12, lines 3-18: detected event includes roadway condition, obstacle that is potential hazard, collision threat to vehicle, object in road, heavy traffic ahead, wetness, environmental conditions on road, potholes, debris, objects on collision path; detection allows control system to make evasive actions, plan for potential hazards; Col. 10, lines 50-59: sensors obtain view of vehicle, situational info and potential hazards proximate to vehicle):
receiving autonomous vehicle driving data for one or more autonomous vehicles driving on one or more second road portions and encountering one or more second hazards (Col. 7, line 58 - Col. 8, line 7: location data shows where braking events occur (highway, in city traffic, narrow roads), IMU shows g-force for events; analyze data to find cause for poor ride comfort; find overall issue in braking latency; while within safety compliance, ride comfort is bad; work to fix braking latency without affecting SDV safety performance);
identifying a particular scenario of the multiple scenarios based on the one or more second road portions and the one or more second hazards (Col. 7, line 58 - Col. 8, line 7: location data shows where braking events occur (highway, in city traffic, narrow roads), IMU shows g-force for events; analyze data to find cause for poor ride comfort; find overall issue in braking latency; while within safety compliance, ride comfort is bad; work to fix braking latency without affecting SDV safety performance);
based on the autonomous vehicle driving data, updating the one or more autonomous vehicle performance metrics for the particular scenario to obtain one or more updated autonomous vehicle performance metrics (Col. 8, lines 8-25: find cause for poor performance under human metrics for each SDV metric outside proper range; analyze control data from SDV (commands of acceleration, braking, steering) in view of other data in SDV performance; make changes to control operations in SDV to fix metrics outside range; Col. 8, lines 26-37: transmit and process SDV system adjustments to fix deficiencies; SDV adjusts controls to resolve braking issues, increase following distance to increase reaction time, allow braking inputs to be tempered without affecting safety; Col. 8, lines 38-52: additional iterations of data gathering and processing; old SDV data saved, new SDV data collected, submitted for comparison against human metrics; more iterations completed until SDV exceeds established safety, comfort standards);
updating the one or more scenario autonomous vehicle performance assessments based on the one or more updated autonomous vehicle performance metrics to obtain one or more updated scenario autonomous vehicle performance assessments (Col. 8, lines 8-25: find cause for poor performance under human metrics for each SDV metric outside proper range; analyze control data from SDV (commands of acceleration, braking, steering) in view of other data in SDV performance; make changes to control operations in SDV to fix metrics outside range; Col. 8, lines 26-37: transmit and process SDV system adjustments to fix deficiencies; SDV adjusts controls to resolve braking issues, increase following distance to increase reaction time, allow braking inputs to be tempered without affecting safety; Col. 8, lines 38-52: additional iterations of data gathering and processing; old SDV data saved, new SDV data collected, submitted for comparison against human metrics; more iterations completed until SDV exceeds established safety, comfort standards);
updating the one or more composite autonomous vehicle performance assessments based on the one or more updated scenario autonomous vehicle performance assessments to obtain one or more updated composite autonomous vehicle performance assessments (Col. 8, lines 26-37: transmit and process SDV system adjustments to fix deficiencies; SDV adjusts controls to resolve braking issues, increase following distance to increase reaction time, allow braking inputs to be tempered without affecting safety; Col. 8, lines 38-52: additional iterations of data gathering and processing; old SDV data saved, new SDV data collected, submitted for comparison against human metrics; more iterations completed until SDV exceeds established safety, comfort standards); and
providing the one or more updated composite autonomous vehicle performance assessments (Col. 8, lines 26-37: transmit and process SDV system adjustments to fix deficiencies; SDV adjusts controls to resolve braking issues, increase following distance to increase reaction time, allow braking inputs to be tempered without affecting safety; Col. 8, lines 38-52: additional iterations of data gathering and processing; old SDV data saved, new SDV data collected, submitted for comparison against human metrics; more iterations completed until SDV exceeds established safety, comfort standards; Col. 3, lines 24-51: attach weightings to certain metrics that may be more predictive of overall safety comfort; generate overall safety, performance score for SDV based on received data; Col. 7, lines 44-57: SDV scores within 65th percentile for ride comfort metrics associated with braking; Col. 13, lines 1-16: process data based on set of human metrics to determine overall performance of SDV; determine score, percentile for SDV for each metric to determine how well SDV is performing in relation to average, idealized human driver; Col. 15, lines 1-8: scores indicating SDV's performance with respect to metrics; scores can indicate whether SDV has passed, failed for each of metrics).
Regarding claim 9: Ross teaches: A method comprising (Col. 4, lines 4-12: methods, techniques, actions performed by computing device, computer-implemented method; use of code, computer-executable instructions stored in memory):
receiving a scenario, the scenario including one or more scenario road portions (Col. 12, lines 3-18: detected event includes roadway condition, obstacle that poses potential hazard, threat of collision, object in road, heavy traffic ahead, wetness, environmental conditions, potholes, debris, objects on collision trajectory; plan for potential hazards; Col. 10, lines 50-59: sensors obtain view of vehicle, situational info proximate to vehicle, potential hazards proximate to vehicle);
receiving one or more human driver performance metrics, the one or more human driver performance metrics based on driving performance of one or more human drivers on one or more first road portions at least generally similar to the one or more scenario road portions (Col. 5, lines 5-21: measure various driver performance parameters; utilize to establish set of metrics to gauge SDV performance; comprise subjective, objective values, ranges, safety, performance, comfort metrics; Col. 5, line 57 - Col. 6, line 7: data from LIDARs, cameras, GPS, IMU, stored as data (human or SDV) in local database; collective human data analyzed to establish set of human metrics for SDV comparison; Col. 6, lines 16-28: human data: location, observational, IMU data provide specific forces, data correlations of driving session for driver in any conditions, scenarios; Col. 6, lines 29-40: in setting human metrics, establish sensible safety ranges that SDVs must meet (in multiple different conditions, environments); Col. 13, lines 30-67: collect human control data for set of human metrics for SDV comparison; acceleration, braking latency or intensity, lane position, distance from vehicles, pedestrians, compliance with stop signs, lights, traffic signs, regulations, speed control, steering smoothness, consistency, overall braking comfort, braking performance (reactive, anticipated), fuel consumption, performance efficiency, lateral g-forces, acceleration performance, ride smoothness, condition-based performance (snow, rain), cautiousness, slowing, stopping, lane changing, steering jitter, traffic law compliance, proximity to objects, vehicles, people, yield behavior, tailgating, latencies (response time), use of headlights, turn signals; using human control data, system can build set of quantitative models to run subsequently acquired data through in order to gauge performance of SDV in relation to idealized human driver);
receiving one or more autonomous vehicle performance metrics, the one or more autonomous vehicl