DETAILED ACTION
This action is in response to the amendments filed on December 2, 2022. A summary of this action:
Claims 1-7, 9-11, 13-15, 17-22, 24 have been presented for examination.
Claims 10-11 and 17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite
Claims 2-4, 14-15, 19-21 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite
Claims 2-4, 14-15, 19-21 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement
Claims 1-7, 9-11, 13-15, 17-22, 24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of a mental process, and in certain claims a mathematical concept, without significantly more.
Claim(s) 1-2, 5, 15, 18-21, 24 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kabirzadeh, US 11,150,660.
Claim(s) 3-4 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kabirzadeh, US 11,150,660 in view of Kakade, Hrishikesh, et al. "Autonomous Highway Overtaking." Master's thesis, Han University of Applied Sciences (2018).
Claim(s) 6-7, 9-11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kabirzadeh, US 11,150,660 in view of Besse, Philippe C., et al. "Destination prediction by trajectory distribution-based model." IEEE Transactions on Intelligent Transportation Systems 19.8 (2017): 2470-2481.
Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kabirzadeh, US 11,150,660 in view of Besse, Philippe C., et al. "Destination prediction by trajectory distribution-based model." IEEE Transactions on Intelligent Transportation Systems 19.8 (2017): 2470-2481 in further view of Joseph, Joshua, et al. "A Bayesian nonparametric approach to modeling motion patterns." Autonomous Robots 31.4 (2011): 383-400.
Claim(s) 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kabirzadeh, US 11,150,660 in view of Caldwell et al., US 11,565,709.
This action is non-final.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claim Interpretation
With respect to the recitation of “closed loop”, this term is given its broadest reasonable interpretation (BRI) in view of page 4, last two paragraphs, of the instant disclosure.
The term “stack” is interpreted in view of page 12, ¶¶ 2-3, including: “A stack can refer purely to software, i.e. one or more computer programs that can be executed on one or more general-purpose computer processors.”
Similarly, the term “ego agent” is interpreted in view of page 12, last paragraph: “The ego agent is a real or simulated mobile robot that moves under the control of the stack under testing…”
Similarly, the term “trace” is interpreted in view of page 14 ¶ 1: “A trace is a history of an agent's location and motion over the course of a scenario. There are many ways a trace can be represented…With regards to terminology, a "trace" and a "trajectory" may contain the same or similar types of information (such as a series of spatial and motion states over time). The term trajectory is generally favoured in the context of planning (and can refer to future/predicted trajectories), whereas the term trace is generally favoured in relation to past behaviour in the context of testing/evaluation.”
Claim 18 recites the term substantially. This is interpreted in view of page 23 ¶ 2, and MPEP § 2173.05(b)(III)(D): “The term "substantially" is often used in conjunction with another term to describe a particular characteristic of the claimed invention. It is a broad term. In re Nehrenberg, 280 F.2d 161, 126 USPQ 383 (CCPA 1960). The court held that the limitation "to substantially increase the efficiency of the compound as a copper extractant" was definite in view of the general guidelines contained in the specification. In re Mattison, 509 F.2d 563, 184 USPQ 484 (CCPA 1975). The court held that the limitation "which produces substantially equal E and H plane illumination patterns" was definite because one of ordinary skill in the art would know what was meant by "substantially equal." Andrew Corp. v. Gabriel Electronics, 847 F.2d 819, 6 USPQ2d 2010 (Fed. Cir. 1988).”
Information Disclosure Statement
The listing of references in the specification is not a proper information disclosure statement. 37 CFR 1.98(b) requires a list of all patents, publications, or other information submitted for consideration by the Office, and MPEP § 609.04(a) states, "the list may not be incorporated into the specification but must be submitted in a separate paper." Therefore, unless the references have been cited by the examiner on form PTO-892, they have not been considered.
See page 59; page 23 ¶ 3; the paragraph split between pages 29-30; page 2 ¶ 3; and the paragraph split between pages 35-36. Numerous references are cited in the disclosure; however, no IDS has been provided for these references.
Claim Objections
Claims 1-7, 9-11, 13-15, 17-22, 24 are objected to because of the following informalities:
Claim 13 recites “possible goals or behaviors” – the Examiner objects to the use of the ambiguous term “possible”, as it requires these goals or behaviors merely to not be impossible, yet no clear standard is provided in the specification for what is possible and what is not. The Examiner interprets this in view of page 51, algorithm 1, # 1 as a set/plurality of goals.
Claim 6 recites “available” – see page 6: “available (possible) goals”. This term is objected to under a similar rationale as claim 13.
Claim 7 is objected to under a similar rationale as well.
The claims have numerous issues with antecedent basis. The Examiner suggests amending the claims such that the first recitation of each distinct element uses articles such as “a”/“an”, that later recitations referring back to the same distinct element use articles such as “the”/“said”, that disambiguating modifiers (e.g., first, second, etc.) be used when there are multiple distinct elements with the same base term, and that the use of modifiers for each distinct element be kept consistent. Below is a non-exhaustive list of examples of these issues:
Independents: “the defined road layout”, but the prior recitation was “a version of the road layout”
Claim 2: “the performance” was not previously recited
Claim 5: “to apply the agent decision logic”, however this step is previously recited. Examiner suggests “wherein the applying of the agent decision logic…” so as to further limit expressly the prior step
Claim 6: similar objection as claim 5 for the preamble
Claim 6: “the set…” however this was not previously recited
Claim 7: “and comparing the observed trace of the real-world agent with the expected trajectory model for each of the available goals or behaviours, to determine a likelihood of that goal or behaviour, thus determining a distribution over the available goals or behaviours.” – “that” does not have precise antecedent basis. The Examiner suggests amending to more clearly reflect that there is a plurality of likelihoods determined, i.e. one likelihood for each corresponding pair of associated expected trajectories and available goals/behaviors. See page 50, subsection F to clarify; note the “i”.
Claim 9 objected to under a similar rationale. Also, note page 42, second to last paragraph
Claim 9 – another “distribution” – the Examiner suggests disambiguating modifiers, e.g. first/second, e.g. a predicted trajectory distribution, etc.
Claim 11 – “those trajectory models” – objected to for similar reasons as above
Claim 13 – another singular goal/behavior (i.e. Examiner suggests express disambiguation between distinct elements)
Claim 14 – “the test oracle” lacks antecedent basis, as claim 14 does not depend on claim 2.
Appropriate correction is required.
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 10-11 and 17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The dependent claims inherit the deficiencies of the claims they depend upon.
MPEP § 2173.05(g): “A functional limitation is often used in association with an element, ingredient, or step of a process to define a particular capability or purpose that is served by the recited element, ingredient or step… Notwithstanding the permissible instances, the use of functional language in a claim may fail "to provide a clear-cut indication of the scope of the subject matter embraced by the claim" and thus be indefinite. In re Swinehart, 439 F.2d 210, 213 (CCPA 1971). For example, when claims merely recite a description of a problem to be solved or a function or result achieved by the invention, the boundaries of the claim scope may be unclear. Halliburton Energy Servs., Inc. v. M-I LLC, 514 F.3d 1244, 1255, 85 USPQ2d 1654, 1663 (Fed. Cir. 2008) (noting that the Supreme Court explained that a vice of functional claiming occurs "when the inventor is painstaking when he recites what has already been seen, and then uses conveniently functional language at the exact point of novelty") (quoting General Elec. Co. v. Wabash Appliance Corp., 304 U.S. 364, 371 (1938));… For instance, a single means claim covering every conceivable means for achieving the stated result was held to be invalid under 35 U.S.C. 112, first paragraph because the court recognized that the specification, which disclosed only those means known to the inventor, was not commensurate in scope with the claim. Hyatt, 708 F.2d at 714-715, 218 USPQ at 197.”
MPEP § 2173.05(b)(IV): “A claim term that requires the exercise of subjective judgment without restriction may render the claim indefinite. In re Musgrave, 431 F.2d 882, 893, 167 USPQ 280, 289 (CCPA 1970). Claim scope cannot depend solely on the unrestrained, subjective opinion of a particular individual purported to be practicing the invention. Datamize LLC v. Plumtree Software, Inc., 417 F.3d 1342, 1350, 75 USPQ2d 1801, 1807 (Fed. Cir. 2005));”
Claim 10 recites the phrase “best-available trajectory model”, wherein the term “best-available” is a subjective term that renders the claim indefinite because no standard is provided in the instant disclosure (e.g. page 56 ¶ 2) by which a person of ordinary skill in the art (POSITA) could ascertain the scope of the present claims without relying on their own unrestrained, subjective opinion when practicing the invention. Claim 11 inherits this deficiency.
To clarify, at issue is that “best” is subjective, i.e. each POSITA would be left, for the scope of claim 10, to exercise their own subjective opinion as to which trajectory model was best.
To further clarify, the claim merely recites: “predict a best-available trajectory model for the goal or behaviour” – this is subjective, as a POSITA would be left to exercise their own subjective opinion as to what is required to be predicted.
Claim 17 recites:
The computer system of claim 1, wherein the one or more processors are configured to apply one or more non-real time perception algorithms to the real-world driving data, in order to extract the observed trace
This is indefinite due to the subjective nature of what is real time and what is non-real time. See the paragraph split between pages 20-21, and page 21 ¶ 1.
At issue is that “A non-real time perception algorithm could be an algorithm that it would not be feasible to run in real time because of the computation or memory resources it requires.” – but the specification provides no objective standard on the amount of computation/memory resources that would provide a clear line of demarcation between real-time and non-real-time. Nor does the specification even describe what resources would be in the AV, e.g. does the AV have 8 KB, 8 MB, or 8 GB of memory? In other words, the specification provides no objective standard by which to ascertain even which AVs the claim is limited to, so as to determine what is real-time in connection with the compute resources of the AV.
Claim Interpretation – 112(f)
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
Claim 2: The computer system of claim 1, comprising a test oracle configured to evaluate the performance of the AV stack in the simulation, by receiving a simulated ego trace of the ego agent, as generated in the simulation, and scoring the simulated ego trace against a set of predetermined performance metrics.
In a similar manner, see claim 14: wherein the one or more processors are configured to generate a graphical user interface comprising an output provided by the test oracle for assessing the performance of the AV stack in the simulation.
Also see similar recitations in claims 20-21.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2-4, 14-15, 19-21 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The dependent claims inherit the deficiencies of the claims they depend upon.
The above-noted limitations invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function.
At issue is that the simulator and test oracle are set forth as structure separate and distinct from the processors executing instructions (claim 1 and dependents thereof) and the computer of claim 19 (and dependents thereof).
See fig. 5, # 252 to clarify. See fig. 2 for # 202 and # 252, as described on page 24.
Then see page 58: “References herein to components, functions, modules and the like, denote functional components of a computer system which may be implemented at the hardware level in various ways… The various components of Figure 2, such as the simulator 202 and the test oracle 252 may be similarly implemented in programmable and/or dedicated hardware.”
There is not sufficient structure disclosed for the present recitations; a bare reference to “dedicated hardware” is insufficient, and no structure is clearly linked to these elements so as to perform the entire claimed functionality.
The Examiner suggests amending the claims in a similar manner as claim 1 was amended in the preliminary amendment, i.e. remove the nonce terms and have the processors/computer perform all steps of the claimed method.
Therefore, the claims are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Claim Rejections - 35 USC § 112(a)
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 2-4, 14-15, 19-21 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The dependent claims inherit the deficiencies of the claims they depend upon.
See the above § 112(f) invocation and corresponding § 112(b) rejection, including the citations to the disclosure therein. See MPEP § 2181(IV): “When a claim containing a computer-implemented 35 U.S.C. 112(f) claim limitation is found to be indefinite under 35 U.S.C. 112(b) for failure to disclose sufficient corresponding structure (e.g., the computer and the algorithm) in the specification that performs the entire claimed function, it will also lack written description under section 112(a).”; accord MPEP § 2181(II)(B), citing MPEP § 2163.03, subsection VI.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-7, 9-11, 13-15, 17-22, 24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of both a mathematical concept and mental process without significantly more.
Step 1
Claim 19 is directed towards the statutory category of a process.
Claim 1 is directed towards the statutory category of an apparatus.
Claim 24 is directed towards the statutory category of an article of manufacture.
Claims 19 and 24, and the dependents thereof, are rejected under a similar rationale as representative claim 1, and the dependents thereof.
Step 2A – Prong 1
The claims recite an abstract idea of both a mental process and a mathematical concept. The dependent claims add a mathematical concept to the mental process of the independent claims.
See MPEP § 2106.04: “...In other claims, multiple abstract ideas, which may fall in the same or different groupings, or multiple laws of nature may be recited. In these cases, examiners should not parse the claim. For example, in a claim that includes a series of steps that recite mental steps as well as a mathematical calculation, an examiner should identify the claim as reciting both a mental process and a mathematical concept for Step 2A Prong One to make the analysis clear on the record.”
To clarify, see the USPTO 101 training examples, available at https://www.uspto.gov/patents/laws/examination-policy/subject-matter-eligibility.
The mental process recited in claim 1 is:
process real-world driving data to extract therefrom at least one observed trace of a real-world agent within a road layout, the observed trace having spatial and motion components; - see page 20: “Data 140 of a real-world run is passed to a 'ground-truthing' pipeline 142 (trace extraction component) for the purpose of generating scenario ground truth… The run data is processed within the ground truthing pipeline 142, in order to generate appropriate ground truth 144 (trace(s) and contextual data) for the real-world run. The ground truth of the real-world run 144 comprises an extracted ego trace of the ego agent and one or more extracted agent trace(s) of one or more other (non-ego) agents. As discussed, the ground-truthing process could be based on manual annotation of the 'raw' run data 142, or the process could be entirely automated (e.g. using offline perception method(s)), or a combination of manual and automated ground truthing could be used.”
See page 13 to clarify: “In a real-world scenario run, a "perfect" representation of the scenario run does not exist in the same sense; nevertheless, suitably informative ground truth can be obtained in numerous ways, e.g. based on manual annotation of on-board sensor data…”
Page 27, second to last paragraph: “For example, each trace 212a, 212b may take the form of a spatial path having motion data associated with points along the path such as speed, acceleration, jerk (rate of change of acceleration), snap (rate of change of jerk) etc.”
For example, the mental process of this step would be a person simply observing measured data, e.g. video from a dash cam (or similar camera), and mentally evaluating it to determine an approximate trajectory of another vehicle, e.g. observing police dash cam footage on the nightly news and evaluating/judging that the other vehicle is about to go in a certain direction.
apply at least one of goal recognition and behaviour recognition to the observed trace, to infer a goal or behaviour of the real-world agent within the road layout, and extract a driving scenario defining a version of the road layout and at least one non-ego agent to be simulated, the non-ego agent associated with the inferred goal or behaviour for implementing in simulation within the defined road layout;
A mental process. See page 14 ¶ 1: “A trace is a history of an agent's location and motion over the course of a scenario.” See pages 22-23: “The present techniques can also be applied with deterministic goal recognition, in which case the goal sampling component 151 may be omitted, and deterministic goals may be inferred and provided directly to the agent decision logic 210… A goal generally refers to a relatively longer-term objective (e.g. which might remain fixed over the course of a simulated run), whilst a maneuver generally occurs over a relatively shorter time scale.” Page 40, second to last paragraph: “A goal may for example be captured as a desired location (reference point) on a map, which the ego vehicle is attempting to reach from a current location on the map. For example the desired location may be defined in relation to a particular junction, lane layout, roundabout exit etc. The map, in this context, refers to the static road layout of a scenario description 201a…” and page 41, last paragraph: “For example, given a set of non-ego vehicles in the vicinity of a road junction, roundabout or other road layout indicated on the map (the driving context), suitable goals may be hypothesised from the road layout alone (without taking into account any observed historical behaviour of the agent). By way of example, if the other vehicle is currently driving on a multi-lane road, with no nearby junctions, the set of hypothesised goals may consist of "follow lane" and "switch lane". As another example, with a set of non-ego agents in the vicinity of a left-turn junction, the hypothesised goals may be turn left and continue straight. As indicated, such goals are defined with reference to suitable reference points on the map.”
Page 51: “A heuristic function is used to generate a set of possible goals qi for vehicle i based on its location and context information such as road layout and traffic rules. The goal recognition component 156 defines one goal for the end of the vehicle's current road and goals for end of each reachable connecting road, bounded by the ego vehicle's view region. Infeasible goals, such as locations behind the vehicle, are not included.”
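Purely to illustrate the generality of the heuristic quoted above from page 51, it amounts to the following sketch. The sketch is the Examiner's paraphrase, not the disclosure's implementation; the function and type names, the representation of the view region as a set of locations, and the dot-product test for “behind the vehicle” are all hypothetical simplifications:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Road:
    name: str
    end: tuple  # (x, y) location of the end of the road (a candidate goal)

def is_behind(goal, vehicle_pos, heading):
    # Hypothetical feasibility test: a goal is "behind" the vehicle if it
    # lies opposite the vehicle's heading (negative dot product).
    dx, dy = goal[0] - vehicle_pos[0], goal[1] - vehicle_pos[1]
    return dx * heading[0] + dy * heading[1] < 0

def hypothesise_goals(current_road, connecting_roads, view_region, vehicle_pos, heading):
    """One goal at the end of the current road, plus one per reachable
    connecting road within the view region; infeasible goals removed."""
    goals = [current_road.end]
    for road in connecting_roads:
        if road.end in view_region:  # bounded by the ego vehicle's view region
            goals.append(road.end)
    return [g for g in goals if not is_behind(g, vehicle_pos, heading)]
```

As the sketch shows, the quoted heuristic is a simple enumerate-and-filter over map locations, consistent with the characterization of this step as a mental process.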
To clarify, given the generality recited herein, this is a mental process of a person mentally observing the trajectory of a vehicle and simply observing a behavior, e.g. that the vehicle is driving aggressively/recklessly fast, or mentally judging where the vehicle is going to go. For example, the person observes that although the driver in front of them does not have their blinker on, the driver's trajectory (e.g. speeding up and starting to drift into a different lane) indicates that they are going to make a lane change (a goal). Similarly, the person observes that a vehicle, e.g. an 18-wheeler, is moving on a trajectory toward an off-ramp on I-70 in the middle of rural Kansas, where the rest stop is a trucker-friendly gas station, and so infers that the goal of the 18-wheeler is to stop for gas at that gas station. Or the person observes an ambulance with its lights on, speeding on a trajectory in the general direction of the hospital, and is thereby readily able to infer that its destination/goal is the hospital.
Or observing a mini-van on Saturday morning, driving on a trajectory down a road near the local athletic fields, and upon further observing the soccer team stickers on the back of the mini-van, inferring that the goal is likely the nearby soccer fields.
The extracting is merely the final part of this mental process, e.g. writing down various pieces of information, e.g. make/model of the car, location of the car (e.g. referenced to a mile marker in visible range), etc.
Under the broadest reasonable interpretation, these limitations are process steps that cover mental processes including an observation, evaluation, judgment or opinion that could be performed in the human mind or with the aid of physical aids but for the recitation of a generic computer component. If a claim, under its broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components, then it falls within the "Mental Process" grouping of abstract ideas. A person would readily be able to perform this process either mentally or with the assistance of physical aids. See MPEP § 2106.04(a)(2).
To clarify, see the USPTO 101 training examples, available at https://www.uspto.gov/patents/laws/examination-policy/subject-matter-eligibility. In particular, with respect to the physical aids, see example # 45, analysis of claim 1 under step 2A prong 1, including: “Note that even if most humans would use a physical aid (e.g., pen and paper, a slide rule, or a calculator) to help them complete the recited calculation, the use of such physical aid does not negate the mental nature of this limitation.”; also see example # 49, analysis of claim 1, under step 2A prong 1: “Moreover, the recited mathematical calculation is simple enough that it can be practically performed in the human mind. Even if most humans would use a physical aid, like a pen and paper or a calculator, to make such calculations, the use of a physical aid would not negate the mental nature of this limitation.”
As such, the claims recite an abstract idea of both a mental process and mathematical concept.
Step 2A, prong 2
The claimed invention does not recite any additional elements that integrate the judicial exception into a practical application. Refer to MPEP §2106.04(d).
The following limitations are merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f), including the “Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more”:
Preambles of the independent claims recite mere instructions to do it on a computer with generic computer components
The following limitations are considered as generally linking to a particular technological environment of simulating the scenario, as mere instructions to “apply it” with results-oriented functional language that provides no details on how this simulation is to be performed in a technological manner nor how the agent decision logic is to actually do the implementing in a technological manner (do note: page 27, ¶ 1: “target speeds may be set along the path which the agent will seek to match, but the agent decision logic 210 might be permitted to reduce the speed of the external agent below the target at any point along the path in order to maintain a target headway from a forward vehicle.” - people routinely do this mentally and are legally required to when driving, i.e. maintaining a safe braking distance from the car in front of them); and furthermore this is an insignificant application/insignificant computer implementation of the abstract idea:
run a simulation based on the extracted driving scenario, in which an ego agent and the non-ego agent each exhibit closed-loop behaviour, wherein the closed-loop behaviour of the ego agent is determined by autonomous decisions taken in the AV stack under testing in response to simulated inputs, reactive to the non-ego agent;
and apply agent decision logic to determine the closed-loop behaviour of the non-ego agent in the simulation, by implementing the inferred goal or behaviour, reactive to the ego agent.
A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. See MPEP § 2106.04(d).
MPEP 2106.04(II)(A)(2) “…Instead, under Prong Two, a claim that recites a judicial exception is not directed to that judicial exception, if the claim as a whole integrates the recited judicial exception into a practical application of that exception. Prong Two thus distinguishes claims that are "directed to" the recited judicial exception from claims that are not "directed to" the recited judicial exception…Because a judicial exception is not eligible subject matter, Bilski, 561 U.S. at 601, 95 USPQ2d at 1005-06 (quoting Chakrabarty, 447 U.S. at 309, 206 USPQ at 197 (1980)), if there are no additional claim elements besides the judicial exception, or if the additional claim elements merely recite another judicial exception, that is insufficient to integrate the judicial exception into a practical application. See, e.g., RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1327, 122 USPQ2d 1377 (Fed. Cir. 2017) ("Adding one abstract idea (math) to another abstract idea (encoding and decoding) does not render the claim non-abstract"); Genetic Techs. Ltd. v. Merial LLC, 818 F.3d 1369, 1376, 118 USPQ2d 1541, 1546 (Fed. Cir. 2016) (eligibility "cannot be furnished by the unpatentable law of nature (or natural phenomenon or abstract idea) itself."). For a claim reciting a judicial exception to be eligible, the additional elements (if any) in the claim must "transform the nature of the claim" into a patent-eligible application of the judicial exception, Alice Corp., 573 U.S. at 217, 110 USPQ2d at 1981, either at Prong Two or in Step 2B” and MPEP § 2106(I): “Mayo, 566 U.S. at 80, 84, 101 USPQ2d at 1969, 1971 (noting that the Court in Diamond v. Diehr found “the overall process patent eligible because of the way the additional steps of the process integrated the equation into the process as a whole,”” – and see MPEP § 2106.05(e).
To further clarify, MPEP § 2106.04(II)(A)(1): “Alice Corp., 573 U.S. at 216, 110 USPQ2d at 1980 (citing Mayo, 566 US at 71, 101 USPQ2d at 1965). Yet, the Court has explained that ‘‘[a]t some level, all inventions embody, use, reflect, rest upon, or apply laws of nature, natural phenomena, or abstract ideas,’’ and has cautioned ‘‘to tread carefully in construing this exclusionary principle lest it swallow all of patent law” See also Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335, 118 USPQ2d 1684, 1688 (Fed. Cir. 2016) ("The ‘directed to’ inquiry, therefore, cannot simply ask whether the claims involve a patent-ineligible concept, because essentially every routinely patent-eligible claim involving physical products and actions involves a law of nature and/or natural phenomenon").”
As a point of clarity, RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1327, 122 USPQ2d 1377 (Fed. Cir. 2017) ("Adding one abstract idea (math) to another abstract idea (encoding and decoding) does not render the claim non-abstract") and Genetic Techs. Ltd. v. Merial LLC, 818 F.3d 1369, 1376, 118 USPQ2d 1541, 1546 (Fed. Cir. 2016) (eligibility "cannot be furnished by the unpatentable law of nature (or natural phenomenon or abstract idea) itself.") are discussed in MPEP § 2106.04(II)(A)(2), as well as MPEP § 2106.04(I): “Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1151, 120 USPQ2d 1473, 1483 (Fed. Cir. 2016) ("a new abstract idea is still an abstract idea") (emphasis in original).”
The claimed invention does not recite any additional elements that integrate the judicial exception into a practical application. Refer to MPEP §2106.04(d).
Step 2B
The claimed invention does not recite any additional elements/limitations that amount to significantly more.
The following limitations are merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f), including the “Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more”:
Preambles of the independent claims recite mere instructions to do it on a computer with generic computer components, and the simulator of claim 19.
The following limitations are considered as generally linking to a particular technological environment of simulating the scenario, as mere instructions to “apply it” with results-oriented functional language that provides no details on how this simulation is to be performed in a technological manner nor how the agent decision logic is to actually do the implementing in a technological manner (do note: page 27, ¶ 1: “target speeds may be set along the path which the agent will seek to match, but the agent decision logic 210 might be permitted to reduce the speed of the external agent below the target at any point along the path in order to maintain a target headway from a forward vehicle.” - people routinely do this mentally and are legally required to when driving, i.e. maintaining a safe braking distance from the car in front of them); and furthermore this is an insignificant application/insignificant computer implementation of the abstract idea:
run a simulation based on the extracted driving scenario, in which an ego agent and the non-ego agent each exhibit closed-loop behaviour, wherein the closed-loop behaviour of the ego agent is determined by autonomous decisions taken in the AV stack under testing in response to simulated inputs, reactive to the non-ego agent;
and apply agent decision logic to determine the closed-loop behaviour of the non-ego agent in the simulation, by implementing the inferred goal or behaviour, reactive to the ego agent.
In addition, the above insignificant extra-solution activities are also considered as well-understood, routine, and conventional activities, as discussed in MPEP § 2106.05(d):
Curiel-Ramirez, Luis A., et al. "Hardware in the loop framework proposal for a semi-autonomous car architecture in a closed route environment." International Journal on Interactive Design and Manufacturing (IJIDeM) 13.4 (2019): 1647-1658. § 2, incl. § 2.1: “The creation of simulation tools have become essential for the development of this type of technologies for autonomous vehicles (AV). These simulators have been created with different approaches and objectives, such as for the training of machine learning systems applied to control or vision systems; others for the mapping, location, path planning and sensor fusion of the vehicle. In this section we will present the state of the art of some of them, which were used for the development of certain implementations of the work. The simulation tools of autonomous vehicles have become very important in recent years. These simulators allow to simplify and optimize the systems of vision, control, mapping, location and other blocks that compose the autonomous vehicles. In the same way there are simulators of a more general use that allow to visualize and manage part of the systems that make up the autonomous vehicles” – then, see the other subsections of section 2 which discuss these various conventional simulators, e.g. § 2.1.1: “CARLA™ [9] is an open-source simulator for autonomous driving research. CARLA™ have been developed with the objective of working as a development, training, and validation of autonomous urban driving systems (Fig. 3). In addition to open-source code and protocols, CARLA™ provides open digital assets (urban layouts, buildings, vehicles) that were created for this purpose and can be used freely. The simulation platform supports flexible specification of sensor suites and environmental conditions.
CARLA™ have been used to study the performance of three approaches to autonomous driving: a classic modular pipeline, an end-to-end model trained via imitation learning, and an end-to-end model trained via reinforcement learning”, e.g. § 2.1.2: “…Developers can create accurate, detailed models of both systems and environments, providing them with intelligence by using methods such as deep learning, imitation learning and reinforcement learning. Tools such as Bonsai [6] can be used to train the models across a variety of environmental conditions and vehicle scenarios in the cloud, on Microsoft Azure, much faster and safer than is feasible in the real world. After training is complete, designers can deploy these trained models onto actual hardware [34].” – etc. see the other subsections for further clarification. See § 2.2 for more clarification as well.
Kakade, Hrishikesh, et al. "Autonomous Highway Overtaking." Han University of Applied Sciences. Master’s (2018). § 2.5, followed by § 5.1.
Codevilla, Felipe, et al. "Exploring the limitations of behavior cloning for autonomous driving." Proceedings of the IEEE/CVF international conference on computer vision. 2019. Abstract and § 1, including the third to last paragraph, then see § 2.
Feng, Shuo, et al. "Testing scenario library generation for connected and automated vehicles, part I: Methodology." IEEE Transactions on Intelligent Transportation Systems 22.3 (2020): 1573-1582. § I ¶¶ 1-4
Feng, Shuo, et al. "Testing scenario library generation for connected and automated vehicles, part II: Case studies." IEEE Transactions on Intelligent Transportation Systems 22.9 (2020): 5635-5647. § I, then see § III.C, then see page 11, col. 1, ¶ 2.
Fremont, Daniel J., et al. "Formal scenario-based testing of autonomous vehicles: From simulation to the real world." 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2020. § I incl. subsection related work. § II.C.
Huang, Xin, et al. "Online risk-bounded motion planning for autonomous vehicles in dynamic environments." Proceedings of the International Conference on Automated Planning and Scheduling. Vol. 29. 2019. Sections on “Approach” and “Intention-Aware Risk-Bounded Motion Planning” and algorithms 1-2. Then see the “Experiments” section on page 219, and fig. 3 and 5.
The claimed invention is directed towards an abstract idea of both a mathematical concept and a mental process without significantly more.
Regarding the dependent claims
Claim 2 adds more mere instructions to perform an abstract idea on a computer (the test oracle), with a mere data gathering step (the receiving step), and a mental process given the generality of the other steps recited, i.e. observe a trajectory/trace and mentally compare it (by judgement/evaluation) against performance metrics to score it, e.g. observe the “acceleration” merely output from the simulation (page 31, last paragraph) and compare it to some mental threshold for the maximum comfortable acceleration, e.g. a numerical threshold value such as 3.5 m/s^2. To clarify, page 31, second to last paragraph: “The performance metrics 254 can be based on various factors, such as distance speed etc. In the described system, these can mirror a set of applicable road rules, such as the Highway Code applicable to road users in the United Kingdom.” – see MPEP § 2106.04(a)(2)(III)(C): “Another example is FairWarning IP, LLC v. Iatric Sys., Inc., 839 F.3d 1089, 120 USPQ2d 1293 (Fed. Cir. 2016). The patentee in FairWarning claimed a system and method of detecting fraud and/or misuse in a computer environment, in which information regarding accesses of a patient’s personal health information was analyzed according to one of several rules (i.e., related to accesses in excess of a specific volume, accesses during a pre-determined time interval, or accesses by a specific user) to determine if the activity indicates improper access. 839 F.3d. at 1092, 120 USPQ2d at 1294. The court determined that these claims were directed to a mental process of detecting misuse, and that the claimed rules here were "the same questions (though perhaps phrased with different words) that humans in analogous situations detecting fraud have asked for decades, if not centuries." 839 F.3d. at 1094-95, 120 USPQ2d at 1296.”
Claim 3 – mere data outputting that is WURC in view of example 46, claim 1, and the WURC analysis of its displaying step; also see additional evidence in MPEP § 2106.05(d)(II)
Claim 4 – merely further limiting the mental process, and adding another step in it (e.g. observe a chart of data, and observe where values of the chart exceed a threshold value)
Claim 5 – rejected under a similar rationale as the applying limitation as discussed above, i.e. this is further merely expressing a desired result in purely functional language with no recitation of how to achieve this result
Claim 6 – adding a math concept of math calculations in textual form. Akin to MPEP § 2106.04(a)(2)(I)(C): “i. performing a resampled statistical analysis to generate a resampled distribution, SAP America, Inc. v. InvestPic, LLC, 898 F.3d 1161, 1163-65, 127 USPQ2d 1597, 1598-1600 (Fed. Cir. 2018), modifying SAP America, Inc. v. InvestPic, LLC, 890 F.3d 1016, 126 USPQ2d 1638 (Fed. Cir. 2018);” – to clarify, page 22 of the instant disclosure: “….To this end, a goal sampling component 150 is provided that samples a goal for each autonomous non-ego agent from a goal distribution inferred for that agent…” – also, page 38, ¶ 2: “For example, the goal recognition component 156 may predict a probability distribution P (GI 0) over a set of available goals G.”
Claim 7 – the “determining…” is a mental process (e.g. page 41, last paragraph: “…suitable goals may be hypothesised from the road layout alone (without taking into account any observed historical behaviour of the agent)….By way of example, if the other vehicle is currently driving on a multi-lane road, with no nearby junctions, the set of hypothesised goals may consist of "follow lane" and "switch lane".”) akin to the ones in the independent claims. Should this be amended to add in historical information, see MPEP § 2106.05(a)(I): “Examples that the courts have indicated may not be sufficient to show an improvement in computer-functionality… vii. Providing historical usage information to users while they are inputting data, in order to improve the quality and organization of information added to a database, because "an improvement to the information stored by a database is not equivalent to an improvement in the database’s functionality," BSG Tech LLC v. Buyseasons, Inc., 899 F.3d 1281, 1287-88, 127 USPQ2d 1688, 1693-94 (Fed. Cir. 2018); and” – followed by both a mental process and math concept of mathematical relationships in statistics of the expected trajectory model (instant disclosure, page 42, ¶¶ 4-5: “The expected trajectory model may simply be a (single) predicted trajectory for a given goal [a mental judgement/evaluation], but in the present examples it takes the form of a predicted trajectory distribution [a math concept that is simple enough for a person to do mentally as well] for the goal in question.” – see fig. 8A and 8B to clarify; and subsection F starting on page 50 to clarify on the trajectory distribution being a math concept expressed as mathematical equations, in particular eq. 6 and its later definition of “L”), followed by another mental process and math concept (the “determine a likelihood” is a math calculation in textual form); to clarify on the comparing with this – page 44 ¶ 2: “In other words, the goal recognition component 156 predicts, for each of the hypothesized goals, a set of one or more possible trajectories that the other vehicle might have taken in the time interval Δt and a likelihood of each trajectory, on the assumption that the other vehicle was executing that goal during that time period (i.e. what the other vehicle might have done during time interval Δt had it been executing that goal). This is then compared with the actual trace of the other vehicle within that time period (i.e. what the other vehicle actually did), to determine a likelihood of each goal for the time period Δt” – in view of pages 41-42 as cited above, i.e. this is a mental step, i.e. two goals (page 41: "follow lane" and "switch lane"), each goal with one possible trajectory, so observe the real trajectory (e.g. the vehicle is moving into the other lane of traffic) and thus determine the likelihood of it going to that goal is 1 (also, provide more information, e.g. more observed trajectories in this case, and it would likely merely go down to a coin flip, i.e. a 50/50 likelihood of one of the two goals).
Claim 9 – rejected under a similar rationale as claim 7 above
Claim 10 – rejected under a similar rationale as claim 7 above
Claim 11 – adding a math calculation in textual form, and mentally comparing the results of the calculation. See page 49, last two paragraphs to clarify.
Claim 13 – math calculations in textual form, for similar reasons as discussed above in view of MPEP § 2106.04(a)(2)(III)(C) for SAP v. InvestPic, followed by merely adding this information to later be used in the token post-solution activity (see above discussion in independents for the agent decision logic)
Claim 14 – mere data outputting with generic computer technology (the GUI) recited in a high level of generality – WURC in view of example 46, claim 1, step 2B for its displaying step and MPEP § 2106.05(d)(II)
Claim 15 – merely further limiting the mental process as discussed above
Claim 17 – mere instructions to do it on a computer, in view of MPEP § 2106.05(f): “Other examples where the courts have found the additional elements to be mere instructions to apply an exception, because they do no more than merely invoke computers or machinery as a tool to perform an existing process include: i. A commonplace business method or mathematical algorithm being applied on a general purpose computer, Alice Corp. Pty. Ltd. V. CLS Bank Int’l, 573 U.S. 208, 223, 110 USPQ2d 1976, 1983 (2014); Gottschalk v. Benson, 409 U.S. 63, 64, 175 USPQ 673, 674 (1972); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015);”
Claim 18 – purely results oriented, so see the rationale above for the agent decision logic
Claim 20 – rejected under a similar rationale as its parallel claims above
Claim 21 – a mental process. Page 19, ¶ 2: “The output of the test oracle 252 is informative to an expert 122 (team or individual), allowing them to identify issues in the stack 100 and modify the stack 100 to mitigate those issues (S 124). The results also assist the expert 122 in selecting further scenarios for testing (S 126), and the process continues, repeatedly modifying, testing and evaluating the performance of the stack 100 in simulation”
Claim 22 – generally linking to the technological environment of machine learning, and an insignificant application that is WURC in view of the above evidence. To clarify, the repeated evaluation is a mental process in view of page 19, ¶ 2: “The output of the test oracle 252 is informative to an expert 122 (team or individual), allowing them to identify issues in the stack 100 and modify the stack 100 to mitigate those issues (S 124). The results also assist the expert 122 in selecting further scenarios for testing (S 126), and the process continues, repeatedly modifying, testing and evaluating the performance of the stack 100 in simulation. The improved stack 100 is eventually incorporated (S 125) in a real-world AV 101, equipped with a sensor system 110 and an actor system 112.”
The claimed invention is directed towards an abstract idea of both a mathematical concept and a mental process without significantly more.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-2, 5, 15, 18-21, 24 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kabirzadeh, US 11,150,660.
Regarding Claim 1
Kabirzadeh teaches:
A computer system for testing an autonomous vehicle (AV) stack in simulation, the computer system comprising: memory configured to store computer-readable instructions; and one or more hardware processors coupled to the memory and configured to execute the computer-readable instructions, which upon execution cause the computer system to: (Kabirzadeh, abstract and figs. 1-3 along with accompanying description)
process real-world driving data to extract therefrom at least one observed trace of a real-world agent within a road layout, the observed trace having spatial and motion components; (Kabirzadeh, as cited above, e.g. fig. 3, # 304, # 310, and # 308, see fig. 5 # 502 and # 510 to clarify, then fig. 7 # 702-706, see col. 7-8 to clarify incl.: “The vehicle(s) 104 can include a computing device that includes a perception engine and/or a planner and perform operations such as detecting, identifying, segmenting, classifying, and/or tracking objects from sensor data collected from the environment 102… The vehicle computing device can use the sensor data to generate a trajectory [example of a trace] for the vehicle(s) 104…” – see the paragraph split between the columns in particular, and col. 8 ¶ 2)
apply at least one of goal recognition and behaviour recognition to the observed trace, to infer a goal or behaviour of the real-world agent within the road layout, and extract a driving scenario defining a version of the road layout and at least one non-ego agent to be simulated, the non-ego agent associated with the inferred goal or behaviour for implementing in simulation within the defined road layout; (Kabirzadeh, fig. 3, # 324-326 – to clarify, col. 13, ln. 55-65: “For the purpose of this discussion, a route can be a sequence of waypoints for traveling between two locations. As non-limiting examples, waypoints include streets, intersections, global positioning system (GPS) coordinates, etc.”, and the paragraph split between col. 4-5: “…In some examples, the SES can determine waypoint(s) [example of a goal inferred] and/or paths associated with a simulated object. For purposes of illustration only such waypoints or paths can be based at least in part on log data representing the object corresponding to the simulated object. In some examples, waypoints can be determined based on a curvature of a path segment or can be added manually in a user interface, discussed herein. Such waypoints or paths can be associated with various costs or weights that influence a behavior [example of behavior inferred] of the simulated object in the simulated environment… Such waypoints or paths can be associated with events or actions, such as a lane change action. For example, the log data can indicate that an object performed a lane change action that placed the object to the right of the vehicle. Additionally, the log data can indicate that the vehicle attempted to perform a lane change action to the right but did not complete the action due to the object.
In order to preserve this interaction in the simulated scenario, the SES can assign a waypoint associated with the trajectory of the simulated object such that the simulated object performs the lane change action and remains on the right side of the simulated vehicle.”
To further clarify, see col. 15-16, incl.: “The waypoint attributes component 242 can determine attributes associated with a waypoint… The waypoint attributes component 242 can determine attributes such as, for example, a speed, a steering angle, or a time associated with the waypoint (e.g., to enforce the simulated object to simulate a speed and/or a steering angle at a waypoint in the simulated environment, to arrive at a waypoint at a specified time in the simulated scenario (or within a period of time), and/or to perform a specified action at a specified time in the simulated scenario (or within a period of time)). In another example, a waypoint may enforce a location of a simulated object at a particular time in the simulated data. Of course, such waypoints may be associated with any one or more additional parameters, such as, for example, yaw rates, accelerations, predicted trajectories, uncertainties, and any other data associated with entities in the data. In some examples, a waypoint can be associated with various weight(s) that represent an "importance" of enforcing a behavior of the simulated object with respect to the waypoint. In some examples, the weight(s) associated with a waypoint can influence a cost associated with the simulated object deviating from or adhering to the waypoint behavior… For purposes of illustration only, the log data can indicate that the object traversed a trajectory that included executing a turn at 5 mph and with a steering angle of 30 degrees. The waypoint attributes component 242 can determine a waypoint associated with the simulated object that has attributes of a speed of 5 mph and a steering angle of 30 degrees at the location as represented in the log data. As the computing device(s) 232 executes the simulated scenario, the waypoints can guide the simulated object in the simulated environment by applying the attributes associated with the waypoints to the simulated object…”
Col. 20, ¶ 4 starting at ln. 31 to further clarify: “The waypoint attributes component 242 can determine the first waypoint 324 at location ① and the second waypoint 326 at location ②. When the computing device(s) 312 executes the simulated scenario, the computing device(s) 312 can generate a trajectory that respects or otherwise represents the attributes associated with the waypoints 324 and/or 326 (e.g., by minimizing a difference between a trajectory of the object in simulation and the observed trajectory of the object). For example, the simulated object 318 can traverse the simulated environment 316 such that, when the simulated object 318 reaches the first waypoint 324, the behavior of the simulated object 318 is substantially similar (at least in part and within a threshold) to the attributes associated with the first waypoint 324…”
To clarify, the Examiner notes the ego vehicle of Kabirzadeh is the “Simulated vehicle”, e.g. # 416, corresponding to the “vehicle”, e.g. # 408, and the non-ego vehicle is called the “object” (# 410)/”Simulated object” (# 418)
To clarify on behavior, also see col. 17, last paragraph: “In some instances, the scenario component 246 can determine, based on behavior data in the log data, that an object is an aggressive object, a passive object, a neutral object, and/or other types of behaviors and apply behavior instructions associated with the behavior (e.g., a passive behavior, a cautious behavior, a neutral behavior, and/or an aggressive behavior) to the simulated object”)
run a simulation based on the extracted driving scenario, in which an ego agent and the non-ego agent each exhibit closed-loop behaviour, wherein the closed-loop behaviour of the ego agent is determined by autonomous decisions taken in the AV stack under testing in response to simulated inputs, reactive to the non-ego agent; and apply agent decision logic to determine the closed-loop behaviour of the non-ego agent in the simulation, by implementing the inferred goal or behaviour, reactive to the ego agent. (Kabirzadeh, as was cited above, then see:
col. 4, ¶ 4: “For purposes of illustration only, in a scenario where a simulated object is following a simulated vehicle, if the simulated vehicle begins braking sooner or more aggressively than the braking behavior represented in the log data, the simulated object may modify its trajectory by braking sooner or changing lanes, for example, to avoid a collision or near-collision with the simulated vehicle” and col. 17, ¶ 3: “…In some instances, the scenario
component 246 can identify a simulated pedestrian model and apply it to the simulated objects associated with pedestrians. In some instances, the simulated object models can use controllers that allow the simulated objects to react to the simulated environment and other simulated objects (e.g., modeling physics-based behaviors and incorporating collision checking). For example, a simulated vehicle object can stop at a crosswalk if a simulated pedestrian crosses the crosswalk as to prevent the simulated vehicle from colliding with the simulated pedestrian.”, e.g. col. 21 ¶¶ 1-2: “…The simulated vehicle 416, however, can be configured to use a controller that is different than the controller used in vehicle 408. Therefore, at time T2B in the simulated environment 412, the simulated vehicle 416 can come to a stop at a position (e.g., not within the intersection) that is farther away from the crosswalk and the simulated object 414. Additionally, time T2B depicts the simulated object 418 coming to a stop such that it does not collide with simulated vehicle 416. By way of example, if the motion of simulated object 418 relied solely on the log data 402, the simulated object 418 would have collided with the simulated vehicle 416. However, the simulated model applied to the simulated object 418 allows the simulation component 314 to determine that the simulated vehicle 416 has come to a stop [i.e. reactive to the ego vehicle] and prevent a collision by stopping at an earlier point than indicated by the log data 402… Additionally, the simulation component 314 can determine a cost associated with the simulated vehicle 416 applying a minimum braking force, which may result in a collision or near-collision with the simulated object 418.
The simulation component 314 can, using cost minimization algorithms (e.g., gradient descent), determine a trajectory to perform the stop in a safe manner while minimizing or optimizing costs.” – to clarify how this implements the waypoints, see waypoints # 1-3 in fig. 5, and col. 22 ¶¶ 2-3: “The waypoint attributes component 242 can identify three
different waypoints e.g., (1, 2, 3) and the scenario component 246 can generate three different simulated environments 514, 516, and 518… The simulated object 520(1)-(3) traverses trajectory 522 (1)-(3) to avoid simulated vehicle 524(1)-(3). Each trajectory 522(1)-(3) can be associated with a cost and the controller associated with the simulated object 520(1)-(3) can determine the trajectory to minimize the cost”
To clarify, see the abstract: “The scenarios can be used for testing and validating interactions and responses of a vehicle controller within a simulated environment” and col. 3, ¶ 1: “The simulated vehicle can represent an autonomous vehicle that is controlled by an autonomous controller that can determine a trajectory, based at least in part, on the simulated environment”, e.g. col. 4, ¶¶ 3-4: “For purposes of illustration only, a simulated controller may represent different planning algorithms or driving behaviors (e.g., different acceleration or braking profiled) that may introduce differences between the log data and simulated data…. In some instances, the SES can associate a controller with the simulated object, where the controller can determine trajectories for the object that deviate from the log data. Thus, the simulated object can perform actions associated with the object as represented in the log data and/or perform actions that deviate from the object as represented in the log data. Thus, the simulated object can intelligently react to deviations in the behavior of the simulated vehicle…”)
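For purposes of illustration only (this is the Examiner's own simplified sketch, not code from Kabirzadeh; all names and values are hypothetical), the reactive closed-loop behaviour described in the passages above — a simulated object that follows its logged motion but deviates when the simulated vehicle behaves differently than in the log — can be sketched as:

```python
# Illustrative sketch (hypothetical names; not Kabirzadeh's implementation):
# a 1-D closed-loop simulation in which a simulated follower ("non-ego")
# reacts to the live position of a simulated lead vehicle ("ego") instead of
# blindly replaying its logged trajectory.

def run_closed_loop(ego_positions, follower_speed, dt=0.1, gap=5.0):
    """Follower advances at its logged speed unless the live gap to the
    ego closes below `gap`, in which case it brakes (deviates from the log)."""
    follower_pos = 0.0
    trace = []
    for ego_pos in ego_positions:
        # Reactive decision: stop if too close to the ego vehicle ahead.
        speed = follower_speed if ego_pos - follower_pos > gap else 0.0
        follower_pos += speed * dt
        trace.append(follower_pos)
    return trace
```

If the ego stops earlier than in the log, the follower in this sketch stops earlier too, mirroring the “stopping at an earlier point than indicated by the log data” behaviour quoted above.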
Regarding Claim 2
Kabirzadeh teaches:
The computer system of claim 1, comprising a test oracle configured to evaluate the performance of the AV stack in the simulation, by receiving a simulated ego trace of the ego agent, as generated in the simulation, and scoring the simulated ego trace against a set of predetermined performance metrics.
Kabirzadeh, col. 18 last paragraph to col. 19 ¶ 3: “Additionally, the simulation component 252 can determine an outcome for the simulated scenario. For example, the simulation component 252 can execute the scenario for use in a simulation for testing and validation. The simulation component 252 [can] generate the simulation data indicating how the autonomous controller performed (e.g., responded) and can compare the simulation data to a predetermined outcome and/or determine if any predetermined rules/assertions were broken/triggered. In some instances, the predetermined rules/assertions can be based on the simulated scenario (e.g., traffic rules regarding crosswalks can be enabled based on a crosswalk scenario or traffic rules regarding crossing a lane marker can be disabled for a stalled vehicle scenario). In some instances, the simulation component 252 can enable and disable rules/assertions dynamically as the simulation progresses. For example, as a simulated object approaches a school zone, rules/assertions related to school zones can be enabled and disabled as the simulated object departs from the school zone. In some instances, the rules/assertions can include comfort metrics that relate to, for example, how quickly an object can accelerate given the simulated scenario… Successful validation of a proposed controller system may subsequently be downloaded by (or otherwise transferred to) a vehicle for further vehicle control and operation.”
Then, see col. 21: “In conducting the stop as depicted at time T2B, the simulated vehicle 416 can perform a cost analysis. For example, the simulation component 314 can determine a cost associated with the simulated vehicle 416 applying a maximum braking (e.g., an emergency stop) which may result in excessive or uncomfortable accelerations for passengers represented in the simulated object 418. Additionally, the simulation component 314 can determine a cost associated with the simulated vehicle 416 applying a minimum braking force, which may result in a collision or near-collision with the simulated object 418. The simulation component 314 can, using cost minimization algorithms (e.g., gradient descent), determine a trajectory to perform the stop in a safe manner while minimizing or optimizing costs.” – i.e. there was a plurality of predetermined “comfort metrics”, such as ones associated with the acceleration of the simulated vehicle, against which the simulation was scored for whether or not it passed/failed, wherein these were also in dependence upon the simulated non-ego trajectory (e.g. so as to “stop in a safe manner”)
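For purposes of illustration only (a hypothetical sketch by the Examiner, not Kabirzadeh's implementation; all names are assumed), scoring a simulated ego trace against a predetermined comfort metric such as an acceleration bound might look like:

```python
# Illustrative test-oracle style check (hypothetical names): score a
# simulated ego trace against a predetermined comfort metric, here a bound
# on acceleration magnitude derived from successive speed samples.

def score_trace(speeds, dt, max_accel):
    """Return per-step |acceleration| scores and an overall pass/fail flag."""
    accels = [(b - a) / dt for a, b in zip(speeds, speeds[1:])]
    scores = [abs(a) for a in accels]
    return scores, all(s <= max_accel for s in scores)
```

A gentle stop passes such a metric while a harsh emergency stop fails it, matching the cost trade-off between “excessive or uncomfortable accelerations” and a collision quoted above.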
Regarding Claim 5
Kabirzadeh teaches:
The computer system of claim 1, wherein the one or more processors are configured to apply the agent decision logic to determine the closed-loop behaviour of the non-ego agent with the aim of matching target motion values along a spatial agent path, but with deviation from the target motion values permitted in reaction to the ego agent, the spatial agent path associated with the inferred goal or behaviour. (Kabirzadeh, as was cited above for the applying limitation, e.g. col. 4, lines 45-55: “For purposes of illustration only, in a scenario where a simulated object is following a simulated vehicle, if the simulated vehicle begins braking sooner or more aggressively than the braking behavior represented in the log data, the simulated object may modify its trajectory by braking sooner or changing lanes, for example, to avoid a collision or near-collision with the simulated vehicle.” – see above citations to clarify)
Regarding Claim 15.
Rejected under a similar rationale as claim 2 above.
Regarding Claim 18.
Kabirzadeh teaches:
The computer system of claim 1, wherein the agent decision logic is tuned so as to cause the non-ego agent to realize a trajectory that substantially corresponds to the observed trace in the event the behaviour of the ego agent in the simulation substantially matches the behaviour of a real ego agent in the real-world driving data. (Kabirzadeh, as cited above for the applying step, then see col. 2, last paragraph: “The simulated scenario can be identical to the captured environment or deviate from the captured environment” and col. 5 ¶ 1: “Additionally, the log data can indicate that the vehicle attempted to perform a lane change action to the right but did not complete the action due to the object. In order to preserve this interaction in the simulated scenario, the SES can assign a waypoint associated with the trajectory of the simulated object such that the simulated object performs the lane change action and remains on the right side of the simulated vehicle. Controllers associated with the simulated vehicle may then, in such examples, determine controls for the simulated vehicle which adapt to changes in the scenario while minimizing deviations from the paths as originally taken”)
Regarding Claim 19.
Rejected under a similar rationale as claim 1 above.
Regarding Claim 20.
Rejected under a similar rationale as claim 2 above.
Regarding Claim 21.
Rejected under a similar rationale as claim 2 above, in particular note col. 19 ¶ 3: “Successful validation of a proposed controller system may subsequently be downloaded by (or otherwise transferred to) a vehicle for further vehicle control and operation.” – in view of col. 4, ¶ 3: “In some instances, the simulated vehicle can be controlled using a controller that is the same as or that is different than the controller used by the vehicle that generated the log data… For purposes of illustration only, a simulated controller may represent different planning algorithms or driving behaviors (e.g., different acceleration or braking profiled) that may
introduce differences between the log data and simulated data.” – and col. 3, ¶ 2, and col. 21, ¶ 1: “By way of example, if the motion of simulated object 418 relied solely on the log data 402, the simulated object 418 would have collided with the simulated vehicle 416. However, the simulated model applied to the simulated object 418 allows the simulation component 314 to determine that the simulated vehicle 416 has come to a stop and prevent a collision by stopping at an earlier point than indicated by the log data 402.”
Regarding Claim 24.
Rejected under a similar rationale as claim 1 above.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 3-4 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kabirzadeh, US 11,150,660 in view of Kakade, Hrishikesh, et al. "Autonomous Highway Overtaking." Han University of Applied Sciences. Master’s (2018).
Regarding Claim 3
While Kabirzadeh alone does not explicitly teach the following, Kabirzadeh in view of Kakade teaches:
The computer system of claim 2, wherein the test oracle is configured to provide an output comprising a score-time plot for each performance metric. (Kabirzadeh, as cited above for claim 2, see col. 2 ¶ 2, incl.: “…the rules/assertions can include comfort metrics that relate to, for example, how quickly an object can accelerate given the simulated scenario.”
Taken in view of Kakade, abstract last two paragraphs, in particular: “The scope of this thesis is to develop the test protocol and automated driving system for the accelerative/normal and flying overtaking maneuver in a highway context. Firstly, the test protocol for the overtaking maneuver is developed which is in accordance with the ISO standards (ACC, LCA, BSM, LKA, and LDW), Euro NCAP (safety assist protocols), and various rules and regulations set by few governments (The Netherlands, UK, and Province of Alberta) for overtaking maneuver… Finally, the developed test protocol is verified and validated analytically (using mathematical equations) as well as virtually (automated driving system using PreScan) to check if the host vehicle performs the overtaking maneuver fail-safely… The sensor modeling, ADAS implementation, and the development of the complete automated driving system for the overtaking maneuver is carried out using PreScan and MATLAB/Simulink software.”
Then see § 7.1 for the “Scenario evaluation”, in particular in “Use case 1” in § 7.1.1 see the subsection on “Simulation results” incl. # 1 and # 2, incl.: “The speed profile of the host vehicle during the overtaking maneuver is shown in the MATLAB figure 7.7.
Host vehicle speed starts decreasing with ACC at t = 22.3 s and becomes 100 km/h at t = 24.5 s. At this point, the overtaking headway threshold condition is reached and the left lane change phase starts while the speed of host vehicle also starts increasing gradually. Host vehicle achieves its initial set speed at the end of the passing phase (t = 34.8 s).” and # 3: “The acceleration profile of the host vehicle during the overtaking maneuver is shown in the MATLAB figure 7.8. The maximum deceleration and acceleration during the maneuver is limited to -3.1 and 1.8 m/s2 respectively which is within the permissible limits as discussed in section 4.8.” – and see figs. 7.7-7.8, i.e. this is an example of a score-time plot, wherein this is within the thresholds specified in § 4.8; see § 7.1.2 # 2-3 as well – see the figures in particular
To clarify, see § 4.8, incl. # 3: “The maximum lateral acceleration, lateral deceleration, and axial acceleration was limited to 1.1, 0.41, 2.5 m/s2 respectively in [48]. As per the ISO standard 15622 (ACC), the average automatic deceleration of ACC systems shall not exceed 3.5 m/s2 (average over 2 s), the average rate of change of automatic deceleration (negative jerk) shall not exceed 2.5 m/s3 (average over 1 s). Automatic acceleration of ACC systems shall not exceed 2 m/s2 [67]. Thus, the longitudinal acceleration ranges between 2.5 and 7 m/s2 while lateral acceleration ranges between 1.1 and 2.9 m/s2 from the above investigation. Hence, the maximum longitudinal acceleration of 3.5 m/s2 is considered from the comfort aspects while developing test protocol for the highway overtaking maneuver.”)
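For purposes of illustration only (hypothetical names; the limits below merely echo the ISO 15622-style thresholds quoted above, and this is not code from either reference), computing a score-time series per performance metric — the data underlying a score-time plot — and checking it against fixed limits might look like:

```python
# Illustrative sketch (hypothetical names): derive one time series per
# performance metric from a speed profile, then check each series against a
# fixed per-metric limit in the style of the thresholds quoted from § 4.8.

def metric_time_series(speeds, dt):
    """Return a score-time series for each metric (acceleration, jerk)."""
    accel = [(b - a) / dt for a, b in zip(speeds, speeds[1:])]
    jerk = [(b - a) / dt for a, b in zip(accel, accel[1:])]
    return {"longitudinal_accel": accel, "jerk": jerk}

def within_limits(series, limits):
    # limits e.g. {"longitudinal_accel": 3.5, "jerk": 2.5}, per the quoted ISO values
    return {m: all(abs(v) <= limits[m] for v in vals) for m, vals in series.items()}
```

Each series in the returned dict is what would be plotted against time, one plot per metric, with the limit drawn as a horizontal threshold.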
Regarding Claim 4
Rejected under a similar rationale as claim 3 above.
Regarding Claim 14.
Rejected under a similar rationale as claim 3 above.
Claim(s) 6-7, 9-11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kabirzadeh, US 11,150,660 in view of Besse, Philippe C., et al. "Destination prediction by trajectory distribution-based model." IEEE Transactions on Intelligent Transportation Systems 19.8 (2017): 2470-2481.
Regarding Claim 6
While Kabirzadeh does not explicitly teach the following, Kabirzadeh in view of Besse teaches:
The computer system of claim 1, wherein the one or more processors are configured to infer the goal or behaviour probabilistically as a distribution over available goals or behaviours. (Kabirzadeh, as was cited above, wherein this first determines trajectories from log data, and then determines waypoints for the trajectories as goals for the later simulated trajectories to reach
As taken in further view of Besse:
Besse, abstract: “In this paper, we propose a new method to predict the final destination of vehicle trips based on their initial partial trajectories… We present how this model can be used to predict the final destination of a new trajectory based on their first locations using a two-step procedure: we first assign the new trajectory to the clusters it most likely belongs. Second, we use characteristics from trajectories inside these clusters to predict the final destination.” – then see § I, second to last paragraph to clarify, followed by § III and § IV, then see § IV.A: “For a new trajectory, we want to be able to predict its final destination. We only observe the beginning of its path, which is represented as a succession of locations in R2, and we want to evaluate its probability to belong to the each cluster of trajectory…. The complete set of trajectories is then modelled by the set of K GMM’s, one for each set of points, Pm. Each of these sets has been partitioned into km groups: C(Pm) = {Pm 1 , . . . ,Pm km }. Using this modelling procedures, we obtain several cluster of locations, each one corresponding to a mode of the estimated Gaussian mixture distribution… Now that we have described the space, we want to use the model to predict the final destination of a new trajectory in progress:… For that, we want to be able to assign the new trajectory to the cluster of trajectories it most likely belongs. For this purpose we compute the simple score, sm(T c) for all the GMMs m. The score is the value of the likelihood function of m given the points that compose the trajectory T c. It represents how likely the trajectory T c belongs to the cluster m…” – i.e. 
the probability of the trajectory belonging to each cluster [example of a distribution, to clarify see definitions 4-5; to further clarify § VII ¶ 1: “Then, it models main traffic flow patterns within each trajectory’s cluster by a mixture of 2d-Gaussian [probability] distributions.”] is used to infer/predict the final destination
See definition 6 and eq. 4-5 to further clarify: “In this way, we can assigned the trajectory to the cluster with the highest simple score…” – see subsection §IV.B for more details on the scoring technique, incl.: “The likelihood score as defined in Definition 6 does not take into account contextual information. However, we can assume that prior knowledge may help to discriminate the trajectories. Indeed, a path may more likely be taken than an other at a given hour of the day or day of the week. We look forward to verifying this hypothesis by including auxiliary weights. For this we define a new complete score taking into account the following weights.” – then see subsection §IV.C: “We present here how our model can be used to predict the final destination of the user trips. We have defined, in the previous sections, a simple score and a complete score for each trajectory to belong to a cluster of trajectory. Hence we can assign the new trajectory to the clusters it most likely belongs. We can then use the information from the trajectories that compose these clusters to predict the final destination of the new trajectory. From this, we define two different methods for predicting final destination…
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings from Kabirzadeh, which determined waypoints associated with a trajectory, with the teachings from Besse on a system which predicted the end destination (the final waypoint) based on partial trajectories. The motivation to combine would have been, per § VI.A, that: “First of all, if we prepare our learning datasets to match most of the trajectories within the test dataset, as the other methods did, we proved that our method is very competitive with a ranking among the first deciles…. However, once our model has been learned, our methodology can take into account new location of the test trajectories during its completion without needing to produce a new learning, which it is not the case with the winning solution. Secondly, our method also produced good results considering that the learning dataset has not been developed to match the test dataset, which is the case in real applications where we cannot afford to fit the learning dataset to the trajectory we want to predict. Hence our model can be re-used directly for a different test dataset, and can also be used to predict the destination within the same trajectory, without requiring a new training, something which other methods do not allow.” – also, see § VI.B: “Our method was designed to provide a forecast based on rigid models learnt using clusters of trajectories. It provides a deep understanding of the main streams and paths of vehicles in a city that reflect the behavior of drivers… Yet our forecast can be easily used to provide models of road behavior that explain the prediction obtained, as we have shown in the different Figures, Section V…. The probability of different final destination points for a trajectory at 6 different rates of trip completion is displayed in this figure.
At each moment, we can observe the probability of belonging to each cluster of trajectories and their corresponding final destinations…”)
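For purposes of illustration only (the Examiner's simplification: one isotropic Gaussian per cluster, whereas Besse et al. use Gaussian mixture models; all names are hypothetical), the two-step procedure — score the partial trajectory under each cluster's model, assign it to the highest-scoring cluster, and predict that cluster's mean final destination — can be sketched as:

```python
# Illustrative sketch (hypothetical names; simplified from Besse et al.):
# score a partial trajectory under each cluster's Gaussian model, assign it
# to the highest-scoring cluster, and predict that cluster's mean destination.
import math

def log_gauss(point, mean, sigma):
    """Log-density of a 2-D point under an isotropic Gaussian."""
    return sum(-((x - m) ** 2) / (2 * sigma ** 2)
               - math.log(sigma * math.sqrt(2 * math.pi))
               for x, m in zip(point, mean))

def predict_destination(partial, clusters):
    """clusters: list of dicts {"mean": (x, y), "sigma": s, "dest": (x, y)}.
    Returns the predicted destination and the per-cluster likelihood scores."""
    scores = [sum(log_gauss(p, c["mean"], c["sigma"]) for p in partial)
              for c in clusters]
    best = max(range(len(clusters)), key=scores.__getitem__)
    return clusters[best]["dest"], scores
```

Normalizing the per-cluster scores would give the distribution over clusters (and hence over destinations) referenced in the quoted passages.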
Regarding Claim 7
Kabirzadeh, in view of Besse, teaches:
The computer system of claim 1, wherein the one or more processors are configured to infer the goal or behaviour probabilistically by: determining the set of available goals or behaviours for the real-world agent; for each of the available goals or behaviours, determining an expected trajectory model; and comparing the observed trace of the real-world agent with the expected trajectory model for each of the available goals or behaviours, to determine a likelihood of that goal or behaviour, thus determining a distribution over the available goals or behaviours. (Kabirzadeh, as was taken in view of Besse above,
e.g. eq. 7 of Besse: “where dm is the mean of the locations of all final destinations of the trajectories in cluster T m,” – i.e. there is a set of destinations available for the real-world agent
e.g. Besse, as discussed in more detail above but see eq. 7-8, i.e. each destination is associated with “trajectories in [each] cluster” – i.e. § VI.B ¶ 1: “The probability of different final destination points for a trajectory at 6 different rates of trip completion is displayed in this figure. At each moment, we can observe the probability of belonging to each cluster of trajectories and their corresponding final destinations. The more likely that the trajectory belongs to a cluster, the more visibly this cluster is displayed on the plot” – wherein, as discussed above, Besse is comparing the observed partial trajectories with the clusters of trajectories (expected trajectory model) to determine the likelihood/probability of belonging to a particular cluster [distribution, as clarified on above] and their corresponding “final destinations” [note the plural, note eq. 7, i.e. each trajectory in each cluster is associated with a corresponding final destination, and the “dm” in eq. 7 is “the mean of the locations of all final destinations”])
Rationale to combine is the same as discussed above for claim 6.
Regarding Claim 9
Kabirzadeh, in view of Besse, teaches:
The computer system of claim 7, wherein the expected trajectory model is a single predicted trajectory associated with that goal or behaviour or a distribution of predicted trajectories associated with that goal or behaviour. (Kabirzadeh, as was taken in view of Besse as discussed above for claims 6-7).
Regarding Claim 10.
Kabirzadeh, in view of Besse, teaches:
The computer system of claim 7, wherein the one or more processors are configured to use the observed trace to predict a best-available trajectory model for the goal or behaviour, said comparison comprising comparing the best-available trajectory model with the expected trajectory model. (Kabirzadeh, in view of Besse, teaches this for the reasons discussed above, i.e. it is finding the most likely cluster that the observed trajectory belongs to by comparing it with each cluster (see the scoring in eq. 7-8 and the like) to find the one with the highest score)
Regarding Claim 11.
Kabirzadeh, in view of Besse, teaches:
The computer system of claim 10, wherein a defined reward function is applied to both the expected trajectory model and the best-available trajectory model for each goal, to determine respective rewards of those trajectory models, wherein said comparison comprises comparing those rewards. (Kabirzadeh, as was taken in view of Besse, as discussed above, in particular see the scoring functions [each an example of a reward function] that were applied to the trajectories (and clusters thereof))
Regarding Claim 17.
Kabirzadeh in view of Besse teaches:
The computer system of claim 1, wherein the one or more processors are configured to apply one or more non-real time perception algorithms to the real-world driving data, in order to extract the observed trace. (To clarify on the BRI, see pages 20-21, paragraph split between the pages, also page 13, last paragraph)
See Kabirzadeh, as was cited above for the waypoints based on the trajectory which was based on sensor data [real-world driving data] but this does not teach applying a non-real time perception algorithm to the sensor data for extracting the trajectory
However, this would have been obvious when taken in view of Besse, as cited above for claim 6, with the same rationale to combine – to clarify, Besse as cited above is using partial trajectories obtained from sensor logs and processing them to obtain predictions of future likely trajectories, followed by the likely destinations corresponding to those trajectories – see § III, including definition 2, then § IV: “After obtaining clusters of trajectories that discriminate the main patterns of the traffic flow in the city, we aim to predict the final destination of a vehicle for which we only observe the beginning of its path. Hence, We observe a succession of locations in R2…” – e.g. fig. 6, as discussed in subsection C on page 2476, definition 11: “…For San-Francisco, we can observe that the second method gives best results especially at the beginning of the trajectories where Qpred is 400 meters better using pred2. As the trajectories progress, the results continue to be better with pred2 but the difference between the two methods decreases and after 50% of trajectory completion, the difference is less than 50 meters….” – see § VI subsection B to further clarify, ¶¶ 1-2 in particular: “Our method was designed to provide a forecast based on rigid models learnt using clusters of trajectories. It provides a deep understanding of the main streams and paths of vehicles in a city that reflect the behavior of drivers… At each moment, we can observe the probability of belonging to each cluster of trajectories and their corresponding final destinations. The more likely that the trajectory belongs to a cluster, the more visibly this cluster is displayed on the plot. This Figure shows that at each trip completion, we are able to associate a trajectory, to the group of trajectories it resembles.
We have used these associations to predict the final destination of the trajectory, but many others features can be used for different objectives.” And § VIII: “This prediction is based on the initial location of the trajectory. Since we model the whole path [including the future trajectories to the final destination], the prediction can be accomplished at any time during trajectory completion”
Then see § VI subsection C which discusses the substantial computational time, i.e. it is not a real-time algorithm, but rather an offline one (i.e. not one “that can feasibly be implemented within an AV stack 100 to facilitate real-time planning/decision making.” per instant disclosure pages 20-21, paragraph split between the pages)
Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kabirzadeh, US 11,150,660 in view of Besse, Philippe C., et al. "Destination prediction by trajectory distribution-based model." IEEE Transactions on Intelligent Transportation Systems 19.8 (2017): 2470-2481 in further view of Joseph, Joshua, et al. "A Bayesian nonparametric approach to modeling motion patterns." Autonomous Robots 31.4 (2011): 383-400.
Regarding Claim 13.
While Kabirzadeh, in view of Besse, does not explicitly teach the following feature, it would have been obvious when Kabirzadeh was taken in view of Besse and in further view of Joseph:
The computer system of claim 6, wherein the one or more processors are configured to sample a goal or behaviour from the distribution over the possible goals or behaviours, the agent decision logic configured to determine the closed-loop behaviour of the non-ego agent based on the sampled goal or behaviour. (Kabirzadeh for applying the agent decision logic as discussed above based on the waypoints, as was taken in view of Besse as discussed above – to clarify, see Besse, eq. 7-8, wherein this is using the “mean of the locations of all final destinations of the trajectories in cluster T m”, i.e. “Both final destination formulas are computed using the mean of the locations of all final destinations dm which is a constant, and the simple score, sm(T c), which, as we have seen Equation 6”
However, this does not teach the sampling, but rather is taking a mean/average of the distribution, but this distinction would have been obvious when taken in further view of Joseph, abstract: “…We propose modeling target motion patterns as a mixture of Gaussian processes (GP) with a Dirichlet process (DP) prior over mixture weights…” then, see § 4.2: “Since our agent sees a target’s location only when the target is within a given observation radius, the target trajectory that the agent observes will often be disjoint sections of the target’s full trajectory. Fortunately, the Gaussian process does not require continuous trajectories to be trained, and the Dirichlet process mixture model can be used to classify partial paths that contain gaps during which the vehicle was not in sight. In this sense, the inference approach for the full information case (Sect. 3.2) also applies to the partial information case. However, using only the observed locations ignores a key piece of information: whenever the agent does not see the target, it knows that the target is not nearby. In this way, the lack of observations actually provides (negative) information about the target’s location. To leverage this information, we use Gibbs sampling to sample the unobserved target locations as well as the trajectory clusterings. Once the partially observed trajectories are completed, inference proceeds exactly as in the full information case. Specifically, we alternate resampling the cluster parameters (Sect. 3.2) with resampling the unobserved parts of each target’s trajectory. Given all of the other trajectories in an incomplete trajectory’s cluster, we can sample the missing sections using the prediction approach in Sect. 3.2.2; this approach also ensures that the filled in trajectories connect to observed segments smoothly.
If the sampled trajectory crosses a region where the agent could have observed it—but did not—then that sample is rejected, and we sample a new trajectory completion. This rejection sampling approach ensures that we draw motion patterns consistent with all of the available information (see Algorithm 2). To predict future target positions, several of the sampled trajectory completions are retained and averaged to produce a final prediction. Each trajectory completion suggests a different Gaussian process motion model, and is weighted using Bayesian model-averaging. Using the final velocity prediction, computed as the weighted average of individual model predictions, we can then apply the prediction and classification approach in Sect. 3.2.2 for intercepting and tracking new targets.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings from Kabirzadeh, as was taken in view of Besse above, on the use of log data from sensors to determine trajectories and waypoints (Kabirzadeh), as was combined with the particular technique of Besse as cited above, with the teachings from Joseph on sampling from the distributions and averaging the sampled predictions. The motivation to combine would have been that “Since our agent sees a target’s location only when the target is within a given observation radius, the target trajectory that the agent observes will often be disjoint sections of the target’s full trajectory. Fortunately, the Gaussian process does not require continuous trajectories to be trained, and the Dirichlet process mixture model can be used to classify partial paths that contain gaps during which the vehicle was not in sight…. This rejection sampling approach ensures that we draw motion patterns consistent with all of the available information (see Algorithm 2)…” (Joseph, § 4.2 as cited above).
Claim(s) 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kabirzadeh, US 11,150,660 in view of Caldwell et al., US 11,565,709.
Regarding Claim 22.
Kabirzadeh in view of Caldwell teaches:
The method of claim 19, wherein the AV stack comprises at least one trainable machine learning component, the method performed multiple times as part of a structured training process, wherein the performance of the AV stack is evaluated in each simulation, and that evaluation is used to train parameters of the machine learning component. (Kabirzadeh, as cited above for claim 2; then see Caldwell, abstract: “…The computing system may calculate performance metrics associated with the actions performed by the vehicle in the simulation as directed by the autonomous controller. The computing system may utilize the performance metrics to verify parameters of the autonomous controller (e.g., validate the autonomous controller) and/or to train the autonomous controller utilizing machine learning techniques to bias toward preferred actions.”
Then see Caldwell, fig. 3, for the “Metrics” boxes, and do note the “Comfort metric”, along with “Safety” and “Time to Destination” – clarified in col. 4, ¶ 2 and col. 13, ¶¶ 2-4, and col. 20, ¶¶ 2-3. Then see col. 31, second to last paragraph: “Based on a determination that the performance metric is associated with the threshold level ("Yes" at operation 510), the process, at operation 512, may include training (e.g., modifying) the autonomous controller to bias toward the vehicle action. In various examples, training the autonomous controller may include modifying one or more parameters associated therewith. In some examples, the training may be performed based on a cost associated with a vehicle action. In such examples, the computing system may train the autonomous controller to bias toward low cost vehicle actions. In various examples, one or more other vehicles may be controlled based on user input, such as via a user interface. In such examples, the training of the autonomous controller may be performed utilizing user input. In some examples, the training may be performed utilizing machine learning techniques. In such examples, the data associated with the scenario (e.g., object position, object movement, vehicle position, vehicle action, etc.) may be utilized as training data to bias the autonomous controller to perform the action in a similar scenario. In at least some examples, such machine learning techniques may comprise reinforcement learning.”
As to performing this multiple times, see the flowchart in fig. 5, i.e. the loop iterates until the simulation results are evaluated to be successful (the “Yes” paths of fig. 5; see the accompanying descriptions to clarify).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings from Kabirzadeh on the AV simulation system from Zoox, Inc., which scored simulations with comfort metrics, with the teachings from Caldwell on a similar AV simulation system from Zoox, Inc., wherein this includes “The computing system may calculate performance metrics associated with the actions performed by the vehicle in the simulation as directed by the autonomous controller. The computing system may utilize the performance metrics to verify parameters of the autonomous controller (e.g., validate the autonomous controller) and/or to train the autonomous controller utilizing machine learning techniques to bias toward preferred actions.” (Caldwell, abstract). The motivation to combine would have been that “Thus, the techniques described herein may significantly improve the performance of the autonomous controller and greatly improve the safety of vehicle operations” (Caldwell, col. 2, ¶ 1).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Birek, Lech, et al. "A novel Big Data analytics and intelligent technique to predict driver's intent." Computers in Industry 99 (2018): 226-240. Abstract and §§ 2-3.
Krumm, John, and Eric Horvitz. "Predestination: Inferring destinations from partial trajectories." International Conference on Ubiquitous Computing. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. Abstract and § 4.
Panahandeh, Ghazaleh. "Driver route and destination prediction." 2017 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2017. Abstract and § III.
Rehder, Tobias, et al. "Lane change intention awareness for assisted and automated driving on highways." IEEE Transactions on Intelligent Vehicles 4.2 (2019): 265-276. Abstract and §§ II-III.
Schreier, Matthias, Volker Willert, and Jürgen Adamy. "An integrated approach to maneuver-based trajectory prediction and criticality assessment in arbitrary road environments." IEEE Transactions on Intelligent Transportation Systems 17.10 (2016): 2751-2766. Abstract and § III.
Song, Weilong, Guangming Xiong, and Huiyan Chen. "Intention-aware autonomous driving decision-making in an uncontrolled intersection." Mathematical Problems in Engineering 2016.1 (2016): 1025349. Abstract and § III.
Li et al., US 10,031,526, abstract and cf. 4-5.
Davis et al., US 11,086,318, abstract and figs. 1 and 4.
O’Malley, US 11,526,721, abstract and cf. 1-8.
Tebbens et al., US 11,681,296, abstract and cf. 4-14.
Reschka et al., US 11,814,059, abstract and cf. 1-5.
Sun et al., US 2019/0129436, abstract and cf. 2-3.
Li et al., US 2019/0163182, abstract and cf. 1-2.
O’Malley, US 2020/0410063, abstract and cf. 1-7.
Narayanan et al., US 2021/0148727, abstract and cf. 1-4 and 6-8.
Bagschik et al., US 2021/0347372, abstract and cf. 1-10.
Bagschik et al., US 2021/0370972, abstract and cf. 1-6.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID A. HOPKINS whose telephone number is (571) 272-0537. The examiner can normally be reached Monday to Friday, 10 AM to 7 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ryan Pitaro can be reached at (571) 272-4071. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/David A Hopkins/ Primary Examiner, Art Unit 2188