DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Status of Claims
Claims 1-19 are presented for examination.
Claims 1-19 are rejected.
Response to Arguments
Applicant's arguments filed 02/03/2026 have been fully considered but they are not persuasive.
The Applicants argued that the prior art of record, i.e., Fairley, did not teach the “receiving, from a network entity, an indication of a traffic-rule model to be used by an ego vehicle, wherein the traffic rule model is at least one of time-bounded or of a static memory allocation” limitation. The examiner directs the Applicants’ attention to the following citations of Fairley, where Fairley teaches “As shown in FIG. 1, a system 100 for dynamic policy curation includes a computing system and interfaces with an autonomous agent. The system 100 can further include and/or interface with any or all of: a set of infrastructure devices, a communication interface, a teleoperator platform, a sensor system, a positioning system, a guidance system, and/or any other suitable components… The computing system preferably includes an onboard computing system arranged onboard (e.g., integrated within) the autonomous agent. Additionally or alternatively, the computing system can include any or all of: a remote computing system (e.g., cloud computing system, remote computing in communication with an onboard computing system, in place of an onboard computing system, etc.), a computing system integrated in a supplementary device (e.g., mobile device, user device, etc.), an edge device including mobile computing devices, and/or any other suitable computing systems and devices. In some variations, for instance, the autonomous agent is operable in communication with a remote or disparate computing system that may include a user device (e.g., a mobile phone, a laptop, etc.), a remote server, a cloud server, or any other suitable local and/or distributed computing system remote from the vehicle. 
The remote computing system can be connected to one or more systems of the autonomous agent through one or more data connections (e.g., channels), but can alternatively communicate with the vehicle system in any suitable manner.”, as taught in ¶ [0014], ¶ [0023]-¶ [0039], ¶ [0041]-¶ [0049], ¶ [0051]-¶ [0082], ¶ [0095]-¶ [0102], ¶ [0114]-¶ [0139], and exhibited in Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2.
Applicant(s) are reminded that the Examiner is entitled to give the broadest reasonable interpretation to the language of the claims. The Examiner is not limited to Applicant's definition, which is not specifically set forth in the claims. In re Tanaka, 193 USPQ 139 (CCPA 1977). Therefore, the previous rejection is maintained, with some elucidation to clarify the examiner’s position.
Concerning 35 U.S.C. 101, the arguments are not persuasive, as the amended claimed subject matter can still be performed in the human mind, as explained below:
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
101 Analysis – Step 1 – YES
Claim 1 is directed to “A method…”, claim 7 is directed to “An ego vehicle…”, and claim 13 is directed to “An apparatus…”. Therefore, claims 1, 7, and 13 are within at least one of the four statutory categories.
101 Analysis – Step 2A, Prong I
Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.
Independent claim 1 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection. Analogous independent claims 7 and 13 are rejected for the same reasons as representative claim 1, as discussed herein.
Claim 1 recites:
“A method of determining acceptable actions for autonomous driving, the method comprising: receiving, from a network entity, an indication of a traffic-rule model to be used by an ego vehicle, wherein the traffic rule model is at least one of time-bounded or of a static memory allocation; obtaining, at the ego vehicle via one or more sensors, a set of input values indicative of a driving environment of the ego vehicle; and evaluating, by the ego vehicle, the set of input values with the traffic-rule model to determine an answer set of one or more indications of whether one or more corresponding driving actions are acceptable in view of the set of input values and an applicable set of traffic rules corresponding to the driving environment.”
The examiner submits that the foregoing bolded limitation(s) constitute a “mental process” because, under its broadest reasonable interpretation, the claim covers performance of the limitation in the human mind. For example, the “evaluating…to determine…” step, in the context of the claims, encompasses a driver, an operator, or an observer mentally observing, checking, examining, analyzing, determining, evaluating, and judging the performance of vehicles on roads.
Examiner would also note MPEP 2106.04(a)(2)(III): The courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas - the 'basic tools of scientific and technological work' that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) ("'[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work'" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same). Accordingly, the "mental processes" abstract-idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions. Here, the determination is a form of making an evaluation and judgment based on observation by a driver, an operator, or a bystander.
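For illustration only (the code, inputs, and rules below are hypothetical and form no part of the claims or the prior art of record), the claimed “evaluating…to determine…” step reduces to the kind of simple rule checks that a driver or observer can perform mentally:

```python
# Hypothetical illustration: evaluating observed input values against a set
# of traffic rules to produce an "answer set" of acceptable driving actions.
# A human driver performs the same checks mentally (observe the light and
# the speed, judge whether proceeding is legal).

def evaluate(inputs, rules):
    """Return {action: acceptable?} for each candidate driving action."""
    return {action: all(rule(inputs) for rule in checks)
            for action, checks in rules.items()}

# Example input values a driver could observe directly (hypothetical).
inputs = {"speed_kph": 45, "speed_limit_kph": 50, "light": "green"}

# Example traffic rules (hypothetical): proceed only on green, under limit.
rules = {
    "proceed": [lambda s: s["light"] == "green",
                lambda s: s["speed_kph"] <= s["speed_limit_kph"]],
    "stop":    [],  # stopping is treated as always acceptable here
}

answer_set = evaluate(inputs, rules)
# answer_set == {"proceed": True, "stop": True}
```

Each check above is a single observation-and-judgment of the kind the mental-processes grouping describes.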
Accordingly, claim 1 recites at least one abstract idea.
101 Analysis – Step 2A, Prong II
Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements that merely use a computer to implement an abstract idea, add insignificant extra-solution activity, or generally link use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”):
“A method of determining acceptable actions for autonomous driving, the method comprising: receiving, from a network entity, an indication of a traffic-rule model to be used by an ego vehicle, wherein the traffic rule model is at least one of time-bounded or of a static memory allocation; obtaining, at the ego vehicle via one or more sensors, a set of input values indicative of a driving environment of the ego vehicle; and evaluating, by the ego vehicle, the set of input values with the traffic-rule model to determine an answer set of one or more indications of whether one or more corresponding driving actions are acceptable in view of the set of input values and an applicable set of traffic rules corresponding to the driving environment.”
The “obtaining, at the ego vehicle via one or more sensors, a set of input values indicative of a driving environment of the ego vehicle” step is an insignificant extra-solution activity that merely uses a processor to perform the process. In particular, the “wherein the traffic rule model is at least one of time-bounded or of a static memory allocation…” limitation amounts to mere post-solution activity and/or an instruction to apply the recited abstract idea (e.g., making an evaluation and judgment based on observation by a driver, an operator, or a bystander). Lastly, the “vehicle”, i.e., with its sensors, ECUs, controllers, and processors, merely describes how to generally “apply” the otherwise mental judgments in a generic or general-purpose computer environment, where the processor is recited as a generic processor performing the generic computer function of processing data. This generic-processor limitation is no more than mere instructions to apply the exception using a generic computer component and merely automates the determining step.
Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitations as an ordered combination or as a whole, the limitations add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field; apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition; implement or use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim; effect a transformation or reduction of a particular article to a different state or thing; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
101 Analysis – Step 2B
Regarding Step 2B of the 2019 PEG, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “…an ego vehicle…”, “at least one transceiver; at least one memory; and at least one processor communicatively coupled to the at least one transceiver and the at least one memory…”, and “…An apparatus comprising: at least one memory; and at least one processor communicatively coupled to the at least one memory…” amount to nothing more than applying the exception using a generic computer component. Generally applying an exception using a generic computer component cannot provide an inventive concept.
Further, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B to determine whether it is more than what is well-understood, routine, conventional activity in the field. The additional limitations “…an ego vehicle…”, “at least one transceiver; at least one memory; and at least one processor communicatively coupled to the at least one transceiver and the at least one memory…”, and “…An apparatus comprising: at least one memory; and at least one processor communicatively coupled to the at least one memory…” are well-understood, routine, and conventional activities using conventional sensors. As explained, the additional elements are recited at a high level of generality to simply implement the abstract idea and are not themselves being technologically improved. See, e.g., MPEP § 2106.05; Alice Corp. v. CLS Bank Int'l, 573 U.S. 208, 223 (2014) (“[T]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention”); Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354-55, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016) (selecting information for collection, analysis, and display constitutes insignificant extra-solution activity); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1243-44, 120 USPQ2d 1844, 1855-57 (Fed. Cir. 2016) (printing or downloading generated menus constitutes insignificant extra-solution activity). Hence, claim 1 is not patent eligible.
Dependent Claims
Dependent claims 2-6, 8-12, and 14-19 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application. For example, the limitation “…wherein the traffic-rule model is both time-bounded and has a static memory allocation…” is further directed toward the abstract idea, while the limitations “…wherein the traffic-rule model comprises a decision tree…” and “…further comprising updating the traffic-rule model based on model update information received by the ego vehicle from a network entity…” are further directed toward insignificant extra-solution activities. Therefore, dependent claims 2-6, 8-12, and 14-19 are not patent eligible under the same rationale as provided in the rejection of independent claims 1, 7, and 13.
As such, claims 1-19 are rejected under 35 USC § 101 as being drawn to an abstract idea without significantly more, and thus are ineligible.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Fairley et al. (US Pub. No. 2022/0001892 A1; hereinafter “Fairley”).
Consider claims 1, 7, and 13:
Fairley teaches an apparatus (Figs. 1-2 elements 100s, steps 200s), an ego vehicle (Figs. 1-2 elements 100s), a method of determining acceptable actions for autonomous driving (See Fairley, e.g., “…dynamic policy curation includes a computing system and interfaces with an autonomous agent. A method for dynamic policy curation includes collecting a set of inputs; processing the set of inputs; and determining a set of available policies based on processing the set of inputs. Additionally or alternatively, the method can include any or all of: selecting a policy; implementing a policy; and/or any other suitable processes..”, of Abstract, ¶ [0023]-¶ [0039], ¶ [0041]-¶ [0049], and Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2), the method comprising: receiving, from a network entity, an indication of a traffic-rule model to be used by an ego vehicle (See Fairley, e.g., “…a system 100 for dynamic policy curation includes a computing system and interfaces with an autonomous agent. The system 100 can further include and/or interface with any or all of: a set of infrastructure devices, a communication interface, a teleoperator platform, a sensor system, a positioning system, a guidance system, and/or any other suitable components… The computing system preferably includes an onboard computing system arranged onboard (e.g., integrated within) the autonomous agent. 
Additionally or alternatively, the computing system can include any or all of: a remote computing system (e.g., cloud computing system, remote computing in communication with an onboard computing system, in place of an onboard computing system, etc.), a computing system integrated in a supplementary device (e.g., mobile device, user device, etc.)…The remote computing system can be connected to one or more systems of the autonomous agent through one or more data connections (e.g., channels), but can alternatively communicate with the vehicle system in any suitable manner…”, of ¶ [0014], ¶ [0029], ¶ [0051]-¶ [0082], ¶ [0095]-¶ [0102], ¶ [0114]-¶ [0139], and Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2), wherein the traffic rule model is at least one of time-bounded or of a static memory allocation (See Fairley, e.g., “…S210 is preferably performed initially in the method 200, and optionally multiple times during operation of the autonomous agent, such as any or all of: continuously, at a predetermined frequency (e.g., at each election cycle), at a predetermined set of intervals, at a random set of intervals, and/or at any other times. Additionally or alternatively, S210 can be performed in response to a trigger, once during the method 200, in response to another process of the method 200, in parallel with another process of the method 200…”, of ¶ [0051]-¶ [0082], ¶ [0095]-¶ [0102], ¶ [0114]-¶ [0139], and Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2); obtaining, at the ego vehicle, a set of input values indicative of a driving environment of the ego vehicle via one or more sensors (See Fairley, e.g., “…collecting a set of inputs S210; processing the set of inputs S220; and determining a set of available policies based on processing the set of inputs S230. 
Additionally or alternatively...dynamically determine which policies are available to an autonomous agent, which subsequently enables any or all of: reducing computing resources required for selecting a policy; ensuring that the policy elected by the vehicle is best suited for the environment of the autonomous agent; the availability of a diverse and numerous set of general policies available to an autonomous agent; and/or perform any other suitable functions…”, of Abstract, ¶ [0023]-¶ [0039], ¶ [0041]-¶ [0049], ¶ [0051]-¶ [0082], and Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2); and evaluating, by the ego vehicle, the set of input values with the traffic-rule model to determine an answer set of one or more indications of whether one or more corresponding driving actions are acceptable in view of the set of input values and an applicable set of traffic rules corresponding to the driving environment (See Fairley, e.g., “…The available policies for each region can be determined based on any or all of: referencing a lookup table (e.g., which statically assigns a set of policies based on the vehicle being located in the region; which dynamically assigns a set of policies based on a particular location of the vehicle within the region and a set of rules; which assigns a set of rules and/or algorithms; etc.); evaluating a set of rules and/or algorithms (e.g., based on the set of inputs including a location of the vehicle; a hard coding of policies for each region of the map; etc.); evaluating a set of models (e.g., deep learning models); determining and/or adjusting a set of available policies based on an environmental and/or situational awareness of the agent; prompting, determining, and/or updating policies based on teleoperator input; and/or determining available policies in any other suitable ways…”, of ¶ [0051]-¶ [0082], ¶ [0095]-¶ [0102], ¶ [0114]-¶ [0139], and 
Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2).
Consider claims 2, 8, and 14:
Fairley teaches everything claimed as implemented above in the rejection of claims 1, 7, and 13. In addition, Fairley teaches wherein the traffic-rule model is both time-bounded and has the static memory allocation (See Fairley, e.g., “…S210 is preferably performed initially in the method 200, and optionally multiple times during operation of the autonomous agent, such as any or all of: continuously, at a predetermined frequency (e.g., at each election cycle), at a predetermined set of intervals, at a random set of intervals, and/or at any other times. Additionally or alternatively, S210 can be performed in response to a trigger, once during the method 200, in response to another process of the method 200, in parallel with another process of the method 200…The rules can include any or all of: a set of mappings (e.g., used in accordance with a set of lookup tables and/or databases), decision trees, equations, and/or any other components. Additionally or alternatively, inputs can be processed with a set of models (e.g., deep learning models, machine learning models, trained models, etc.), algorithms, and/or any other tools…”, of ¶ [0051]-¶ [0082], ¶ [0095]-¶ [0102], ¶ [0114]-¶ [0139], and Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2).
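For illustration only (a hypothetical sketch; the node layout, feature names, and thresholds below are invented and are not drawn from Fairley or the claims), a traffic-rule model can be both time-bounded and of static memory allocation by storing a fixed-depth decision tree in a preallocated node table, so that evaluation visits a bounded number of nodes and performs no run-time allocation:

```python
# Hypothetical sketch: a traffic-rule model that is both time-bounded and of
# static memory allocation. The tree is a complete binary tree of fixed
# depth, stored in a preallocated list (heap layout: node i's children are
# at 2*i+1 and 2*i+2), so evaluation is bounded by DEPTH and allocation-free.

DEPTH = 3  # fixed depth -> evaluation time is bounded

# Preallocated node table (static memory): internal nodes hold
# (feature_name, threshold); leaves hold the verdict directly.
NODES = [
    ("speed_kph", 50.0),   # root: at or under the (hypothetical) limit?
    ("gap_m", 10.0),       # under limit: is the gap to the lead vehicle small?
    ("gap_m", 10.0),       # over limit: verdict is False on both branches
    False, True, False, False,   # leaves: is the action acceptable?
]

def evaluate(features):
    """Walk at most DEPTH - 1 internal nodes; no memory is allocated."""
    i = 0
    while not isinstance(NODES[i], bool):
        feature, threshold = NODES[i]
        i = 2 * i + 1 if features[feature] <= threshold else 2 * i + 2
    return NODES[i]
```

Because the table size and depth are fixed before deployment, both the worst-case evaluation time and the memory footprint are known in advance.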
Consider claims 3 and 9:
Fairley teaches everything claimed as implemented above in the rejection of claims 1, 7. In addition, Fairley teaches wherein the traffic-rule model comprises a decision tree (See Fairley, e.g., “…The rules can include any or all of: a set of mappings (e.g., used in accordance with a set of lookup tables and/or databases), decision trees, equations, and/or any other components. Additionally or alternatively, inputs can be processed with a set of models (e.g., deep learning models, machine learning models, trained models, etc.), algorithms, and/or any other tools…”, of ¶ [0051]-¶ [0082], ¶ [0095]-¶ [0102], ¶ [0114]-¶ [0139], and Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2).
Consider claims 4 and 10:
Fairley teaches everything claimed as implemented above in the rejection of claims 1, 7. In addition, Fairley teaches wherein the traffic-rule model comprises a symbolic regression model (See Fairley, e.g., “…a scenario associated with the ego agent is determined with a classification module of the computing subsystem of the ego agent, wherein the classification module preferably includes one or more classifiers (e.g., perceptron classifier, Naïve Bayes classifier, decision tree classifier, logistic regression classifier, k-nearest neighbor classifier, neural network classifier, artificial neural network classifier, deep learning classifier, support vector machine classifier, etc.), further preferably one or more trained classifiers (e.g., machine learning classifier, deep learning classifier, etc.)…”, of ¶ [0051]-¶ [0082], ¶ [0095]-¶ [0102], ¶ [0114]-¶ [0139], and Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2).
Consider claims 5 and 11:
Fairley teaches everything claimed as implemented above in the rejection of claims 4, 10. In addition, Fairley teaches wherein evaluating the set of input values with the traffic-rule model comprises evaluating a respective symbolic regression model mathematical expression for each of the one or more indications (See Fairley, e.g., “…a scenario associated with the ego agent is determined with a classification module of the computing subsystem of the ego agent, wherein the classification module preferably includes one or more classifiers (e.g., perceptron classifier, Naïve Bayes classifier, decision tree classifier, logistic regression classifier, k-nearest neighbor classifier, neural network classifier, artificial neural network classifier, deep learning classifier, support vector machine classifier, etc.), further preferably one or more trained classifiers (e.g., machine learning classifier, deep learning classifier, etc.)…”, of ¶ [0051]-¶ [0082], ¶ [0095]-¶ [0102], ¶ [0114]-¶ [0139], and Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 
7-8 elements Policies 1-2), of the answer set, of whether the one or more corresponding driving actions are acceptable (See Fairley, e.g., “…The available policies for each region can be determined based on any or all of: referencing a lookup table (e.g., which statically assigns a set of policies based on the vehicle being located in the region; which dynamically assigns a set of policies based on a particular location of the vehicle within the region and a set of rules; which assigns a set of rules and/or algorithms; etc.); evaluating a set of rules and/or algorithms (e.g., based on the set of inputs including a location of the vehicle; a hard coding of policies for each region of the map; etc.); evaluating a set of models (e.g., deep learning models); determining and/or adjusting a set of available policies based on an environmental and/or situational awareness of the agent; prompting, determining, and/or updating policies based on teleoperator input; and/or determining available policies in any other suitable ways…”, of ¶ [0051]-¶ [0082], ¶ [0095]-¶ [0102], ¶ [0114]-¶ [0139], and Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2).
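For illustration only (the expressions below are invented and are not drawn from Fairley or the claims), evaluating a respective symbolic-regression mathematical expression for each indication of the answer set can be sketched as:

```python
# Hypothetical sketch: one symbolic-regression expression per indication of
# the answer set; the corresponding driving action is deemed acceptable when
# its expression evaluates to a non-negative score. Both expressions are
# invented for illustration only.
import math

EXPRESSIONS = {
    "proceed":   lambda x: x["gap_m"] - 0.5 * x["speed_kph"],
    "turn_left": lambda x: math.cos(x["heading_rad"]) * x["gap_m"] - 5.0,
}

def answer_set(inputs):
    """Evaluate each expression on the sensor inputs -> {action: acceptable?}."""
    return {action: expr(inputs) >= 0.0 for action, expr in EXPRESSIONS.items()}
```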
Consider claims 6 and 12:
Fairley teaches everything claimed as implemented above in the rejection of claims 1, 7. In addition, Fairley teaches further comprising updating the traffic-rule model based on model update information received by the ego vehicle from the network entity (See Fairley, e.g., “…determining and/or adjusting a set of available policies based on an environmental and/or situational awareness of the agent; prompting, determining, and/or updating policies based on teleoperator input; and/or determining available policies in any other suitable ways…”, of ¶ [0051]-¶ [0082], ¶ [0095]-¶ [0102], ¶ [0114]-¶ [0139], and Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2).
Consider claim 15:
Fairley teaches everything claimed as implemented above in the rejection of claim 13. In addition, Fairley teaches wherein the traffic-rule model is a selected traffic-rule model (See Fairley, e.g., “…evaluating a set of rules and/or algorithms (e.g., based on the set of inputs including a location of the vehicle; a hard coding of policies for each region of the map; etc.); evaluating a set of models (e.g., deep learning models); determining and/or adjusting a set of available policies based on an environmental and/or situational awareness of the agent; prompting, determining, and/or updating policies based on teleoperator input…”, of ¶ [0051]-¶ [0082], ¶ [0095]-¶ [0102], ¶ [0114]-¶ [0139], and Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2), and wherein to determine the selected traffic-rule model the at least one processor is configured to: determine a first potential traffic-rule model by fitting a first model to the plurality of input data sets (See Fairley, e.g., “…collecting a set of inputs S210; processing the set of inputs S220; and determining a set of available policies based on processing the set of inputs S230. Additionally or alternatively...dynamically determine which policies are available to an autonomous agent, which subsequently enables any or all of: reducing computing resources required for selecting a policy; ensuring that the policy elected by the vehicle is best suited for the environment of the autonomous agent; the availability of a diverse and numerous set of general policies available to an autonomous agent; and/or perform any other suitable functions…”, of Abstract, ¶ [0023]-¶ [0039], ¶ [0041]-¶ [0049], ¶ [0051]-¶ [0082], and Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2), the plurality of answer sets, and the set of traffic rules corresponding to each of the plurality of answer sets (See Fairley, e.g., “…determining a set of available policies based on processing the set of inputs S230...dynamically determine which policies are available to an autonomous agent, which subsequently enables any or all of: reducing computing resources required for selecting a policy; ensuring that the policy elected by the vehicle is best suited for the environment of the autonomous agent; the availability of a diverse and numerous set of general policies available to an autonomous agent; and/or perform any other suitable functions…”, of Abstract, ¶ [0023]-¶ [0039], ¶ [0041]-¶ [0049], ¶ [0051]-¶ [0082], and Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2); determine a second potential traffic-rule model (See Fairley, e.g., “…S210 is preferably performed initially in the method 200, and optionally multiple times during operation of the autonomous agent, such as any or all of: continuously, at a predetermined frequency (e.g., at each election cycle), at a predetermined set of intervals, at a random set of intervals, and/or at any other times…”, of Fig. 2 steps 200s, Fig. 4 elements multiple of policies) by fitting a second model to the plurality of input data sets (See Fairley, e.g., “…collecting a set of inputs S210; processing the set of inputs S220; and determining a set of available policies based on processing the set of inputs S230. Additionally or alternatively...dynamically determine which policies are available to an autonomous agent, which subsequently enables any or all of: reducing computing resources required for selecting a policy; ensuring that the policy elected by the vehicle is best suited for the environment of the autonomous agent; the availability of a diverse and numerous set of general policies available to an autonomous agent; and/or perform any other suitable functions…”, of Abstract, ¶ [0023]-¶ [0039], ¶ [0041]-¶ [0049], ¶ [0051]-¶ [0082], and Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2), the plurality of answer sets, and the set of traffic rules corresponding to each of the plurality of answer sets (See Fairley, e.g., “…collecting a set of inputs S210; processing the set of inputs S220; and determining a set of available policies based on processing the set of inputs S230...dynamically determine which policies are available to an autonomous agent, which subsequently enables any or all of: reducing computing resources required for selecting a policy; ensuring that the policy elected by the vehicle is best suited for the environment of the autonomous agent; the availability of a diverse and numerous set of general policies available to an autonomous agent…”, of Abstract, ¶ [0023]-¶ [0039], ¶ [0041]-¶ [0049], ¶ [0051]-¶ [0082], and Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2); and select one of the first potential traffic-rule model and the second potential traffic-rule model as the selected traffic-rule model (See Fairley, e.g., “…ensuring that the policy elected by the vehicle is best suited for the environment of the autonomous agent; the availability of a diverse and numerous set of general policies available to an autonomous agent; and/or perform any other suitable functions…”, of Abstract, ¶ [0023]-¶ [0039], ¶ [0041]-¶ [0049], ¶ [0051]-¶ [0082], and Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2).
Consider claim 16:
Fairley teaches everything claimed as implemented above in the rejection of claim 15. In addition, Fairley teaches wherein to select one of the first potential traffic-rule model and the second potential traffic-rule model as the selected traffic-rule model (See Fairley, e.g., “…collecting a set of inputs S210; processing the set of inputs S220; and determining a set of available policies based on processing the set of inputs S230...”, of Abstract, ¶ [0023]-¶ [0039], ¶ [0041]-¶ [0049], ¶ [0051]-¶ [0082], and Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2), the at least one processor is configured to select as the selected traffic-rule model the model, of the first potential traffic-rule model and the second potential traffic-rule model, that has a lower corresponding static memory allocation (See Fairley, e.g., “…dynamically determine which policies are available to an autonomous agent, which subsequently enables any or all of: reducing computing resources required for selecting a policy; ensuring that the policy elected by the vehicle is best suited for the environment of the autonomous agent; the availability of a diverse and numerous set of general policies available to an autonomous agent …”, of Abstract, ¶ [0023]-¶ [0039], ¶ [0041]-¶ [0049], ¶ [0051]-¶ [0082], and Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2).
Consider claim 17:
Fairley teaches everything claimed as implemented above in the rejection of claim 13. In addition, Fairley teaches further comprising at least one transceiver communicatively coupled to the at least one processor (See Fairley, e.g., “…The onboard computing system is preferably connected to the Internet via a wireless connection (e.g., via a cellular link or connection)…The system 100 preferably includes a communication interface in communication with the computing system, which functions to enable information to be received…transmitted from the computing system…a wireless communication system (e.g., Wi-Fi, Bluetooth, cellular 3G, cellular 4G, cellular 5G, multiple-input multiple-output or MIMO, one or more radios, or any other suitable wireless communication system or protocol)…”, of Abstract, ¶ [0023]-¶ [0039], ¶ [0041]-¶ [0049], ¶ [0051]-¶ [0082], and Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2), wherein the at least one processor is configured to transmit, via the at least one transceiver to the ego vehicle, the traffic-rule model (See Fairley, e.g., “…The set of inputs can additionally or alternatively include any or all of: inputs from one or more computing systems…historical information (e.g., learned fleet knowledge, aggregated information, etc.), information from one or more servers and/or databases (e.g., lookup tables), and/or any other suitable inputs…”, of Abstract, ¶ [0023]-¶ [0039], ¶ [0041]-¶ [0049], ¶ [0051]-¶ [0082], and Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2).
Consider claim 18:
Fairley teaches everything claimed as implemented above in the rejection of claim 13. In addition, Fairley teaches further comprising at least one transceiver communicatively coupled to the at least one processor (See Fairley, e.g., “…The onboard computing system is preferably connected to the Internet via a wireless connection (e.g., via a cellular link or connection)…The system 100 preferably includes a communication interface in communication with the computing system, which functions to enable information to be received…transmitted from the computing system…a wireless communication system (e.g., Wi-Fi, Bluetooth, cellular 3G, cellular 4G, cellular 5G, multiple-input multiple-output or MIMO, one or more radios, or any other suitable wireless communication system or protocol)…”, of Abstract, ¶ [0023]-¶ [0039], ¶ [0041]-¶ [0049], ¶ [0051]-¶ [0082], and Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2), wherein the at least one processor is configured to transmit, via the at least one transceiver to the ego vehicle, traffic-model update information for updating the traffic-rule model (See Fairley, e.g., “…evaluating a set of models (e.g., deep learning models); determining and/or adjusting a set of available policies based on an environmental and/or situational awareness of the agent; prompting, determining, and/or updating policies based on teleoperator input; and/or determining available policies in any other suitable ways...Any or all of the environmental information can be dynamically determined (e.g., based on a dynamically updated lookup table, based on a dynamically updated database, 3.sup.rd party website and/or sensors and/or database, based on sensor information, etc.), predetermined (e.g., based on a predetermined lookup table, dataset, and/or set of rules; etc.)…”, of Abstract, ¶ [0023]-¶ [0039], ¶ [0041]-¶ [0049], ¶ [0051]-¶ [0082], and Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2).
Consider claim 19:
Fairley teaches everything claimed as implemented above in the rejection of claim 13. In addition, Fairley teaches wherein to determine the plurality of answer sets for the plurality of input data sets the at least one processor is configured to apply a satisfiability solver to each of the plurality of input data sets (See Fairley, e.g., “…evaluating a set of models (e.g., deep learning models); determining and/or adjusting a set of available policies based on an environmental and/or situational awareness of the agent; prompting, determining, and/or updating policies based on teleoperator input; and/or determining available policies in any other suitable ways...”, of Abstract, ¶ [0023]-¶ [0039], ¶ [0041]-¶ [0049], ¶ [0051]-¶ [0082], and Fig. 1 elements 100s, Fig. 2 steps 200s, Fig. 4 elements multiple of policies, Figs. 5A-B elements A-G, Figs. 6A-B, Figs. 7-8 elements Policies 1-2).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
PHAM (US Pub. No.: 2025/0264337 A1) teaches “Apparatuses, systems, and methods relate to technology to identify travel data associated with a vehicle. The technology further identifies a travel route associated with the vehicle based on the travel data, identifies a selected rule from a plurality of rules based on one or more of a first characteristic of the travel route or a second characteristic of the vehicle, and determines a depletion mileage amount for the vehicle based on the selected rule.”
XU et al. (EP 4328765 A1) teaches “This application provides a vehicle driving policy recommendation method, including: collecting environment information; determining a driving scenario based on the collected environment information; then matching the driving scenario with a to-be-recommended policy library, where the to-be-recommended policy library may include a correspondence between a driving scenario and a to-be-recommended driving policy; and when the to-be-recommended driving policy is matched, displaying the to-be-recommended driving policy, so that a terminal device can execute the to-be-recommended driving policy according to a user instruction input by a user.”
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BABAR SARWAR whose telephone number is (571)270-5584. The examiner can normally be reached on Mon-Fri 9:00 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Faris S. Almatrahi can be reached on (313)446-4821. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BABAR SARWAR/Primary Examiner, Art Unit 3667