Prosecution Insights
Last updated: April 19, 2026
Application No. 17/638,829

Classification of AI Modules

Status: Non-Final OA (§101, §102, §103, §112)
Filed: Feb 27, 2022
Examiner: HAEFNER, KAITLYN RENEE
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: Volkswagen Aktiengesellschaft
OA Round: 3 (Non-Final)

Grant Probability: 50% (Moderate)
Expected OA Rounds: 3-4
Expected Time to Grant: 4y 2m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 50% (2 granted / 4 resolved), -5.0% vs TC avg
Interview Lift: +66.7% higher allowance among resolved cases with an interview
Avg Prosecution: 4y 2m (typical timeline)
Total Applications: 36 across all art units, 32 currently pending
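The "interview lift" above is a relative change in allowance rate between cases with and without an examiner interview. A minimal sketch of that arithmetic, using hypothetical per-group rates (the report does not state the underlying with/without rates, only the resulting lift):

```python
def interview_lift(rate_with: float, rate_without: float) -> float:
    """Relative change in allowance rate when an examiner interview was held."""
    return (rate_with - rate_without) / rate_without

# Hypothetical rates chosen only to illustrate the arithmetic:
# 75% allowance with an interview vs 45% without gives the reported +66.7%.
print(f"{interview_lift(0.75, 0.45):+.1%}")  # +66.7%
```

With only 4 resolved cases behind it, a lift figure like this is extremely noisy; treat it as directional at best.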

Statute-Specific Performance

§101: 32.6% (-7.4% vs TC avg)
§103: 31.1% (-8.9% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 22.2% (-17.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 4 resolved cases.
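Each delta above is the examiner's rate minus the Tech Center average, so the implied TC baseline can be backed out as rate − delta. A quick sketch with the figures copied from the table (the "single baseline" reading is my inference, not something the report states):

```python
# (examiner rate %, delta vs Tech Center average %) per statute, from the table above
per_statute = {
    "§101": (32.6, -7.4),
    "§103": (31.1, -8.9),
    "§102": (13.8, -26.2),
    "§112": (22.2, -17.8),
}

# Implied TC average = examiner rate minus the reported delta
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in per_statute.items()}
print(implied_tc_avg)  # every statute backs out to 40.0
```

That every statute implies the same 40.0% suggests the dashboard may be comparing against one TC-wide baseline rather than genuine per-statute averages.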

Office Action

Rejections under §101, §102, §103, and §112
DETAILED ACTION

This action is in response to the amendment filed 12/01/2025. Claims 1-2, 4-6, 8-10, 12-17, and 20 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/01/2025 has been entered.

Specification

The abstract of the disclosure is objected to because “and also relates to a classifier provided using such a method..” should read “and also relates to a classifier provided using such a method.”. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Objections

Claim 10 is objected to because of the following informalities: Regarding claim 10, lines 13-14, “at least one of the contextual parameter” should read “at least one of the contextual parameters”. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: the plurality of AI modules in claim 13.

Regarding the AI module, in the 2nd paragraph on page 20, the specification states: FIG. 8 schematically shows a system diagram of a solution for providing a classifier K for an AI module NNi for a processing of input data provided by a sensor system of a motor vehicle. The classification system uses a set of AI modules NNi, e.g., trained neural networks, as candidates for later execution in a specific environment. The AI modules NNi are provided for the same task, e.g., an object recognition or a semantic segmentation, but differ in terms of architecture, training data, and training parameters.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 4 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 4 recites the limitation "the output of the plurality of AI modules" in line 2. There is insufficient antecedent basis for this limitation in the claim. Specifically, it is unclear whether this output refers to the one output or the one or more outputs recited in claim 1. For purposes of examination, Examiner has interpreted this output to be the one or more outputs recited in claim 1.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 9 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim(s) does/do not fall within at least one of the four categories of patent eligible subject matter because the claim is directed to a storage medium (e.g. see claim 9, line 1), which includes a signal based on the broadest reasonable interpretation (i.e., the ordinary and customary meaning of a storage medium includes signals per se). While the Specification discloses a computer-readable storage medium (see page 6), the Specification does not limit the storage medium to only a non-transitory embodiment. A computer readable storage medium, or the like, that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments to avoid a rejection under 35 U.S.C. 101 by adding the limitation “non-transitory” to the claim and positively reciting that the computer readable medium is a non-transitory computer readable medium. See also In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007) (transitory embodiments are not directed to statutory subject matter). Examiner notes that if Applicant amends to overcome the signals per se rejection, claim 9 will still be rejected under 35 U.S.C. 101.

Claims 1-2, 4-6, 8-10, 12-17, and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding Claim 1:

Subject Matter Eligibility Analysis Step 1: Claim 1 recites a method and is thus a process, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 1 recites:

determining a functional quality for each of the two or more data points by comparing the one or more outputs of the plurality of AI modules for the data point with the associated ground truth, wherein the functional quality describes the quality of a given one of the plurality of AI module with respect to the object detection or the semantic segmentation (This limitation is a mental process as it encompasses a human mentally determining a functional quality by comparing outputs.)

associating at least a first of the plurality of AI modules with at least one predefined driving condition (This limitation is a mental process as it encompasses a human mentally associating an AI module with a driving condition.)

Therefore, claim 1 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 1 further recites additional elements of:

A method of automated driving using a library of a plurality of AI modules for processing input data provided by a sensor system of a motor vehicle (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).)

The processing comprising one or more of an object detection and a semantic segmentation for an automated driving function of the motor vehicle (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).)

Determining one or more outputs of the plurality of AI modules by applying the one or more AI modules to two or more data points from a test data set, wherein associated ground truths and contextual parameters are known for the two or more data points (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).)

Providing and training a classifier for the plurality of AI modules, wherein the classifier outputs an expected functional quality for at least one of the contextual parameters, with the contextual parameters and the functional quality of each of the two or more data points (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).)

Selectively using the first AI module of the plurality of AI modules for environment recognition during automated driving of the motor vehicle when a current driving condition of the motor vehicle matches the at least one predefined driving condition (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)

Therefore, claim 1 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements of claim 1 do not provide significantly more than the abstract idea itself, taken alone and in combination, because:

A method of automated driving using a library of a plurality of AI modules for processing input data provided by a sensor system of a motor vehicle is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)).

The processing comprising one or more of an object detection and a semantic segmentation for an automated driving function of the motor vehicle specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)).

Determining one or more outputs of the plurality of AI modules by applying the one or more AI modules to two or more data points from a test data set, wherein associated ground truths and contextual parameters are known for the two or more data points is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)).

Providing and training a classifier for the plurality of AI modules, wherein the classifier outputs an expected functional quality for at least one of the contextual parameters, with the contextual parameters and the functional quality of each of the two or more data points is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)).

Selectively using the first AI module of the plurality of AI modules for environment recognition during automated driving of the motor vehicle when a current driving condition of the motor vehicle matches the at least one predefined driving condition uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).

Therefore, claim 1 is subject-matter ineligible.

Regarding Claim 2:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 2 recites the same abstract ideas as claim 1. Therefore, claim 2 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 2 further recites an additional element of wherein at least one of the plurality of AI modules realizes an AI model or a family of AI models in the sense of an ensemble (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).) Therefore, claim 2 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional element of claim 2 does not provide significantly more than the abstract idea itself, taken alone and in combination, because wherein at least one of the plurality of AI modules realizes an AI model or a family of AI models in the sense of an ensemble specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)). Therefore, claim 2 is subject-matter ineligible.

Regarding Claim 4:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 4 recites wherein an IoU metric is used for comparing the output of the plurality of AI modules for the data point with the associated ground truth (This limitation is a mental process as it encompasses a human mentally comparing the output using an IoU metric.) Therefore, claim 4 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 4 does not recite any additional elements, and therefore, claim 4 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: Claim 4 does not recite additional elements and therefore does not provide significantly more than the abstract idea itself, taken alone and in combination. Therefore, claim 4 is subject-matter ineligible.

Regarding Claim 5:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 5 recites the same abstract ideas as claim 1. Therefore, claim 5 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 5 further recites an additional element of wherein the contextual parameters comprise properties in the context of the data points or properties of an architecture of the plurality AI modules (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).) Therefore, claim 5 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional element of claim 5 does not provide significantly more than the abstract idea itself, taken alone and in combination, because wherein the contextual parameters comprise properties in the context of the data points or properties of an architecture of the plurality AI modules specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)). Therefore, claim 5 is subject-matter ineligible.

Regarding Claim 6:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 6 recites the same abstract ideas as claim 1. Therefore, claim 6 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 6 further recites an additional element of wherein the classifier is formed by a neural network (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).) Therefore, claim 6 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional element of claim 6 does not provide significantly more than the abstract idea itself, taken alone and in combination, because wherein the classifier is formed by a neural network specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)). Therefore, claim 6 is subject-matter ineligible.

Regarding Claim 8:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 8 recites the same abstract ideas as claim 1. Therefore, claim 8 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 8 further recites an additional element of wherein different AI modules of the plurality of AI modules are adapted to different lighting conditions, different speeds, different vehicle environments, different driving situations, different environmental conditions, different driving conditions, or different objectives (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).) Therefore, claim 8 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional element of claim 8 does not provide significantly more than the abstract idea itself, taken alone and in combination, because wherein different AI modules of the plurality of AI modules are adapted to different lighting conditions, different speeds, different vehicle environments, different driving situations, different environmental conditions, different driving conditions, or different objectives specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)). Therefore, claim 8 is subject-matter ineligible.

Regarding Claim 9:

Subject Matter Eligibility Analysis Step 1: Claim 9 recites a storage medium and is thus not one of the four statutory categories (see above 101 rejection). However, if Applicant were to amend claim 9 to be in one of the four statutory categories, claim 9 would be rejected through the analysis as follows.
Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 9 recites:

determine a functional quality for each of the two or more data points by comparing the one or more outputs of the plurality of AI modules for the data point with the associated ground truth, wherein the functional quality describes the quality of a given one of the plurality of AI module with respect to the object detection or the semantic segmentation (This limitation is a mental process as it encompasses a human mentally determining a functional quality by comparing outputs.)

associate at least a first of the plurality of AI modules with at least one predefined driving condition (This limitation is a mental process as it encompasses a human mentally associating an AI module with a driving condition.)

Therefore, claim 9 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 9 further recites additional elements of:

A storage medium comprising instructions which, when executed by a computer, cause the computer to: (This element does not integrate the abstract idea into a practical application because it recites generic computing components on which to perform the abstract idea (see MPEP 2106.05(f)).)

Determine one or more outputs of the plurality of AI modules by applying the one or more AI modules to two or more data points from a test data set, wherein associated ground truths and contextual parameters are known for the two or more data points (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).)

Provide and train a classifier for the plurality of AI modules, wherein the classifier outputs an expected functional quality for at least one of the contextual parameters, with the contextual parameters and the functional quality of each of the two or more data points (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).)

Selectively use the first AI module of the plurality of AI modules for environment recognition during automated driving of the motor vehicle when a current driving condition of the motor vehicle matches the at least one predefined driving condition (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)

Therefore, claim 9 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements of claim 9 do not provide significantly more than the abstract idea itself, taken alone and in combination, because:

A storage medium comprising instructions which, when executed by a computer, cause the computer to perform the recited steps recites generic computing components used as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).

Determine one or more outputs of the plurality of AI modules by applying the one or more AI modules to two or more data points from a test data set, wherein associated ground truths and contextual parameters are known for the two or more data points is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)).

Provide and train a classifier for the plurality of AI modules, wherein the classifier outputs an expected functional quality for at least one of the contextual parameters, with the contextual parameters and the functional quality of each of the two or more data points is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)).

Selectively use the first AI module of the plurality of AI modules for environment recognition during automated driving of the motor vehicle when a current driving condition of the motor vehicle matches the at least one predefined driving condition uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).

Therefore, claim 9 is subject-matter ineligible.

Regarding Claim 10:

Subject Matter Eligibility Analysis Step 1: Claim 10 recites a device and is thus a product, one of the four statutory categories of patentable subject matter.

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 10 recites:

determining a functional quality for each of the two or more data points by comparing the one or more outputs of the plurality of AI modules for the data point with the associated ground truth, wherein the functional quality describes the quality of a given one of the plurality of AI module with respect to the object detection or the semantic segmentation (This limitation is a mental process as it encompasses a human mentally determining a functional quality by comparing outputs.)

associating at least a first of the plurality of AI modules with at least one predefined driving condition (This limitation is a mental process as it encompasses a human mentally associating an AI module with a driving condition.)

Therefore, claim 10 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 10 further recites additional elements of a device for providing a classifier for a plurality of AI modules for processing input data provided by a sensor system of a motor vehicle during automated driving (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).) The processing comprising one or more of an object detection and a semantic segmentation for an automated driving function of the motor vehicle (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).) A test circuit for determining one or more outputs of the plurality of AI modules by applying the one or more AI modules to two or more data points from a test data set, wherein associated ground truths and contextual parameters are known for the two or more data points (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).) An evaluation circuit for… (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).) training a classifier for the plurality of AI modules, wherein the classifier outputs an expected functional quality for at least one of the contextual parameters, with the contextual parameters and the functional quality of each of the two or more data points (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).) Therefore, claim 10 is not integrated into a practical application. 
Subject Matter Eligibility Analysis Step 2B: The additional elements of claim 10 do not provide significantly more than the abstract idea itself, taken alone and in combination because a device for providing a classifier for a plurality of AI modules for processing input data provided by a sensor system of a motor vehicle during automated driving is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)). The processing comprising one or more of an object detection and a semantic segmentation for an automated driving function of the motor vehicle specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)). A test circuit for determining one or more outputs of the plurality of AI modules by applying the one or more AI modules to two or more data points from a test data set, wherein associated ground truths and contextual parameters are known for the two or more data points is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)). An evaluation circuit for… uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). training a classifier for the plurality of AI modules, wherein the classifier outputs an expected functional quality for at least one of the contextual parameters, with the contextual parameters and the functional quality of each of the two or more data points uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). 
Therefore, claim 10 is subject-matter ineligible. Regarding Claim 12: Subject Matter Eligibility Analysis Step 1: Claim 12 recites a method and is thus a process, one of the four statutory categories of patentable subject matter. Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 12 recites determining a current driving condition of the motor vehicle (This limitation is a mental process as it encompasses a human mentally determining a driving condition.) selecting a first AI module from the plurality of AI modules to be used for the input data or a combination of first AI modules and associated weights to be used for the automated driving function of the motor vehicle in dependence of whether the current driving condition corresponds to a predefined driving condition, associated with the first AI module (This limitation is a mental process as it encompasses a human mentally selecting an AI module.) Therefore, claim 12 recites an abstract idea. Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 12 further recites additional elements of a method for configuring a control system of an at least partially automated motor vehicle with a library of a plurality of AI modules for processing input data provided by a sensor system of the motor vehicle (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).) the processing comprising one or more of an object detection and a semantic segmentation for an automated driving function of the motor vehicle (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).) 
acquiring input data to be processed by at least one of the plurality of AI modules (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).) Therefore, claim 12 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements of claim 12, taken alone and in combination, do not provide significantly more than the abstract idea itself. A method for configuring a control system of an at least partially automated motor vehicle with a library of a plurality of AI modules for processing input data provided by a sensor system of the motor vehicle is the well-understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)). The processing comprising one or more of an object detection and a semantic segmentation for an automated driving function of the motor vehicle specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)). Acquiring input data to be processed by at least one of the plurality of AI modules is the well-understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)). Therefore, claim 12 is subject-matter ineligible.

Regarding Claim 13:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 13 recites the same abstract ideas as claim 1. Therefore, claim 13 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 13 further recites an additional element of wherein the AI modules are set up to perform an environment detection for the automatic driving function of a motor vehicle (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).) Therefore, claim 13 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional element of claim 13 does not provide significantly more than the abstract idea itself, taken alone and in combination, because wherein the AI modules are set up to perform an environment detection for the automatic driving function of a motor vehicle specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)). Therefore, claim 13 is subject-matter ineligible.

Regarding Claim 14:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 14 recites the same abstract ideas as claim 12. Therefore, claim 14 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 14 further recites an additional element of wherein different AI modules of the plurality of AI modules are adapted to different lighting conditions, different speeds, different vehicle environments, different driving situations, different environmental conditions, different driving conditions, or different objectives (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).) Therefore, claim 14 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B: The additional element of claim 14 does not provide significantly more than the abstract idea itself, taken alone and in combination, because wherein different AI modules of the plurality of AI modules are adapted to different lighting conditions, different speeds, different vehicle environments, different driving situations, different environmental conditions, different driving conditions, or different objectives specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)). Therefore, claim 14 is subject-matter ineligible.

Regarding Claim 15:

Subject Matter Eligibility Analysis Step 1: Claim 15 recites a non-transitory storage medium and is thus an article of manufacture, one of the four statutory categories of patentable subject matter.

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 15 recites determine a current driving condition of the motor vehicle (This limitation is a mental process as it encompasses a human mentally determining a driving condition.) and select a first AI module from the plurality of AI modules to be used for the input data or a combination of first AI modules and associated weights to be used for the automated driving function of the motor vehicle in dependence of whether the current driving condition corresponds to a predefined driving condition, associated with the first AI module (This limitation is a mental process as it encompasses a human mentally selecting an AI module.) Therefore, claim 15 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 15 further recites additional elements of a non-transitory storage medium comprising instructions for configuring a control system of an at least partially automated motor vehicle for processing input data provided by a sensor system of the motor vehicle (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).) the processing comprising one or more of an object detection and a semantic segmentation for an automated driving function of the motor vehicle (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).) acquire input data to be processed by at least one of the plurality of AI modules (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).) Therefore, claim 15 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements of claim 15, taken alone and in combination, do not provide significantly more than the abstract idea itself. A non-transitory storage medium comprising instructions for configuring a control system of an at least partially automated motor vehicle for processing input data provided by a sensor system of the motor vehicle uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). The processing comprising one or more of an object detection and a semantic segmentation for an automated driving function of the motor vehicle specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)).
Acquire input data to be processed by at least one of the plurality of AI modules is the well-understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)). Therefore, claim 15 is subject-matter ineligible.

Regarding Claim 16:

Subject Matter Eligibility Analysis Step 1: Claim 16 recites a device and is thus an apparatus, one of the four statutory categories of patentable subject matter.

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 16 recites determine a current driving condition of the motor vehicle (This limitation is a mental process as it encompasses a human mentally determining a driving condition.) and selecting a first AI module from the plurality of AI modules to be used for the input data or a combination of first AI modules and associated weights to be used for the automated driving function of the motor vehicle in dependence of whether the current driving condition corresponds to a predefined driving condition, associated with the first AI module (This limitation is a mental process as it encompasses a human mentally selecting an AI module.) Therefore, claim 16 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 16 further recites additional elements of a device for configuring a control system of an at least partially automated motor vehicle with a library of a plurality of AI modules for processing input data provided by a sensor system of the motor vehicle (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).)
the processing comprising one or more of an object detection and a semantic segmentation for an automated driving function of the motor vehicle (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).) a data circuit for capturing input data to be processed by at least one of the plurality of AI modules (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).) a sensor to… (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).) an evaluation circuit for… (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).) Therefore, claim 16 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements of claim 16, taken alone and in combination, do not provide significantly more than the abstract idea itself. A device for configuring a control system of an at least partially automated motor vehicle with a library of a plurality of AI modules for processing input data provided by a sensor system of the motor vehicle is the well-understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)). The processing comprising one or more of an object detection and a semantic segmentation for an automated driving function of the motor vehicle specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)).
A data circuit for capturing input data to be processed by at least one of the plurality of AI modules is the well-understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)). A sensor to… uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). An evaluation circuit for… uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). Therefore, claim 16 is subject-matter ineligible.

Regarding Claim 17:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 17 recites the same abstract ideas as claim 16. Therefore, claim 17 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 17 further recites an additional element of a motor vehicle, wherein the motor vehicle comprises a device according to claim 16 (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).) Therefore, claim 17 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional element of claim 17 does not provide significantly more than the abstract idea itself, taken alone and in combination, because a motor vehicle, wherein the motor vehicle comprises a device according to claim 16, specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)). Therefore, claim 17 is subject-matter ineligible.

Regarding Claim 20:

Subject Matter Eligibility Analysis Step 1: Claim 20 recites a motor vehicle and is thus an article of manufacture, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 20 recites determine a current driving condition of the motor vehicle (This limitation is a mental process as it encompasses a human mentally determining a driving condition.) and select a first AI module from the plurality of AI modules to be used for the input data or a combination of first AI modules and associated weights to be used for the automated driving function of the motor vehicle in dependence of whether the current driving condition corresponds to a predefined driving condition, associated with the first AI module (This limitation is a mental process as it encompasses a human mentally selecting an AI module.) Therefore, claim 20 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 20 further recites additional elements of a motor vehicle, wherein the motor vehicle is set up to: (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).) acquire input data to be processed by at least one of the plurality of AI modules (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).) Therefore, claim 20 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements of claim 20, taken alone and in combination, do not provide significantly more than the abstract idea itself. A motor vehicle, wherein the motor vehicle is set up to: uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). Acquire input data to be processed by at least one of the plurality of AI modules is the well-understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc. v.
Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)). Therefore, claim 20 is subject-matter ineligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-2, 5-6, 8-10, 12-17, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Dronen et al. (US 2019/0295003 A1) (hereafter referred to as Dronen).
Regarding claim 1, Dronen teaches A method of automated driving using a library of a plurality of AI modules for processing input data provided by a sensor system of a motor vehicle, the processing comprising one or more of an object detection and a semantic segmentation for an automated driving function of the motor vehicle (Dronen, page 10, paragraph 0003, “According to one embodiment, a computer-implemented method comprises processing, by an in-vehicle feature detection device, sensor data collected by a vehicle to output a detected feature and a confidence metric for the detected feature” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the feature detection device is the object detection and the library of a plurality of AI modules is the one or more feature detection models.), comprising: determining one or more outputs of the plurality of AI modules by applying the one or more AI modules to two or more data points from a test data set, wherein associated ground truths and contextual parameters are known for the two or more data points (Dronen, page 15, paragraph 0057, “during training, the mapping platform 109 uses a learner module that feeds feature sets from the labeled sensor data set into the feature detection model to compute a predicted matching feature using an initial set of model parameters. The learner module then compares the predicted matching probability and the predicted feature to the ground truth data … in the labeled sensor data set” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). 
Examiner notes that the labeled sensor data set is the two or more data points from a test data set and the initial set of model parameters are the contextual parameters. Examiner further notes that the plurality of AI modules is the one or more feature detection models.) determining a functional quality for each of the two or more data points by comparing the one or more outputs of the plurality of AI modules for the data point with the associated ground truth, wherein the functional quality describes the quality of a given one of the plurality of AI modules with respect to the object detection or the semantic segmentation (Dronen, page 15, paragraph 0057, “during training, the mapping platform 109 uses a learner module that feeds feature sets from the labeled sensor data into the feature detection model to compute a predicted matching feature using an initial set of model parameters. The learner module then compares the predicted matching probability and the predicted feature to the ground truth data…in the labeled sensor data set. The learner module then computes an accuracy of the predictions for the initial set of model parameters” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the predicted matching probability and the predicted feature are the outputs. 
Examiner further notes that the labeled sensor data set is the two or more data points and the functional quality is the accuracy of the predictions.); and providing and training a classifier for the plurality of AI modules, wherein the classifier outputs an expected functional quality for at least one of the contextual parameters, with the contextual parameters and the functional quality of each of the two or more data points (Dronen, page 15, paragraph 0057, “During training, the mapping platform 109 uses a learner module that feeds feature sets from the labeled sensor data set into the feature detection model to compute a predicted matching feature using an initial set of model parameters. The learner module then compares the predicted matching probability and the predicted feature to the ground truth data…in the labeled sensor data set. The learner module then computes an accuracy of the predictions for the initial set of model parameters” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the learner module, or classifier, outputs the accuracy which is the expected functional quality for the feature sets or the contextual parameters. Examiner further notes that the labeled sensor data set is the two or more data points from a test data set and the initial set of model parameters are the contextual parameters.) associating at least a first of the plurality of AI modules with at least one predefined driving condition (Dronen, page 14, paragraph 0046, “the feature detection device 105 can determine whether a privacy law or policy that restricts transmission of the sensor data is implemented at the vehicle 101 or the feature detection device 105 itself (step307). 
If such a privacy policy is implemented, the feature detection device 105 transmits other information associated with the sensor data from the vehicle to the external server in place of the sensor data based on determining that the confidence metric is below a confidence threshold. By way of example, the other information includes contextual information associated with a collection of the sensor data, the vehicle, an environment surrounding the vehicle, or a combination thereof (step 309). In other words, instead of transmitting raw sensor data that may be prohibited by applicable privacy policies or laws, the feature detection device 105 can record environment or other conditions associated with capturing sensor data that was below the confidence threshold. In this way, the mapping platform 109 can identify the types of conditions can lead to poor feature detection accuracy, and request sensor data (e.g., from other sources) falling within those conditions to create or re-train feature prediction models for the feature detection device” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the predefined driving condition is when privacy laws are in place.); selectively using the first AI module of the plurality of AI modules for environment recognition during automated driving of the motor vehicle when a current driving condition of the motor vehicle matches at least one predefined driving condition (Dronen, page 14, paragraph 0046, “the feature detection device 105 can determine whether a privacy law or policy that restricts transmission of the sensor data is implemented at the vehicle 101 or the feature detection device 105 itself (step307). 
If such a privacy policy is implemented, the feature detection device 105 transmits other information associated with the sensor data from the vehicle to the external server in place of the sensor data based on determining that the confidence metric is below a confidence threshold. By way of example, the other information includes contextual information associated with a collection of the sensor data, the vehicle, an environment surrounding the vehicle, or a combination thereof (step 309). In other words, instead of transmitting raw sensor data that may be prohibited by applicable privacy policies or laws, the feature detection device 105 can record environment or other conditions associated with capturing sensor data that was below the confidence threshold. In this way, the mapping platform 109 can identify the types of conditions can lead to poor feature detection accuracy, and request sensor data (e.g., from other sources) falling within those conditions to create or re-train feature prediction models for the feature detection device” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the predefined driving condition is when privacy laws are in place. Examiner further notes that the first AI module is selectively used to avoid breaking the privacy laws when the driving condition of privacy laws and policies is met. Examiner notes that using alternative environmental data when the confidence threshold was not met is selectively using the AI module for environmental recognition.).
Regarding claim 2, Dronen teaches The method of claim 1, wherein at least one of the AI modules realizes an AI model or a family of AI models in the sense of an ensemble (Dronen, page 15, paragraph 0057, “the mapping platform 109 can incorporate a supervised learning model (e.g., a logistic regression model, RandomForest model, and/or any equivalent model) to provide feature matching probabilities or statistical patterns that are learned from the labeled sensor data set. For example, during training, the mapping platform 109 uses a learner module that feeds feature sets from the labeled sensor data into the feature detection model to compute a predicted matching feature using an initial set of model parameters.” Examiner notes that the learner module is a supervised learning model or AI model.).

Regarding claim 5, Dronen teaches The method of claim 1, wherein the contextual parameters comprise properties in the context of the data points or properties of an architecture of the plurality of AI modules (Dronen, page 15, paragraph 0057, “during training, the mapping platform 109 uses a learner module that feeds feature sets from the labeled sensor data set into the feature detection model to compute a predicted matching feature using an initial set of model parameters. The learner module then compares the predicted matching probability and the predicted feature to the ground truth data … in the labeled sensor data set” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the initial set of model parameters are properties of an architecture of the AI module.).
Regarding claim 6, Dronen teaches The method of claim 1, wherein the classifier is formed by a neural network (Dronen, page 12, paragraph 0032, “In one embodiment, the feature detection device 105 uses a trained feature detection model (e.g., SVM, neural network, etc.) to process the sensor input 201 to detect the road object 103.” Examiner notes that the classifier is the feature detection model.).

Regarding claim 8, Dronen teaches The method of claim 1, wherein different AI modules of the plurality of AI modules are adapted to different lighting conditions, different speeds, different vehicle environments, different driving situations, different environmental conditions, different driving conditions, or different objectives (Dronen, page 15, paragraph 0058, “After the feature detection model is created or re-trained, the mapping platform 109 deploys the feature detection model to the in-vehicle feature detection device 105 of the vehicle 101 to replace an initial feature detection model used by the in-vehicle feature detection device 105” where “the system 100 of FIG. 1 introduces a capability to determine which data captured by a vehicle is most likely to improve the feature detection devices 105 that can potentially be fundamental to the creation and maintenance of a map of the driving environment” (Dronen, page 12, paragraph 0030). Examiner notes that the feature detection models, which are part of the AI modules, each learn different data for different vehicle environments. Examiner also notes that the re-trained feature detection model is a different model using different data.).
Regarding claim 9, Dronen teaches A storage medium comprising instructions which, when executed by a computer, cause the computer to (Dronen, page 10, paragraph 0005, “According to another embodiment, a non-transitory computer-readable storage medium carries one or more sequences of one or more instructions which, when executed by one or more processors, cause, at least in part, an apparatus to process, by an in-vehicle feature detection device, sensor data collected by a vehicle to output a detected feature and a confidence metric for the detected feature.”): Determine one or more outputs of the plurality of AI modules by applying the one or more AI modules to two or more data points from a test data set, wherein associated ground truths and contextual parameters are known for the two or more data points (Dronen, page 15, paragraph 0057, “during training, the mapping platform 109 uses a learner module that feeds feature sets from the labeled sensor data set into the feature detection model to compute a predicted matching feature using an initial set of model parameters. The learner module then compares the predicted matching probability and the predicted feature to the ground truth data … in the labeled sensor data set” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the labeled sensor data set is the two or more data points from a test data set and the initial set of model parameters are the contextual parameters. Examiner further notes that the plurality of AI modules is the one or more feature detection models.)
determine a functional quality for each of the two or more data points by comparing the one or more outputs of the plurality of AI modules for the data point with the associated ground truth, wherein the functional quality describes the quality of a given one of the plurality of AI modules with respect to the object detection or the semantic segmentation (Dronen, page 15, paragraph 0057, “during training, the mapping platform 109 uses a learner module that feeds feature sets from the labeled sensor data into the feature detection model to compute a predicted matching feature using an initial set of model parameters. The learner module then compares the predicted matching probability and the predicted feature to the ground truth data…in the labeled sensor data set. The learner module then computes an accuracy of the predictions for the initial set of model parameters” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the predicted matching probability and the predicted feature are the outputs. Examiner further notes that the labeled sensor data set is the two or more data points and the functional quality is the accuracy of the predictions.); and provide and train a classifier for the plurality of AI modules, wherein the classifier outputs an expected functional quality for at least one of the contextual parameters, with the contextual parameters and the functional quality of each of the two or more data points (Dronen, page 15, paragraph 0057, “During training, the mapping platform 109 uses a learner module that feeds feature sets from the labeled sensor data set into the feature detection model to compute a predicted matching feature using an initial set of model parameters. 
The learner module then compares the predicted matching probability and the predicted feature to the ground truth data…in the labeled sensor data set. The learner module then computes an accuracy of the predictions for the initial set of model parameters” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the learner module, or classifier, outputs the accuracy which is the expected functional quality for the feature sets or the contextual parameters. Examiner further notes that the labeled sensor data set is the two or more data points from a test data set and the initial set of model parameters are the contextual parameters.) associate at least a first of the plurality of AI modules with at least one predefined driving condition (Dronen, page 14, paragraph 0046, “the feature detection device 105 can determine whether a privacy law or policy that restricts transmission of the sensor data is implemented at the vehicle 101 or the feature detection device 105 itself (step307). If such a privacy policy is implemented, the feature detection device 105 transmits other information associated with the sensor data from the vehicle to the external server in place of the sensor data based on determining that the confidence metric is below a confidence threshold. By way of example, the other information includes contextual information associated with a collection of the sensor data, the vehicle, an environment surrounding the vehicle, or a combination thereof (step 309). In other words, instead of transmitting raw sensor data that may be prohibited by applicable privacy policies or laws, the feature detection device 105 can record environment or other conditions associated with capturing sensor data that was below the confidence threshold. 
In this way, the mapping platform 109 can identify the types of conditions can lead to poor feature detection accuracy, and request sensor data (e.g., from other sources) falling within those conditions to create or re-train feature prediction models for the feature detection device” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the predefined driving condition is when privacy laws are in place.); selectively use the first AI module of the plurality of AI modules for environment recognition during automated driving of the motor vehicle when a current driving condition of the motor vehicle matches at least one predefined driving condition (Dronen, page 14, paragraph 0046, “the feature detection device 105 can determine whether a privacy law or policy that restricts transmission of the sensor data is implemented at the vehicle 101 or the feature detection device 105 itself (step307). If such a privacy policy is implemented, the feature detection device 105 transmits other information associated with the sensor data from the vehicle to the external server in place of the sensor data based on determining that the confidence metric is below a confidence threshold. By way of example, the other information includes contextual information associated with a collection of the sensor data, the vehicle, an environment surrounding the vehicle, or a combination thereof (step 309). In other words, instead of transmitting raw sensor data that may be prohibited by applicable privacy policies or laws, the feature detection device 105 can record environment or other conditions associated with capturing sensor data that was below the confidence threshold. 
In this way, the mapping platform 109 can identify the types of conditions can lead to poor feature detection accuracy, and request sensor data (e.g., from other sources) falling within those conditions to create or re-train feature prediction models for the feature detection device” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the predefined driving condition is when privacy laws are in place. Examiner further notes that the first AI module is selectively used to avoid breaking the privacy laws when the driving condition of privacy laws and policies is met. Examiner notes that using alternative environmental data when the confidence threshold was not met is selectively using the AI module for environmental recognition.). Regarding claim 10, Dronen teaches A device for providing a classifier for a plurality of AI modules for processing input data provided by a sensor system of a motor vehicle during automated driving (Dronen, page 10, paragraph 0003, “According to one embodiment, a computer-implemented method comprises processing, by an in-vehicle feature detection device, sensor data collected by a vehicle to output a detected feature and a confidence metric for the detected feature” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059) and “This can make a feature detection device 105 useful for automating the creation and maintenance of environment models for automated driving systems” (Dronen, page 12, paragraph 0028).
Examiner notes that the feature detection is the object detection and the plurality of AI modules is the one or more feature detection models.), comprising: A test circuit for determining one or more outputs of the AI module by causing the AI module to be applied to two or more data points from a test data set, wherein associated ground truths and contextual parameters are known for the two or more data points (Dronen, page 15, paragraph 0057, “during training, the mapping platform 109 uses a learner module that feeds feature sets from the labeled sensor data set into the feature detection model to compute a predicted matching feature using an initial set of model parameters. The learner module then compares the predicted matching probability and the predicted feature to the ground truth data … in the labeled sensor data set” where “The processes described herein for providing in-vehicle data selection for feature detection model creation and maintenance may be advantageously implemented via software, hardware (e.g., general processor, digital signal processing (DSP) chip, an Application Specific Integrated Circuit (ASIC), Field Programmable Gate Arrays (FPGAs) etc.), firmware or a combination thereof” (Dronen, page 19, paragraph 0092) where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the labeled sensor data set is the two or more data points and the initial set of model parameters are the contextual parameters. Examiner further notes that the plurality of AI modules is the one or more feature detection models.)
; and An evaluation circuit for determining a functional quality for each of the two or more data points by comparing the one or more outputs of the plurality of AI modules for each of the two or more data points with the associated ground truth, wherein the functional quality describes the quality of a given one of the plurality of AI modules with respect to the object detection or the semantic segmentation (Dronen, page 15, paragraph 0057, “the learner module then compares the predicted matching probability and the predicted feature to the ground truth data…in the labeled sensor data set. The learner module then computes an accuracy of the predictions for the initial set of model parameters” and “The processor 703 may also be accompanied with one or more specialized components to perform certain processing functions and tasks such as one or more digital signal processors (DSP) 707, or one or more application-specific integrated circuits” (Dronen, page 21, paragraph 0102) where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the predicted matching probability and the predicted feature are the outputs. Examiner further notes that the labeled sensor data set is the two or more data points and the functional quality is the accuracy of the predictions.)
for training the classifier for the plurality of AI modules, wherein the classifier outputs an expected functional quality for at least one of the contextual parameter and for at least one AI module of the plurality of AI modules, with the contextual parameters and the determined functional quality of each of the two or more data points (Dronen, page 15, paragraph 0057, “During training, the mapping platform 109 uses a learner module that feeds feature sets from the labeled sensor data set into the feature detection model to compute a predicted matching feature using an initial set of model parameters. The learner module then compares the predicted matching probability and the predicted feature to the ground truth data…in the labeled sensor data set. The learner module then computes an accuracy of the predictions for the initial set of model parameters” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the learner module, or classifier, outputs the accuracy which is the expected functional quality for the feature sets or the contextual parameters. Examiner further notes that the labeled sensor data set is the two or more data points from a test data set and the initial set of model parameters are the contextual parameters.) and for associating at least a first of the plurality of AI modules with at least one predefined driving condition (Dronen, page 14, paragraph 0046, “the feature detection device 105 can determine whether a privacy law or policy that restricts transmission of the sensor data is implemented at the vehicle 101 or the feature detection device 105 itself (step307). 
If such a privacy policy is implemented, the feature detection device 105 transmits other information associated with the sensor data from the vehicle to the external server in place of the sensor data based on determining that the confidence metric is below a confidence threshold. By way of example, the other information includes contextual information associated with a collection of the sensor data, the vehicle, an environment surrounding the vehicle, or a combination thereof (step 309). In other words, instead of transmitting raw sensor data that may be prohibited by applicable privacy policies or laws, the feature detection device 105 can record environment or other conditions associated with capturing sensor data that was below the confidence threshold. In this way, the mapping platform 109 can identify the types of conditions can lead to poor feature detection accuracy, and request sensor data (e.g., from other sources) falling within those conditions to create or re-train feature prediction models for the feature detection device” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the predefined driving condition is when privacy laws are in place.). 
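For orientation, the training procedure recited in claims 9 and 10 — apply the AI module to test data points with known ground truths and contextual parameters, compute a per-data-point functional quality by comparison with the ground truth, then train a classifier that outputs an expected functional quality per contextual parameter — can be sketched as follows. This is a minimal illustration only; all names, the 0/1 quality metric, and the lookup-table classifier are hypothetical and appear in neither the claims nor Dronen.

```python
# Illustrative sketch of the claimed training loop (hypothetical names).

def functional_quality(output, ground_truth):
    """Compare a module output to the known ground truth.

    A simple 0/1 match score stands in for whatever metric the claims
    contemplate (e.g., accuracy for object detection, IoU for
    semantic segmentation).
    """
    return 1.0 if output == ground_truth else 0.0

def train_quality_classifier(ai_module, test_set):
    """Fit a lookup 'classifier' mapping each contextual parameter value
    to the expected functional quality of the module in that context.

    test_set: list of (data_point, ground_truth, context) tuples.
    """
    sums, counts = {}, {}
    for data_point, truth, context in test_set:
        q = functional_quality(ai_module(data_point), truth)
        sums[context] = sums.get(context, 0.0) + q
        counts[context] = counts.get(context, 0) + 1
    # Expected functional quality per contextual parameter value.
    return {c: sums[c] / counts[c] for c in sums}

# Toy AI module: flags a sensor reading as an obstacle above a threshold.
module = lambda x: "obstacle" if x > 0.5 else "free"
test_set = [
    (0.9, "obstacle", "night"),
    (0.6, "free", "night"),      # module is wrong on this point
    (0.2, "free", "day"),
    (0.8, "obstacle", "day"),
]
classifier = train_quality_classifier(module, test_set)
print(classifier)  # {'night': 0.5, 'day': 1.0}
```

A downstream system could then consult such a classifier to predict how well a given module will perform under the current context before relying on its output.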
Regarding claim 12, Dronen teaches A method for configuring a control system of an at least partially automated motor vehicle with a library of a plurality of AI modules for processing input data provided by a sensor system of the motor vehicle, the processing comprising one or more of an object detection and a semantic segmentation for an automated driving function of the motor vehicle (Dronen, page 13, paragraph 0039, “To facilitate this process, a vehicle 101 can not only collect data for real-time sensing for its own purposes, but also can contribute the data to the sensor database 113 so that improved feature detection models can be created or maintained” where “As the sensor data is collected, the feature detection device 105 (e.g., an in-vehicle feature detector of a vehicle 101) processes sensor data collected by the vehicle 101 to output a detected feature and a confidence metric for the detected feature. In one embodiment, the feature detection device 105 uses a feature detection model to detect features and associated confidence metric. By way of example, feature detection models can include, but are not limited to, SVM or neural networks” (Dronen, page 14, paragraph 0041) where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059) and “This can make a feature detection device 105 useful for automating the creation and maintenance of environment models for automated driving systems” (Dronen, page 12, paragraph 0028). Examiner notes that the feature detection is the object detection and the plurality of AI modules is the one or more feature detection models.
) comprising: Acquiring input data to be processed by at least one of the AI modules (Dronen, page 14, paragraph 0041, “As the sensor data is collected, the feature detection device 105 (e.g., an in-vehicle feature detector of a vehicle 101) processes sensor data collected by the vehicle 101 to output a detected feature and a confidence metric for the detected feature. In one embodiment, the feature detection device 105 uses a feature detection model to detect features and associated confidence metric. By way of example, feature detection models can include, but are not limited to, SVM or neural networks” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059) and “This can make a feature detection device 105 useful for automating the creation and maintenance of environment models for automated driving systems”(Dronen, page 12, paragraph 0028). Examiner notes that the feature detection is the object detection and the plurality of AI modules is the one or more feature detection models.); Determining a current driving condition of the motor vehicle (Dronen, page 14, paragraph 0046, “the feature detection device 105 can determine whether a privacy law or policy that restricts transmission of the sensor data is implemented at the vehicle 101 or the feature detection device 105 itself (step307). If such a privacy policy is implemented, the feature detection device 105 transmits other information associated with the sensor data from the vehicle to the external server in place of the sensor data based on determining that the confidence metric is below a confidence threshold. 
By way of example, the other information includes contextual information associated with a collection of the sensor data, the vehicle, an environment surrounding the vehicle, or a combination thereof (step 309). In other words, instead of transmitting raw sensor data that may be prohibited by applicable privacy policies or laws, the feature detection device 105 can record environment or other conditions associated with capturing sensor data that was below the confidence threshold. In this way, the mapping platform 109 can identify the types of conditions can lead to poor feature detection accuracy, and request sensor data (e.g., from other sources) falling within those conditions to create or re-train feature prediction models for the feature detection device” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the current driving condition is whether privacy laws are in place.) selecting a first AI module from the plurality of AI modules to be used for the input data or a combination of first AI modules and associated weights to be used for the automated driving function of the motor vehicle in dependence of whether the current driving condition corresponds to a predefined driving condition associated with the first AI module (Dronen, page 14, paragraph 0046, “the feature detection device 105 can determine whether a privacy law or policy that restricts transmission of the sensor data is implemented at the vehicle 101 or the feature detection device 105 itself (step307).
If such a privacy policy is implemented, the feature detection device 105 transmits other information associated with the sensor data from the vehicle to the external server in place of the sensor data based on determining that the confidence metric is below a confidence threshold. By way of example, the other information includes contextual information associated with a collection of the sensor data, the vehicle, an environment surrounding the vehicle, or a combination thereof (step 309). In other words, instead of transmitting raw sensor data that may be prohibited by applicable privacy policies or laws, the feature detection device 105 can record environment or other conditions associated with capturing sensor data that was below the confidence threshold. In this way, the mapping platform 109 can identify the types of conditions can lead to poor feature detection accuracy, and request sensor data (e.g., from other sources) falling within those conditions to create or re-train feature prediction models for the feature detection device” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the predefined driving condition is when privacy laws are in place. Examiner further notes that the first AI module is selected by either retraining or creating the model used to avoid breaking the privacy laws when the driving condition of privacy laws and policies is met. Examiner further notes that the input data is the sensor data.). Regarding claim 13, Dronen teaches The method of claim 12, wherein the plurality of AI modules are set up to perform an environment detection for the automatic driving function of a motor vehicle (Dronen, page 11, paragraph 0028, “A prerequisite of the high automation of a vehicle’s driving function (e.g.
or a vehicle 101 as shown in FIG. 1) is that the vehicle has an accurate model of its environment” and “ real-time sensing of the environment provides information about potential obstacles, the behavior of others on the road, and safe, drivable areas” (Dronen, page 13, paragraph 0038) where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059).) Regarding claim 14, Dronen teaches The method of claim 12, wherein different AI modules of the plurality of AI modules are adapted to different lighting conditions, different speeds, different vehicle environments, different driving situations, different environmental conditions, different driving conditions, or different objectives (Dronen, page 15, paragraph 0058, “After the feature detection model is created or re-trained, the mapping platform 109 deploys the feature detection model to the in-vehicle feature detection device 105 of the vehicle 101 to replace an initial feature detection model used by the in-vehicle feature detection device 105” where “the system 100 of FIG. 1 introduces a capability to determine which data captured by a vehicle is most likely to improve the feature detection devices 105 that can potentially be fundamental to the creation and maintenance of a map of the driving environment” (Dronen, page 12, paragraph 0030). Examiner notes that the feature detection models which are parts of the AI modules, each learn different data for different vehicle environments. Examiner also notes that the re-trained feature detection model is a different model using different data.). 
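The selection step recited in claim 12 — use a particular AI module for the automated driving function when the current driving condition matches the predefined driving condition associated with that module — reduces to a simple dispatch, sketched below. This is a hypothetical illustration; the condition strings, module names, and fallback behavior are assumptions, not features of the claims or of Dronen.

```python
# Hypothetical sketch of condition-based AI module selection (claim 12).

def select_module(library, current_condition, default):
    """Return the AI module whose predefined driving condition matches
    the current driving condition, else a default module.

    library: list of (predefined_condition, ai_module) pairs.
    """
    for predefined, module in library:
        if current_condition == predefined:
            return module
    return default

# Toy modules standing in for, e.g., networks adapted to different
# lighting or weather conditions (cf. claim 14).
night_module = lambda frame: "night-detect:" + frame
rain_module = lambda frame: "rain-detect:" + frame
default_module = lambda frame: "detect:" + frame

library = [("night", night_module), ("rain", rain_module)]
chosen = select_module(library, "night", default_module)
print(chosen("frame42"))  # night-detect:frame42
```

In a real system the "current driving condition" would itself be derived from sensor data and context, and the match could be a predicate rather than string equality; equality is used here only to keep the sketch minimal.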
Regarding claim 15, Dronen teaches A non-transitory storage medium comprising instructions for configuring a control system of an at least partially automated motor vehicle for processing input data provided by a sensor system of the motor vehicle, the processing comprising one or more of an object detection and a semantic segmentation for an automated driving function of the motor vehicle (Dronen, page 10, paragraph 0005, “According to another embodiment, a non-transitory computer-readable storage medium carries one or more sequences of one or more instructions which, when executed by one or more processors, cause, at least in part, an apparatus to process, by an in-vehicle feature detection device, sensor data collected by a vehicle to output a detected feature and a confidence metric for the detected feature” where “As the sensor data is collected, the feature detection device 105 (e.g., an in-vehicle feature detector of a vehicle 101) processes sensor data collected by the vehicle 101 to output a detected feature and a confidence metric for the detected feature. In one embodiment, the feature detection device 105 uses a feature detection model to detect features and associated confidence metric. By way of example, feature detection models can include, but are not limited to, SVM or neural networks” (Dronen, page 14, paragraph 0041) where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059) and “This can make a feature detection device 105 useful for automating the creation and maintenance of environment models for automated driving systems”(Dronen, page 12, paragraph 0028). Examiner notes that the feature detection is the object detection and the plurality of AI modules is the one or more feature detection models.) 
the instructions that, when executed by a computer, cause the computer to: Acquire input data to be processed by at least one of the AI modules (Dronen, page 14, paragraph 0041, “As the sensor data is collected, the feature detection device 105 (e.g., an in-vehicle feature detector of a vehicle 101) processes sensor data collected by the vehicle 101 to output a detected feature and a confidence metric for the detected feature. In one embodiment, the feature detection device 105 uses a feature detection model to detect features and associated confidence metric. By way of example, feature detection models can include, but are not limited to, SVM or neural networks” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059) and “This can make a feature detection device 105 useful for automating the creation and maintenance of environment models for automated driving systems”(Dronen, page 12, paragraph 0028). Examiner notes that the feature detection is the object detection and the plurality of AI modules is the one or more feature detection models.); Determine a current driving condition of the motor vehicle (Dronen, page 14, paragraph 0046, “the feature detection device 105 can determine whether a privacy law or policy that restricts transmission of the sensor data is implemented at the vehicle 101 or the feature detection device 105 itself (step307). If such a privacy policy is implemented, the feature detection device 105 transmits other information associated with the sensor data from the vehicle to the external server in place of the sensor data based on determining that the confidence metric is below a confidence threshold. 
By way of example, the other information includes contextual information associated with a collection of the sensor data, the vehicle, an environment surrounding the vehicle, or a combination thereof (step 309). In other words, instead of transmitting raw sensor data that may be prohibited by applicable privacy policies or laws, the feature detection device 105 can record environment or other conditions associated with capturing sensor data that was below the confidence threshold. In this way, the mapping platform 109 can identify the types of conditions can lead to poor feature detection accuracy, and request sensor data (e.g., from other sources) falling within those conditions to create or re-train feature prediction models for the feature detection device” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the current driving condition is whether privacy laws are in place.) select a first AI module from the plurality of AI modules to be used for the input data or a combination of first AI modules and associated weights to be used for the automated driving function of the motor vehicle in dependence of whether the current driving condition corresponds to a predefined driving condition associated with the first AI module (Dronen, page 14, paragraph 0046, “the feature detection device 105 can determine whether a privacy law or policy that restricts transmission of the sensor data is implemented at the vehicle 101 or the feature detection device 105 itself (step307).
If such a privacy policy is implemented, the feature detection device 105 transmits other information associated with the sensor data from the vehicle to the external server in place of the sensor data based on determining that the confidence metric is below a confidence threshold. By way of example, the other information includes contextual information associated with a collection of the sensor data, the vehicle, an environment surrounding the vehicle, or a combination thereof (step 309). In other words, instead of transmitting raw sensor data that may be prohibited by applicable privacy policies or laws, the feature detection device 105 can record environment or other conditions associated with capturing sensor data that was below the confidence threshold. In this way, the mapping platform 109 can identify the types of conditions can lead to poor feature detection accuracy, and request sensor data (e.g., from other sources) falling within those conditions to create or re-train feature prediction models for the feature detection device” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the predefined driving condition is when privacy laws are in place. Examiner further notes that the first AI module is selected by either retraining or creating the model used to avoid breaking the privacy laws when the driving condition of privacy laws and policies is met. Examiner further notes that the input data is the sensor data.).
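Claims 12 and 15 also recite an alternative branch: using "a combination of first AI modules and associated weights" rather than a single selected module. A weighted blend of numeric module outputs is one plausible reading, sketched below; the normalization and the interpretation of the outputs as probabilities are assumptions of this illustration, not limitations found in the claims or in Dronen.

```python
# Illustrative sketch of combining AI modules with associated weights
# (hypothetical reading of the claims' alternative branch).

def weighted_combination(modules_and_weights, data_point):
    """Blend numeric module outputs by their normalized weights.

    modules_and_weights: list of (ai_module, weight) pairs.
    """
    total = sum(w for _, w in modules_and_weights)
    return sum(m(data_point) * w for m, w in modules_and_weights) / total

# Toy modules emitting, e.g., an obstacle probability for a data point.
m1 = lambda x: 0.8
m2 = lambda x: 0.4

# Weight m1 three times as heavily as m2: (0.8*3 + 0.4*1) / 4 = 0.7
print(weighted_combination([(m1, 3.0), (m2, 1.0)], None))  # 0.7
```

The weights could themselves come from a quality classifier of the kind recited in claims 9 and 10, so that modules expected to perform well under the current context dominate the blend.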
Regarding claim 16, Dronen teaches A device for configuring a control system of an at least partially automated motor vehicle with a library of a plurality of AI modules for processing input data provided by a sensor system of the motor vehicle, the processing comprising one or more of an object detection and a semantic segmentation for an automated driving function of the motor vehicle (Dronen, page 13, paragraph 0039, “To facilitate this process, a vehicle 101 can not only collect data for real-time sensing for its own purposes, but also can contribute the data to the sensor database 113 so that improved feature detection models can be created or maintained” where “As the sensor data is collected, the feature detection device 105 (e.g., an in-vehicle feature detector of a vehicle 101) processes sensor data collected by the vehicle 101 to output a detected feature and a confidence metric for the detected feature. In one embodiment, the feature detection device 105 uses a feature detection model to detect features and associated confidence metric. By way of example, feature detection models can include, but are not limited to, SVM or neural networks” (Dronen, page 14, paragraph 0041) where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059) and “This can make a feature detection device 105 useful for automating the creation and maintenance of environment models for automated driving systems”(Dronen, page 12, paragraph 0028). Examiner notes that the feature detection is the object detection and the plurality of AI modules is the one or more feature detection models.) 
comprising: A data circuit for capturing input data to be processed by at least one of the plurality of AI modules (Dronen, page 14, paragraph 0041, “As the sensor data is collected, the feature detection device 105 (e.g., an in-vehicle feature detector of a vehicle 101) processes sensor data collected by the vehicle 101 to output a detected feature and a confidence metric for the detected feature. In one embodiment, the feature detection device 105 uses a feature detection model to detect features and associated confidence metric. By way of example, feature detection models can include, but are not limited to, SVM or neural networks” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059) and “This can make a feature detection device 105 useful for automating the creation and maintenance of environment models for automated driving systems”(Dronen, page 12, paragraph 0028). Examiner notes that the feature detection is the object detection and the plurality of AI modules is the one or more feature detection models.); A sensor to determine a current driving condition of the motor vehicle (Dronen, page 14, paragraph 0046, “the feature detection device 105 can determine whether a privacy law or policy that restricts transmission of the sensor data is implemented at the vehicle 101 or the feature detection device 105 itself (step307). If such a privacy policy is implemented, the feature detection device 105 transmits other information associated with the sensor data from the vehicle to the external server in place of the sensor data based on determining that the confidence metric is below a confidence threshold. 
By way of example, the other information includes contextual information associated with a collection of the sensor data, the vehicle, an environment surrounding the vehicle, or a combination thereof (step 309). In other words, instead of transmitting raw sensor data that may be prohibited by applicable privacy policies or laws, the feature detection device 105 can record environment or other conditions associated with capturing sensor data that was below the confidence threshold. In this way, the mapping platform 109 can identify the types of conditions can lead to poor feature detection accuracy, and request sensor data (e.g., from other sources) falling within those conditions to create or re-train feature prediction models for the feature detection device” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the current driving condition is whether privacy laws are in place. Examiner further notes that the sensor is the vehicle.) 
An evaluation circuit for selecting a first AI module from the plurality of AI modules to be used for the input data or a combination of first AI modules and associated weights to be used for the automated driving function of the motor vehicle in dependence of whether the current driving condition corresponds to a predefined driving condition associated with the first AI module (Dronen, page 14, paragraph 0046, “the feature detection device 105 can determine whether a privacy law or policy that restricts transmission of the sensor data is implemented at the vehicle 101 or the feature detection device 105 itself (step307). If such a privacy policy is implemented, the feature detection device 105 transmits other information associated with the sensor data from the vehicle to the external server in place of the sensor data based on determining that the confidence metric is below a confidence threshold. By way of example, the other information includes contextual information associated with a collection of the sensor data, the vehicle, an environment surrounding the vehicle, or a combination thereof (step 309). In other words, instead of transmitting raw sensor data that may be prohibited by applicable privacy policies or laws, the feature detection device 105 can record environment or other conditions associated with capturing sensor data that was below the confidence threshold. 
In this way, the mapping platform 109 can identify the types of conditions can lead to poor feature detection accuracy, and request sensor data (e.g., from other sources) falling within those conditions to create or re-train feature prediction models for the feature detection device” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the predefined driving condition is when privacy laws are in place. Examiner further notes that the first AI module is selected by either retraining or creating the model used to avoid breaking the privacy laws when the driving condition of privacy laws and policies are met. Examiner further notes that the input data is the sensor data.). Regarding claim 17, Dronen teaches A motor vehicle wherein the motor vehicle comprises a device according to claim 16 (Dronen, page 10, paragraph 0005, “According to another embodiment, a non-transitory computer-readable storage medium carries one or more sequences of one or more instructions which, when executed by one or more processors, cause, at least in part, an apparatus to process, by an in-vehicle feature detection device, sensor data collected by a vehicle to output a detected feature and a confidence metric for the detected feature.”). 
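The selection step mapped above for claim 16 — choosing a first AI module, or a weighted combination of modules, when the current driving condition corresponds to a module's predefined condition — can be sketched as follows. This is an illustrative assumption only; the names `AIModule` and `select_modules`, the string-valued conditions, and the uniform weighting are not taken from the application or from Dronen:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Illustrative sketch of the claimed evaluation-circuit behavior: each AI
# module in the library carries a predefined driving condition, and the
# module (or a uniformly weighted combination of matching modules) is
# selected when the current driving condition corresponds to it.
@dataclass
class AIModule:
    name: str
    predefined_condition: str          # condition the module was qualified for
    infer: Callable[[object], object]  # e.g. object detection / segmentation

def select_modules(library: List[AIModule],
                   current_condition: str) -> List[Tuple[AIModule, float]]:
    """Return (module, weight) pairs whose predefined driving condition
    matches the current one; weights here are simply uniform."""
    matched = [m for m in library if m.predefined_condition == current_condition]
    return [(m, 1.0 / len(matched)) for m in matched]
```

For example, with one module qualified for "night" and two for "rain", `select_modules(library, "rain")` would return the two rain-qualified modules with weight 0.5 each, and an empty list for an unmatched condition.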
Regarding claim 20, Dronen teaches A motor vehicle (Dronen, page 10, paragraph 0005, “According to another embodiment, a non-transitory computer-readable storage medium carries one or more sequences of one or more instructions which, when executed by one or more processors, cause, at least in part, an apparatus to process, by an in-vehicle feature detection device, sensor data collected by a vehicle to output a detected feature and a confidence metric for the detected feature.”), wherein the motor vehicle is set up to: Acquire input data to be processed by at least one of the AI modules (Dronen, page 14, paragraph 0041, “As the sensor data is collected, the feature detection device 105 (e.g., an in-vehicle feature detector of a vehicle 101) processes sensor data collected by the vehicle 101 to output a detected feature and a confidence metric for the detected feature. In one embodiment, the feature detection device 105 uses a feature detection model to detect features and associated confidence metric. By way of example, feature detection models can include, but are not limited to, SVM or neural networks” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059) and “This can make a feature detection device 105 useful for automating the creation and maintenance of environment models for automated driving systems” (Dronen, page 12, paragraph 0028). 
Examiner notes that the feature detection is the object detection and the plurality of AI modules is the one or more feature detection models.); Determine a current driving condition of the motor vehicle (Dronen, page 14, paragraph 0046, “the feature detection device 105 can determine whether a privacy law or policy that restricts transmission of the sensor data is implemented at the vehicle 101 or the feature detection device 105 itself (step307). If such a privacy policy is implemented, the feature detection device 105 transmits other information associated with the sensor data from the vehicle to the external server in place of the sensor data based on determining that the confidence metric is below a confidence threshold. By way of example, the other information includes contextual information associated with a collection of the sensor data, the vehicle, an environment surrounding the vehicle, or a combination thereof (step 309). In other words, instead of transmitting raw sensor data that may be prohibited by applicable privacy policies or laws, the feature detection device 105 can record environment or other conditions associated with capturing sensor data that was below the confidence threshold. In this way, the mapping platform 109 can identify the types of conditions can lead to poor feature detection accuracy, and request sensor data (e.g., from other sources) falling within those conditions to create or re-train feature prediction models for the feature detection device” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the current driving condition is whether privacy laws are in place.) 
select a first AI module from the plurality of AI modules to be used for the input data or a combination of first AI modules and associated weights to be used for the automated driving function of the motor vehicle in dependence of whether the current driving condition corresponds to a predefined driving condition associated with the first AI module(Dronen, page 14, paragraph 0046, “the feature detection device 105 can determine whether a privacy law or policy that restricts transmission of the sensor data is implemented at the vehicle 101 or the feature detection device 105 itself (step307). If such a privacy policy is implemented, the feature detection device 105 transmits other information associated with the sensor data from the vehicle to the external server in place of the sensor data based on determining that the confidence metric is below a confidence threshold. By way of example, the other information includes contextual information associated with a collection of the sensor data, the vehicle, an environment surrounding the vehicle, or a combination thereof (step 309). In other words, instead of transmitting raw sensor data that may be prohibited by applicable privacy policies or laws, the feature detection device 105 can record environment or other conditions associated with capturing sensor data that was below the confidence threshold. In this way, the mapping platform 109 can identify the types of conditions can lead to poor feature detection accuracy, and request sensor data (e.g., from other sources) falling within those conditions to create or re-train feature prediction models for the feature detection device” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). 
Examiner notes that the predefined driving condition is when privacy laws are in place. Examiner further notes that the first AI module is selected by either retraining or creating the model used to avoid breaking the privacy laws when the driving condition of privacy laws and policies are met. Examiner further notes that the input data is the sensor data.).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. 
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dronen in view of Fidler et al. (US 2019/0294970 A1) (hereafter referred to as Fidler). Regarding claim 4, Dronen teaches the method of claim 3. Dronen does not teach, but Fidler does teach wherein an IoU metric is used for comparing the output of the plurality of AI module for the data point with the associated ground truth (Fidler, page 25, paragraph 0100, “Two quantitative measures are utilized to evaluate the model: 1) the intersection over union (‘IoU’) metric is used to evaluate the quality of the generated polygon” where “the performance is evaluated by computing Intersection-over-Union (IoU) of the predicted and ground truth masks” (Fidler, page 31, paragraph 0194) and where “we evaluate D8 and D9 by providing exact ground-truth boxes to their models”. Examiner notes that the output is the generated polygon.). Dronen and Fidler are analogous to the claimed invention because they teach devices to be used during automated driving which detects features in the surrounding environment. It would have been obvious to one having ordinary skill in the art before the effective filing date to have modified Dronen to use an IoU metric. Doing so allows for the model to “evaluate the quality of the generated polygon” (Fidler, page 25, paragraph 0100).

Response to Arguments

The previous claim objections have been overcome in light of the instant amendments. Examiner notes that new claim objections have been made in light of the instant amendments. The previous 112(b) rejections have been overcome in light of the instant amendments. 
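The IoU comparison cited from Fidler above can be illustrated with a minimal sketch. The set-of-pixels representation and the function name `iou` are assumptions chosen for clarity, not taken from either reference; real pipelines typically compute the same ratio over binary array masks:

```python
# Intersection-over-Union of a predicted region and its ground truth,
# each represented here as a set of pixel coordinates: |A ∩ B| / |A ∪ B|.
def iou(pred: set, truth: set) -> float:
    if not pred and not truth:
        return 1.0  # convention: two empty regions agree perfectly
    return len(pred & truth) / len(pred | truth)
```

A prediction sharing 1 of 3 total pixels with its ground truth scores 1/3; identical regions score 1.0, and disjoint regions score 0.0.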
Examiner notes that new 112(b) rejections have been made in light of the instant amendments.

On page 10-11, Applicant argues: As stated in the specification of the instant application, paragraph 33, when using AI modules, it should be noted that there is a dependency between the functional quality of an AI model and the data it processes. This dependence ensures that AI models are not good or bad per se, but have an environment-dependent functional quality. By using the solution according to the teachings herein, a meaningful description of AI models and their capabilities can be created in a test phase. In doing so, a list of the most meaningful properties can be determined as a metric for the performance of an AI module. The AI models can thus not only be better understood, but also be used in more diverse ways, for example as ensembles of expert models. The claimed subject matter of the independent claims thus provides an improvement in the field of automated driving, namely in that the functional quality of an AI module is evaluated in advance so that an AI module for environment recognition can be easily selected that performs well during specific given driving conditions.

Regarding the Applicant’s argument that the claims provide an improvement and thus a practical application, the Examiner respectfully disagrees. Specifically, Examiner respectfully notes that cited paragraph 33 (although paragraph numbers are not in the specification) sets forth a bare assertion of an improvement and therefore cannot provide an improvement (MPEP § 2106.04(d)(1)).

On page 11-12, Applicant argues: Applicant respectfully traverses and submits that Dronen does not anticipate the pending claims. To establish anticipation, each and every element in a claim, arranged as recited in the claim, must be found in a single prior art reference. Net MoneyIN, Inc. v. VeriSign, Inc., 545 F.3d 1359, 1369 (Fed. Cir. 2008). 
Each element of the challenged claim must be found, either expressly or inherently described, in a single prior art reference. Verdegaal Bros. v. Union Oil Co. of California, 814 F.2d 628, 631, 2 U.S.P.Q.2d 1051, 1053 (Fed. Cir. 1987). Furthermore, "the identical invention must be shown in as complete detail as is contained in the ... claim." Richardson v. Suzuki Motor Co., Ltd., 868 F.2d 1226, 1236, 9 U.S.P.Q.2d 1913, 1920 (Fed. Cir. 1989).

Regarding the Applicant’s argument that the prior art does not anticipate the claims, the Examiner respectfully disagrees. Examiner respectfully notes that the prior art anticipates the newly amended pending claims. Specifically, Dronen teaches a plurality of AI modules because “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the feature detection device is the object detection and the library of a plurality of AI modules is the one or more feature detection models. Examiner respectfully refers the Applicant to the 102 Rejection section above.

On page 12, Applicant argues: Applicant respectfully traverses and submits the proposed combinations, even if proper, which Applicant does not concede, do not render the pending claims obvious. To establish a prima facie case of obviousness, the references cited by the Examiner must disclose all claimed limitations. In re Royka, 490 F.2d 981, 180 U.S.P.Q. 580 (C.C.P.A. 1974). Even if each limitation is disclosed in a combination of references, however, a claim composed of several elements is not proved obvious merely by demonstrating that each of its elements was, independently, known in the prior art. KSR Int'l Co. v. Teleflex Inc., 127 S.Ct. 1727, 1741 (2007). 
Rather, the Examiner must identify an apparent reason to combine the known elements in the fashion claimed. Id. "Rejections on obviousness grounds cannot be sustained by mere conclusory statements; instead, there must be some articulated reasoning with some rational underpinning to support the legal conclusion of obviousness." Id., citing In re Kahn, 441 F.3d 977, 988 (Fed. Cir. 2006).

Regarding the Applicant’s argument that the combinations do not render the pending claims obvious, the Examiner respectfully disagrees. Examiner respectfully notes both Dronen and Fidler teach devices to be used during automated driving which detect features in the surrounding environment. It would have been obvious to one having ordinary skill in the art to have modified Dronen to use an IoU metric. Doing so allows for the model to “evaluate the quality of the generated polygon” (Fidler, page 25, paragraph 0100). Examiner respectfully refers the Applicant to the above 103 rejection section.

On page 13, Applicant argues: Applicant respectfully submits that the cited prior art does not teach, nor fairly suggest the subject matter of the independent claims, which are thus patentable. The patentability of the dependent claims flows at least from the patentability of the independent claims. 
Specifically, Dronen in view of Fidler fails to teach or to fairly suggest at least the following limitations of claim 1: determining a functional quality for each of the two or more data points by comparing the one or more outputs of the plurality of AI modules for the data point with the associated ground truth, wherein the functional quality describes the quality of the plurality of AI modules with respect to the object detection or the semantic segmentation; associating at least a first of the plurality of AI modules with at least one predefined driving condition; and selectively using the first AI module of the plurality of AI modules for environment recognition during automated driving of the motor vehicle when a current driving condition of the motor vehicle matches the at least one predefined driving condition. Reconsideration of this application and allowance of all claims therein are respectfully requested.

Regarding the Applicant’s argument that the prior art does not teach the independent claims, the Examiner respectfully disagrees. Examiner respectfully notes that the prior art teaches the independent claims. Specifically, Dronen teaches the newly amended claim limitations. Dronen teaches a plurality of AI modules because “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the feature detection device is the object detection and the library of a plurality of AI modules is the one or more feature detection models. 
Furthermore, Dronen teaches associating at least a first of the plurality of AI modules with at least one predefined driving condition (Dronen, page 14, paragraph 0046, “the feature detection device 105 can determine whether a privacy law or policy that restricts transmission of the sensor data is implemented at the vehicle 101 or the feature detection device 105 itself (step307). If such a privacy policy is implemented, the feature detection device 105 transmits other information associated with the sensor data from the vehicle to the external server in place of the sensor data based on determining that the confidence metric is below a confidence threshold. By way of example, the other information includes contextual information associated with a collection of the sensor data, the vehicle, an environment surrounding the vehicle, or a combination thereof (step 309). In other words, instead of transmitting raw sensor data that may be prohibited by applicable privacy policies or laws, the feature detection device 105 can record environment or other conditions associated with capturing sensor data that was below the confidence threshold. In this way, the mapping platform 109 can identify the types of conditions can lead to poor feature detection accuracy, and request sensor data (e.g., from other sources) falling within those conditions to create or re-train feature prediction models for the feature detection device” where “the feature detection device 105 and/or mapping platform 109 can include one or more statistical pattern matching or feature detection models such as, but not limited to, SVMs, neural networks, etc. to make feature predictions” (Dronen, page 16, paragraph 0059). Examiner notes that the predefined driving condition is when privacy laws are in place.). 
Regarding the Applicant’s argument that the dependent claims are allowable at least due in part to their dependency on the independent claims, the Examiner respectfully disagrees and notes the instant rejections and response to arguments regarding the independent claims above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Schläfer et al. (US 10,586,132 B2) also discusses a system for highly automated driving using sensor data. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAITLYN R HAEFNER whose telephone number is (571)272-1429. The examiner can normally be reached Monday - Thursday: 7:15 am - 5:15 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold, can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 
/K.R.H./Examiner, Art Unit 2148 /MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148

Prosecution Timeline

Feb 27, 2022
Application Filed
Jun 04, 2025
Non-Final Rejection — §101, §102, §103
Aug 26, 2025
Response Filed
Oct 03, 2025
Final Rejection — §101, §102, §103
Dec 01, 2025
Response after Non-Final Action
Jan 07, 2026
Request for Continued Examination
Jan 14, 2026
Response after Non-Final Action
Jan 20, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602431
METHODS FOR PERFORMING INPUT-OUTPUT OPERATIONS IN A STORAGE SYSTEM USING ARTIFICIAL INTELLIGENCE AND DEVICES THEREOF
2y 5m to grant Granted Apr 14, 2026
Patent 12572828
METHOD FOR INDUSTRY TEXT INCREMENT AND ELECTRONIC DEVICE
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
50%
Grant Probability
99%
With Interview (+66.7%)
4y 2m
Median Time to Grant
High
PTA Risk
Based on 4 resolved cases by this examiner. Grant probability derived from career allow rate.
