Prosecution Insights
Last updated: April 19, 2026
Application No. 18/142,429

RARE EXAMPLE MINING FOR AUTONOMOUS VEHICLES

Non-Final OA: §101, §102, §103, §112
Filed: May 02, 2023
Examiner: VO, STEVEN
Art Unit: 2148
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Waymo LLC
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Predicted OA Rounds: 1-2
Predicted Time to Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal; based on resolved cases with an interview)
Avg Prosecution: 3y 3m (typical timeline)
Career History: 3 total applications across all art units; 3 currently pending

Statute-Specific Performance

§101: 33.3% (-6.7% vs TC avg)
§103: 44.4% (+4.4% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 11.1% (-28.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 0 resolved cases.

Office Action

Rejections under §101, §102, §103, and §112
DETAILED ACTION

This action is in response to the application filed 05/02/2023. Claims 1-20 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 8 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 8 recites the limitation "the object detection neural network" in line 2. There is insufficient antecedent basis for this limitation in the claim. Because claim 8 depends on claim 1, it is unclear what object detection neural network is being referred to: claim 1 recites only an encoder neural network. Therefore, claim 8 fails to distinctly claim the subject matter and is indefinite. For purposes of examination, the limitation will be interpreted as "an object detection neural network."

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
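For context, the pipeline recited in independent claim 1 (obtain a sensor input, encode it into feature vectors, score each vector with a density estimation model, and derive a rareness score) can be sketched in code. This is an illustrative reconstruction only, not the applicant's implementation: the flattening "encoder", the single-Gaussian density model (claim 9 actually names a normalizing flow), and all function names are assumptions introduced for the sketch.

```python
import numpy as np

def encode(sensor_input: np.ndarray) -> np.ndarray:
    # Placeholder for the claimed encoder neural network: flatten each
    # input into a feature vector. Purely illustrative.
    return sensor_input.reshape(sensor_input.shape[0], -1)

def fit_density_model(train_vectors: np.ndarray):
    # Placeholder density estimation model: a single Gaussian fit by
    # maximum likelihood, chosen only because its log-density is
    # tractable in a few lines.
    mean = train_vectors.mean(axis=0)
    cov = np.cov(train_vectors, rowvar=False) + 1e-6 * np.eye(train_vectors.shape[1])
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    d = train_vectors.shape[1]

    def log_density(v: np.ndarray) -> float:
        diff = v - mean
        return float(-0.5 * (diff @ inv @ diff + logdet + d * np.log(2 * np.pi)))

    return log_density

def rareness_scores(feature_vectors, log_density):
    # Rareness is inversely related to density (see claim 10);
    # negative log-density is one monotone choice of "inverse".
    return [-log_density(v) for v in feature_vectors]

# Usage: a tight cluster of inputs plus one far-away query point.
rng = np.random.default_rng(0)
train = encode(rng.normal(0.0, 1.0, size=(500, 1, 2)))
log_density = fit_density_model(train)
queries = np.array([[0.0, 0.0], [8.0, 8.0]])
scores = rareness_scores(queries, log_density)
# The outlying query sits in a low-density region, so it scores rarer.
```

Any density model with a log-density (kernel density estimate, mixture model, normalizing flow) could be dropped in behind the same interface.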
Claims 1-20 are rejected under 35 U.S.C. 101 because they are directed to an abstract idea that does not amount to significantly more.

Regarding Claim 1:

Subject Matter Eligibility Analysis Step 1: The claim recites a method and is directed to a process, which is one of the statutory categories of invention.

Subject Matter Eligibility Analysis Step 2A Prong 1: The claim recites processing the sensor input … to generate one or more feature vectors for the sensor input (this limitation is a mental process since a human can mentally generate feature vectors from an input). The claim recites processing each of the one or more feature vectors … to generate a density score for the feature vector (this limitation is both a mathematical concept, since the equation for the density estimation model is given in the specification, and a mental process, since a human can use that equation to calculate the density score). The claim recites generating a rareness score for each of the one or more feature vectors from the density score, wherein the rareness score represents a degree to which a classification of an object depicted in the sensor input is rare relative to other objects (this limitation is likewise both a mathematical concept, since the equation is given in the specification, and a mental process, since a human can use that equation to calculate the rareness score). Therefore, claim 1 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: The claim recites obtaining a sensor input (this limitation is merely data gathering, which is insignificant extra-solution activity (see MPEP 2106.05(g))). The claim recites using an encoder network (this limitation references a generic term, which does not integrate the abstract idea into a practical application because it amounts to a mere "instruction to apply" (see MPEP 2106.05(f))).
The claim recites using a density estimation model (this limitation references a generic term, which does not integrate the abstract idea into a practical application because it amounts to a mere "instruction to apply" (see MPEP 2106.05(f))). Therefore, claim 1 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements in claim 1 do not provide significantly more than the abstract idea itself, taken alone and in combination. The method of obtaining a sensor input is well-understood, routine, and conventional; the court has ruled that "Receiving or transmitting data over a network, e.g., using the Internet to gather data" is a computer function that is well-understood, routine, and conventional (buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014)). The method of using an encoder network references a generic term, which amounts to a mere "instruction to apply" (see MPEP 2106.05(f)) and cannot provide significantly more. The method of using a density estimation model likewise references a generic term, which amounts to a mere "instruction to apply" (see MPEP 2106.05(f)) and cannot provide significantly more. Therefore, claim 1 is subject-matter ineligible.

Regarding Claim 2:

Subject Matter Eligibility Analysis Step 1: Claim 2 recites a method and is directed to a process, which is one of the four statutory categories of invention.

Subject Matter Eligibility Analysis Step 2A Prong 1: The claim recites process the sensor input to generate an intermediate feature map (this limitation is a mental process since a human could mentally calculate a feature map from the sensor input if an equation were given).
The claim recites process the intermediate feature map to generate a prediction output for the sensor input, wherein the prediction output characterizes one or more of (i) one or more regions of the sensor data or (ii) one or more objects depicted in the one or more regions (this limitation is a mental process since a human can mentally organize the feature map into regions/objects of the sensor data). Therefore, claim 2 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: The claim does not recite any additional elements. Therefore, claim 2 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The claim does not recite any additional elements, and thus claim 2 does not provide significantly more. Therefore, claim 2 is subject-matter ineligible.

Regarding Claim 3:

Subject Matter Eligibility Analysis Step 1: The claim recites a method and is directed to a process, which is one of the statutory categories of invention.

Subject Matter Eligibility Analysis Step 2A Prong 1: The claim recites generating the one or more feature vectors for the sensor input from the intermediate feature map generated by the prediction neural network (this limitation is a mental process since a person could mentally calculate feature vectors from a feature map if an equation were given). Therefore, claim 3 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: The claim does not recite any additional elements. Therefore, claim 3 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The claim does not recite any additional elements, and thus claim 3 does not provide significantly more. Therefore, claim 3 is subject-matter ineligible.

Regarding Claim 4:

Subject Matter Eligibility Analysis Step 1: The claim recites a method and is directed to a process, which is one of the statutory categories of invention.
Subject Matter Eligibility Analysis Step 2A Prong 1: Since claim 4 depends on claim 2, the Step 2A Prong 1 analysis of claim 2 applies here. Therefore, claim 4 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: The claim recites the prediction output for the sensor input comprises object detection prediction data for the sensor input that specifies one or more regions of the sensor data that are each predicted to depict a respective object (this limitation does not integrate the abstract idea into a practical application because it amounts to a mere "instruction" (see MPEP 2106.05(f))). Therefore, claim 4 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements in claim 4 do not provide significantly more than the abstract idea itself, taken alone and in combination, because the prediction output for the sensor input comprises object detection prediction data for the sensor input that specifies one or more regions of the sensor data that are each predicted to depict a respective object is an instruction to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). Therefore, claim 4 is subject-matter ineligible.

Regarding Claim 5:

Subject Matter Eligibility Analysis Step 1: The claim recites a method and is directed to a process, which is one of the statutory categories of invention.

Subject Matter Eligibility Analysis Step 2A Prong 1: Since claim 5 depends on claim 2, the Step 2A Prong 1 analysis of claim 2 applies here. Therefore, claim 5 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2: The claim recites the prediction output for the sensor input comprises trajectory prediction data for the sensor input that characterizes a predicted future trajectory of a target agent (this limitation does not integrate the abstract idea into a practical application because it amounts to a mere "instruction" (see MPEP 2106.05(f))). Therefore, claim 5 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements in claim 5 do not provide significantly more than the abstract idea itself, taken alone and in combination, because the prediction output for the sensor input comprises trajectory prediction data for the sensor input that characterizes a predicted future trajectory of a target agent is an instruction to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). Therefore, claim 5 is subject-matter ineligible.

Regarding Claim 6:

Subject Matter Eligibility Analysis Step 1: The claim recites a method and is directed to a process, which is one of the statutory categories of invention.

Subject Matter Eligibility Analysis Step 2A Prong 1: The claim recites generating, from at least the sensor input, a training data set for a downstream task, the generating comprising selecting one or more feature vectors from at least the feature vectors generated from the sensor input based on the respective rareness scores for the feature vectors (this limitation is a mental process since a human can mentally categorize a dataset based on features, which are based on the rareness score). The claim recites for each selected feature vector, generating a training example that includes the sensor input from which the selected feature vector is generated and including the training example in the training data (this limitation is a mental process since a human can mentally generate training examples for each feature).
Therefore, claim 6 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: The claim recites training a downstream neural network on the training data for the downstream task (this limitation does not integrate the abstract idea into a practical application because it amounts to mere "apply it on a computer" (see MPEP 2106.05(f))). Therefore, claim 6 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements in claim 6 do not provide significantly more than the abstract idea itself, taken alone and in combination, because training a downstream neural network on the training data for the downstream task uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). Therefore, claim 6 is subject-matter ineligible.

Regarding Claim 7:

Subject Matter Eligibility Analysis Step 1: The claim recites a method and is directed to a process, which is one of the statutory categories of invention.

Subject Matter Eligibility Analysis Step 2A Prong 1: Since claim 7 depends on claim 6, the Step 2A Prong 1 analysis of claim 6 applies here. Therefore, claim 7 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: The claim recites the downstream task is a three-dimensional object detection task (this limitation references a generic term, which does not integrate the abstract idea into a practical application because it amounts to a mere "instruction" (see MPEP 2106.05(f))). The claim recites the downstream neural network is the same neural network as the prediction neural network (this limitation references a generic term, which does not integrate the abstract idea into a practical application because it amounts to a mere "instruction" (see MPEP 2106.05(f))). Therefore, claim 7 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B: The additional elements in claim 7 do not provide significantly more than the abstract idea itself, taken alone and in combination, because the downstream task is a three-dimensional object detection task is an instruction to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)), and the downstream neural network is the same neural network as the prediction neural network is likewise an instruction to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). Therefore, claim 7 is subject-matter ineligible.

Regarding Claim 8:

Subject Matter Eligibility Analysis Step 1: The claim recites a method and is directed to a process, which is one of the statutory categories of invention.

Subject Matter Eligibility Analysis Step 2A Prong 1: The claim recites generating the training set of feature vectors by generating, for each sensor input in a second set of sensor data and using the object detection neural network, a respective feature vector for each of one or more regions in the sensor input that are predicted by the trained object detection neural network to depict an object (this limitation is a mental process since a human can mentally categorize features based on the region(s) in a sensor input). Therefore, claim 8 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: The claim recites training the density estimation model on the training set of feature vectors to maximize an expected log density score of the feature vectors in the training set (this limitation does not integrate the abstract idea into a practical application because it amounts to mere "apply it" (see MPEP 2106.05(f))). Therefore, claim 8 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B: The additional elements in claim 8 do not provide significantly more than the abstract idea itself, taken alone and in combination, because training the density estimation model on the training set of feature vectors to maximize an expected log density score of the feature vectors in the training set uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). Therefore, claim 8 is subject-matter ineligible.

Regarding Claim 9:

Subject Matter Eligibility Analysis Step 1: The claim recites a method and is directed to a process, which is one of the statutory categories of invention.

Subject Matter Eligibility Analysis Step 2A Prong 1: Since claim 9 depends on claim 1, the Step 2A Prong 1 analysis of claim 1 applies here. Therefore, claim 9 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: The claim recites the density estimation model is a normalizing flow model (this limitation references a generic term, which does not integrate the abstract idea into a practical application because it amounts to a mere "instruction" (see MPEP 2106.05(f))). Therefore, claim 9 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements in claim 9 do not provide significantly more than the abstract idea itself, taken alone and in combination, because the density estimation model is a normalizing flow model is an instruction that performs an abstract idea and cannot provide significantly more. Therefore, claim 9 is subject-matter ineligible.

Regarding Claim 10:

Subject Matter Eligibility Analysis Step 1: The claim is directed to a process, which is one of the statutory categories of invention.
Subject Matter Eligibility Analysis Step 2A Prong 1: The claim recites the rareness score for the feature vector is inversely proportional to the density score for the feature vector (this limitation is a mathematical concept since there is a mathematical relationship between the rareness score and the density score). Therefore, claim 10 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: The claim does not recite any additional elements. Therefore, claim 10 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The claim does not recite any additional elements, and thus claim 10 does not provide significantly more. Therefore, claim 10 is subject-matter ineligible.

Regarding Claim 11:

Subject Matter Eligibility Analysis Step 1: The claim recites a method and is directed to a process, which is one of the statutory categories of invention.

Subject Matter Eligibility Analysis Step 2A Prong 1: The claim recites ranking respective feature vectors generated from the plurality of sensor inputs by rareness scores (this limitation is a mental process since a human can mentally rank the vectors based on a number, i.e., the rareness score). The claim recites selecting a proper subset of respective feature vectors having the highest rareness scores according to the ranking (this limitation is a mental process since a human can mentally choose the vectors with the highest scores based on a ranking). Therefore, claim 11 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: The claim does not recite any additional elements. Therefore, claim 11 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The claim does not recite any additional elements, and thus claim 11 does not provide significantly more. Therefore, claim 11 is subject-matter ineligible.
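The rank-and-select step recited in claims 6 and 11 (rank feature vectors by rareness score, keep the proper subset with the highest scores, and build one training example per selected vector) can be sketched in a few lines. This is an illustrative sketch only; the function name, the dictionary-based training example, and the string "sensor inputs" are assumptions, not the applicant's implementation.

```python
def select_rare_examples(sensor_inputs, rareness, k):
    """Rank indices by rareness score (descending), keep the k highest,
    and emit one training example per selected index. Illustrative only."""
    ranked = sorted(range(len(rareness)), key=lambda i: rareness[i], reverse=True)
    return [
        {"sensor_input": sensor_inputs[i], "rareness": rareness[i]}
        for i in ranked[:k]
    ]

# Usage: three inputs with rareness scores 0.1, 3.0, and 0.5; keeping
# the top two selects the inputs scored 3.0 and 0.5.
examples = select_rare_examples(["frame_a", "frame_b", "frame_c"],
                                [0.1, 3.0, 0.5], k=2)
```

Keeping a proper subset (k strictly less than the number of candidates) matches the claim language; in practice a rareness threshold could replace the fixed k.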
Regarding Claim 12:

Subject Matter Eligibility Analysis Step 1: The claim recites a method and is directed to a process, which is one of the statutory categories of invention.

Subject Matter Eligibility Analysis Step 2A Prong 1: The claim recites generating, from the sensor input and other sensor inputs, one or more test scripts for a software module (this limitation is a mental process since a human can mentally create test scripts). The claim recites evaluating a performance of the software module by using the software module to process the one or more test scripts (this limitation is a mental process since a human can mentally evaluate software from a test script). Therefore, claim 12 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: The claim does not recite any additional elements. Therefore, claim 12 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The claim does not recite any additional elements, and thus claim 12 does not provide significantly more. Therefore, claim 12 is subject-matter ineligible.

Regarding Claim 13: Claim 13 has exactly the same wording as claim 1, except that the rareness represents a predicted behavior instead of a classification of an object. Regardless, the subject matter eligibility analysis of claim 1 applies to claim 13. Therefore, claim 13 is subject-matter ineligible.

Regarding Claim 14:

Subject Matter Eligibility Analysis Step 1: The claim recites a system and is directed to a machine, which is one of the statutory categories of invention.

Subject Matter Eligibility Analysis Step 2A Prong 1: The claim recites processing the sensor input … to generate one or more feature vectors for the sensor input (this limitation is a mental process since a human can mentally generate feature vectors from an input).
The claim recites processing each of the one or more feature vectors … to generate a density score for the feature vector (this limitation is both a mathematical concept, since the equation for the density estimation model is given in the specification, and a mental process, since a human can use that equation to calculate the density score). The claim recites generating a rareness score for each of the one or more feature vectors from the density score, wherein the rareness score represents a degree to which a classification of an object depicted in the sensor input is rare relative to other objects (this limitation is likewise both a mathematical concept, since the equation is given in the specification, and a mental process, since a human can use that equation to calculate the rareness score). Therefore, claim 14 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: The claim recites a system comprising: one or more computers; and one or more storage devices storing instructions that, when executed by the one or more computers, cause the one or more computers to perform operations (this element amounts to mere "apply it on computer(s)" (see MPEP 2106.05(f))). The claim recites obtaining a sensor input (this limitation is merely data gathering, which is insignificant extra-solution activity (see MPEP 2106.05(g))). The claim recites using an encoder network (this limitation references a generic term, which does not integrate the abstract idea into a practical application because it amounts to a mere "instruction to apply" (see MPEP 2106.05(f))). The claim recites using a density estimation model (this limitation references a generic term, which does not integrate the abstract idea into a practical application because it amounts to a mere "instruction to apply" (see MPEP 2106.05(f))). Therefore, claim 14 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B: The additional elements in claim 14 do not provide significantly more than the abstract idea itself, taken alone and in combination. A system comprising: one or more computers; and one or more storage devices storing instructions that, when executed by the one or more computers, cause the one or more computers to perform operations is mere "apply it on computer(s)" (see MPEP 2106.05(f)). The method of obtaining a sensor input is well-understood, routine, and conventional; the court has ruled that "Receiving or transmitting data over a network, e.g., using the Internet to gather data" is a computer function that is well-understood, routine, and conventional (buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014)). The method of using an encoder network references a generic term, which amounts to a mere "instruction to apply" (see MPEP 2106.05(f)). The method of using a density estimation model likewise references a generic term, which amounts to a mere "instruction to apply" (see MPEP 2106.05(f)). Therefore, claim 14 is subject-matter ineligible.

Regarding Claim 15:

Subject Matter Eligibility Analysis Step 1: Claim 15 recites a system and is directed to a machine, which is one of the four statutory categories of invention.

Subject Matter Eligibility Analysis Step 2A Prong 1: The claim recites process the sensor input to generate an intermediate feature map (this limitation is a mental process since a human could mentally calculate a feature map from the sensor input if an equation were given).
The claim recites process the intermediate feature map to generate a prediction output for the sensor input, wherein the prediction output characterizes one or more of (i) one or more regions of the sensor data or (ii) one or more objects depicted in the one or more regions (this limitation is a mental process since a human can mentally organize the feature map into regions/objects of the sensor data). Therefore, claim 15 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: The claim does not recite any additional elements. Therefore, claim 15 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The claim does not recite any additional elements, and thus claim 15 does not provide significantly more. Therefore, claim 15 is subject-matter ineligible.

Regarding Claim 16:

Subject Matter Eligibility Analysis Step 1: The claim recites a system and is directed to a machine, which is one of the statutory categories of invention.

Subject Matter Eligibility Analysis Step 2A Prong 1: The claim recites generating the one or more feature vectors for the sensor input from the intermediate feature map generated by the prediction neural network (this limitation is a mental process since a person could mentally calculate feature vectors from a feature map if an equation were given). Therefore, claim 16 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: The claim does not recite any additional elements. Therefore, claim 16 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The claim does not recite any additional elements, and thus claim 16 does not provide significantly more. Therefore, claim 16 is subject-matter ineligible.

Regarding Claim 17:

Subject Matter Eligibility Analysis Step 1: The claim recites a system and is directed to a machine, which is one of the statutory categories of invention.
Subject Matter Eligibility Analysis Step 2A Prong 1: Since claim 17 depends on claim 15, the Step 2A Prong 1 analysis of claim 15 applies here. Therefore, claim 17 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: The claim recites the prediction output for the sensor input comprises object detection prediction data for the sensor input that specifies one or more regions of the sensor data that are each predicted to depict a respective object (this limitation does not integrate the abstract idea into a practical application because it amounts to mere "apply it on a computer" (see MPEP 2106.05(f))). Therefore, claim 17 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements in claim 17 do not provide significantly more than the abstract idea itself, taken alone and in combination, because the prediction output for the sensor input comprises object detection prediction data for the sensor input that specifies one or more regions of the sensor data that are each predicted to depict a respective object uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). Therefore, claim 17 is subject-matter ineligible.

Regarding Claim 18:

Subject Matter Eligibility Analysis Step 1: The claim recites a system and is directed to a machine, which is one of the statutory categories of invention.

Subject Matter Eligibility Analysis Step 2A Prong 1: Since claim 18 depends on claim 15, the Step 2A Prong 1 analysis of claim 15 applies here.
Therefore, claim 18 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: The claim recites the prediction output for the sensor input comprises trajectory prediction data for the sensor input that characterizes a predicted future trajectory of a target agent (this limitation does not integrate the abstract idea into a practical application because it amounts to mere "apply it on a computer" (see MPEP 2106.05(f))). Therefore, claim 18 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements in claim 18 do not provide significantly more than the abstract idea itself, taken alone and in combination, because the prediction output for the sensor input comprises trajectory prediction data for the sensor input that characterizes a predicted future trajectory of a target agent uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). Therefore, claim 18 is subject-matter ineligible.

Regarding Claim 19:

Subject Matter Eligibility Analysis Step 1: The claim recites a system and is directed to a machine, which is one of the statutory categories of invention.

Subject Matter Eligibility Analysis Step 2A Prong 1: The claim recites generating, from at least the sensor input, a training data set for a downstream task, the generating comprising selecting one or more feature vectors from at least the feature vectors generated from the sensor input based on the respective rareness scores for the feature vectors (this limitation is a mental process since a human can mentally categorize a dataset based on features, which are based on the rareness score).
The claim recites for each selected feature vector, generating a training example that includes the sensor input from which the selected feature vector is generated and including the training example in the training data (this limitation is a mental process since a human can mentally generate training examples for each feature). Therefore, claim 19 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: The claim recites training a downstream neural network on the training data for the downstream task (this limitation does not integrate the abstract idea into a practical application because it amounts to mere "apply it on a computer" (see MPEP 2106.05(f))). Therefore, claim 19 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements in claim 19 do not provide significantly more than the abstract idea itself, taken alone and in combination, because training a downstream neural network on the training data for the downstream task uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). Therefore, claim 19 is subject-matter ineligible.

Regarding Claim 20:

Subject Matter Eligibility Analysis Step 1: The claim recites a computer program stored in a non-transitory computer-readable storage medium and is directed to an article of manufacture, which is one of the statutory categories of invention.

Subject Matter Eligibility Analysis Step 2A Prong 1: The claim recites processing the sensor input … to generate one or more feature vectors for the sensor input (this limitation is a mental process since a human can mentally generate feature vectors from an input). The claim recites processing each of the one or more feature vectors … to generate a density score for the feature vector (this limitation is both a mathematical concept, since the equation for the density estimation model is given in the specification.
It is also a mental process since a human can use that equation to calculate the density score). The claim recites generating a rareness score for each of the one or more feature vectors from the density score, wherein the rareness score represents a degree to which a classification of an object depicted in the sensor input is rare relative to other objects (this limitation is a mathematical concept, since the equation for the density estimation model is given in the specification. It is also a mental process since a human can use that equation to calculate the rareness score). Therefore, claim 20 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: The claim recites One or more non-transitory computer-readable storage media storing instructions (this element recites a generic computing component on which to perform the abstract idea (see MPEP 2106.05(f))). The claim recites obtaining a sensor input (this limitation is merely data gathering, which is an insignificant extra-solution activity (see MPEP 2106.05(g))). The claim recites using an encoder network (this limitation references a generic component, which does not integrate the abstract idea into a practical application because it amounts to a mere instruction to apply the abstract idea (see MPEP 2106.05(f))). The claim recites using a density estimation model (this limitation references a generic component, which does not integrate the abstract idea into a practical application because it amounts to a mere instruction to apply the abstract idea (see MPEP 2106.05(f))). Therefore, claim 20 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements in claim 20 do not provide significantly more than the abstract idea itself, taken alone and in combination, because One or more non-transitory computer-readable storage media storing instructions uses computers as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
The method of obtaining a sensor input is well understood, routine, and conventional. The court has ruled that “Receiving or transmitting data over a network, e.g., using the Internet to gather data” is recognized as a computer function that is well-understood, routine, and conventional (buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014)). The method of using an encoder network references a generic component, which does not integrate the abstract idea into a practical application because it amounts to a mere instruction to apply the abstract idea (see MPEP 2106.05(f)). The method of using a density estimation model references a generic component, which does not integrate the abstract idea into a practical application because it amounts to a mere instruction to apply the abstract idea (see MPEP 2106.05(f)). Therefore, claim 20 is subject matter ineligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 102(a)(1), which forms the basis for the rejections under this section made in this Office action:

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-6 and 13-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Cobb et al. (US 20110052068 A1) (hereafter referred to as Cobb).

Regarding claim 1, Cobb teaches obtaining a sensor input, (Cobb, page 3, paragraph 0030, “Network 110 receives video data (e.g., video stream(s), video images, or the like) from the video input source 105. The video input source 105 may be a video camera, a VCR, DVR, DVD, computer, web-cam device, or the like.
For example, the video input source 105 may be a stationary video camera aimed at a certain area (e.g., a subway station, a parking lot, a building entry/exit, etc.), which records the events taking place therein”). processing the sensor input using an encoder neural network to generate one or more feature vectors for the sensor input, (Cobb, page 4, paragraph 0037, “The context processor component 220 may receive the output from other stages of the pipeline (i.e., the tracked objects and the background and foreground models). Using this information, the context processor 220 may be configured to generate a stream of micro-feature vectors corresponding to foreground patches tracked (by tracker component 210)” Examiner notes that context processor 220 is being mapped to the encoder neural network). processing each of the one or more feature vectors using a density estimation model to generate a density score for the feature vector, (Cobb, page 6, paragraph 0047, “The anomaly detection component 322 is configured to compute a probability density function based on the existing clusters in the ART 325 and compute a probability density value for the micro-feature vector”). and generating a rareness score for each of the one or more feature vectors from the density score, wherein the rareness score represents a degree to which a classification of an object depicted in the sensor input is rare relative to other objects, (Cobb, page 8, paragraph 0064, “At step 476 the anomaly detection component 322 determines a rareness measure for the micro-feature vector. That is, the anomaly detection component 322 estimates a measure of the likelihood of observing the particular micro-feature vector, based on the probability density function and the probability micro-feature vector”).
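The claim 1 pipeline mapped above (encoder feature vectors, a density score per vector, and a rareness score derived from the density score) can be sketched in a few lines. This is purely illustrative and not taken from the application or from Cobb: the Gaussian kernel density estimate, the 1/density rareness formula (the inverse relationship the claim 10 rejection maps to Gao), and all variable names are assumptions standing in for the claimed encoder and density estimation model.

```python
import math
import random

def density_score(feature_vec, reference_features, bandwidth=1.0):
    """Gaussian kernel density estimate over previously seen feature
    vectors -- a stand-in for the claimed density estimation model."""
    total = 0.0
    for ref in reference_features:
        d2 = sum((a - b) ** 2 for a, b in zip(feature_vec, ref))
        total += math.exp(-d2 / (2.0 * bandwidth ** 2))
    return total / len(reference_features)

def rareness_score(density, eps=1e-12):
    """Rareness inversely proportional to the density score, as in the
    inverse relationship discussed for claim 10."""
    return 1.0 / (density + eps)

rng = random.Random(0)
# Feature vectors from "ordinary" sensor inputs, clustered near the origin.
reference = [[rng.gauss(0.0, 1.0) for _ in range(4)] for _ in range(500)]

features = {
    "common_input": [0.0, 0.0, 0.0, 0.0],  # inside the dense cluster
    "rare_input": [6.0, 6.0, 6.0, 6.0],    # far from anything seen before
}
scores = {name: rareness_score(density_score(vec, reference))
          for name, vec in features.items()}

# Claims 6 and 11: rank by rareness and keep the top-k inputs as mined
# training examples for a downstream task (k = 1 here).
mined = sorted(scores, key=scores.get, reverse=True)[:1]
assert mined == ["rare_input"]
```

Under this sketch, an input whose feature vector falls in a low-density region of the reference distribution receives a high rareness score and is selected for the mined training set, which is the behavior the claims describe.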
Regarding Claim 2, Cobb teaches all the elements of claim 1 as shown above, Cobb also teaches: process the sensor input to generate an intermediate feature map, (Cobb, page 4, paragraph 0037, “The context processor component 220 may receive the output from other stages of the pipeline (i.e., the tracked objects and the background and foreground models). Using this information, the context processor 220 may be configured to generate a stream of micro-feature vectors corresponding to foreground patches tracked (by tracker component 210)” Examiner notes that micro-feature vectors can be mapped to feature maps). and process the intermediate feature map to generate a prediction output for the sensor input, wherein the prediction output characterizes one or more of (i) one or more regions of the sensor data or (ii) one or more objects depicted in the one or more regions, (Cobb, page 4, paragraph 0036, “ the context processor component 220 may evaluate a foreground patch from frame-to-frame and output micro-feature vectors including values representing the foreground patch's hue entropy, magnitude-saturation ratio, orientation angle, pixel area, aspect ratio, groupiness (based on the pixel-level spatial distribution), legged-ness (based on a number of potential legs), verticality (based on per-pixel gradients), motion vector orientation, rigidity/animateness, periodicity of motion, etc.” and the context processor component 220 may output a stream of context events describing that foreground patch's height, width (in pixels), position (as a 2D coordinate in the scene), acceleration, velocity, orientation angle, etc.”) Examiner notes that the prior art anticipates both (i) and (ii) sections of the limitation. 
Regarding Claim 3, Cobb teaches all the elements of claims 1 and 2 as shown above, Cobb also teaches: generating one or more feature vectors for the sensor input from the intermediate feature map generated by the prediction neural network, (Cobb, page 4, paragraph 0039, “the computer vision engine 135 shown in FIG. 2, the classification of objects is performed by the micro-feature classifier 221 in the machine learning engine 140 using the micro-feature vectors that are produced by the computer vision engine 135”). Regarding Claim 4, Cobb teaches all the elements of claims 1 and 2 as shown above, Cobb also teaches: the prediction output for the sensor input comprises object detection prediction data for the sensor input that specifies one or more regions of the sensor data that are each predicted to depict a respective object, (Cobb, page 4, paragraph 0038, “the computer vision engine is configured to classify each tracked object as being one of a known category of objects using training data that defines a plurality of object types” and “the classification of "other" represents an affirmative assertion that the object is neither a "person" nor a "vehicle." Additionally, the estimator/identifier component may identify characteristics of the tracked object, e.g., for a person, a prediction of gender, an estimation of a pose (e.g., standing or sitting) or an indication of whether the person is carrying an object”). 
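Claims 2-4 as mapped above describe an intermediate feature map that both yields feature vectors and feeds a prediction output identifying regions predicted to depict objects. A toy sketch may make the data flow concrete; the 2x2 map, the channel-wise mean pooling, and the 0.5 activation threshold are all assumptions for illustration and do not reflect the application's actual networks.

```python
# A tiny "intermediate feature map": 2x2 spatial cells, 2 channels each.
feature_map = [
    [[0.1, 0.2], [0.9, 0.8]],
    [[0.0, 0.1], [0.7, 0.6]],
]

# Claim 3: derive a feature vector from the intermediate feature map
# (here, the channel-wise mean over all spatial cells).
cells = [cell for row in feature_map for cell in row]
feature_vector = [sum(cell[ch] for cell in cells) / len(cells)
                  for ch in range(2)]

# Claims 2 and 4: a prediction output naming regions predicted to depict
# an object (cells whose mean activation clears an assumed threshold).
detections = [
    (r, c)
    for r, row in enumerate(feature_map)
    for c, cell in enumerate(row)
    if sum(cell) / len(cell) > 0.5
]
assert detections == [(0, 1), (1, 1)]
```

The point of the sketch is only that one intermediate representation serves both outputs: the pooled `feature_vector` feeds the density/rareness scoring, while `detections` plays the role of the object detection prediction data.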
Regarding Claim 5, Cobb teaches all the elements of claims 1 and 2 as shown above, Cobb also teaches: the prediction output for the sensor input comprises trajectory prediction data for the sensor input that characterizes a predicted future trajectory of a target agent, (Cobb, page 5, paragraph 0040, “ the primitive event detector 212 may be configured to receive the output of the computer vision engine 135 (i.e., the video images, the micro-feature vectors, and context event stream) and generate a sequence of primitive events--labeling the observed actions or behaviors in the video with semantic meaning” and “for example, a sequence of primitive events related to observations of the computer vision engine 135 occurring at a parking lot could include language semantic vectors representing the following: "vehicle appears in scene," "vehicle moves to a given location," "vehicle stops moving," "person appears proximate to vehicle," "person moves," person leaves scene" "person appears in scene," "person moves proximate to vehicle," "person disappears," "vehicle starts moving," and "vehicle disappears."” Examiner notes that some of the language semantic vectors are data that describes a predicted future trajectory). Regarding Claim 6, Cobb teaches all the elements of claim 1 as shown above, Cobb also teaches: generating, from at least the sensor input, a training data set for a downstream task, the generating comprising selecting one or more feature vectors from at least the feature vectors generated from the sensor input based on the respective rareness scores for the feature vectors, (Cobb, page 5, paragraph 0045, “The micro-feature classifier 221 includes a learning component 310, a SOM-ART network component 340, and a classification component 320. The SOM-ART network component 340 includes a SOM 315 and an ART 325. 
The SOM-ART network component 340 provides a specialized neural network configured to create object type clusters from a group of inputs, e.g., micro-feature vectors”). for each selected feature vector, generating a training example that includes the sensor input from which the selected feature vector is generated and including the training example in the training data (Cobb, page 6, paragraph 0047, “The learning component 310 organizes the micro-feature vector elements in the SOM 315 neurons”). training a downstream neural network on the training data for the downstream task, (Cobb, page 6, paragraph 0047, “The learning component 310 may update the SOM-ART component 340 incrementally, i.e. as each micro-feature vector is received. Alternatively, the learning component 310 may collect a batch (predetermined number) of micro-feature vectors and update the SOM-ART component 340 periodically” Examiner notes that updating the SOM-ART component is the same as training a neural network).

Regarding claim 13, Cobb teaches obtaining a sensor input, (Cobb, page 3, paragraph 0030, “Network 110 receives video data (e.g., video stream(s), video images, or the like) from the video input source 105. The video input source 105 may be a video camera, a VCR, DVR, DVD, computer, web-cam device, or the like. For example, the video input source 105 may be a stationary video camera aimed at a certain area (e.g., a subway station, a parking lot, a building entry/exit, etc.), which records the events taking place therein”). processing the sensor input using an encoder neural network to generate one or more feature vectors for the sensor input, (Cobb, page 4, paragraph 0037, “The context processor component 220 may receive the output from other stages of the pipeline (i.e., the tracked objects and the background and foreground models).
Using this information, the context processor 220 may be configured to generate a stream of micro-feature vectors corresponding to foreground patches tracked (by tracker component 210) Examiner notes that context processor 220 is being mapped to the encoder neural network). processing each of the one or more feature vectors using a density estimation model to generate a density score for the feature vector, (Cobb, page 6, paragraph 0047, “The anomaly detection component 322 is configured to compute a probability density function based on the existing clusters in the ART 325 and compute a probability density value for the micro-feature vector”). and generating a rareness score for each of the one or more feature vectors from the density score, wherein the rareness score represents a degree to which a predicted behavior of an object depicted in the sensor input is rare relative to other objects, (Cobb, page 8, paragraph 0064, “At step 476 the anomaly detection component 322 determines a rareness measure for the micro-feature vector. That is, the anomaly detection component 322 estimates a measure of the likelihood of observing the particular micro-feature vector, based on the probability density function and the probability micro-feature vector”). Regarding Claim 14, Cobb teaches: One or more computers; and one or more storage devices storing instructions that, when executed by the one or more computers, cause the one or more computers to perform operations (Cobb, page 3, paragraph 0027, “One embodiment of the invention is implemented as a program product for use with a computer system. The program(s) of the program product defines functions of the embodiments (including the methods described herein) and can be contained on a variety of computer-readable storage media”). obtaining a sensor input, (Cobb, page 3, paragraph 0030, “Network 110 receives video data (e.g., video stream(s), video images, or the like) from the video input source 105. 
The video input source 105 may be a video camera, a VCR, DVR, DVD, computer, web-cam device, or the like. For example, the video input source 105 may be a stationary video camera aimed at a certain area (e.g., a subway station, a parking lot, a building entry/exit, etc.), which records the events taking place therein”). processing the sensor input using an encoder neural network to generate one or more feature vectors for the sensor input, (Cobb, page 4, paragraph 0037, “The context processor component 220 may receive the output from other stages of the pipeline (i.e., the tracked objects and the background and foreground models). Using this information, the context processor 220 may be configured to generate a stream of micro-feature vectors corresponding to foreground patches tracked (by tracker component 210) Examiner notes that context processor 220 is being mapped to the encoder neural network). processing each of the one or more feature vectors using a density estimation model to generate a density score for the feature vector, (Cobb, page 6, paragraph 0047, “The anomaly detection component 322 is configured to compute a probability density function based on the existing clusters in the ART 325 and compute a probability density value for the micro-feature vector”). and generating a rareness score for each of the one or more feature vectors from the density score, wherein the rareness score represents a degree to which a classification of an object depicted in the sensor input is rare relative to other objects, (Cobb, page 8, paragraph 0064, “At step 476 the anomaly detection component 322 determines a rareness measure for the micro-feature vector. That is, the anomaly detection component 322 estimates a measure of the likelihood of observing the particular micro-feature vector, based on the probability density function and the probability micro-feature vector”). 
Regarding Claim 15, Cobb teaches all the elements of claim 14 as shown above, Cobb also teaches: process the sensor input to generate an intermediate feature map, (Cobb, page 4, paragraph 0037, “The context processor component 220 may receive the output from other stages of the pipeline (i.e., the tracked objects and the background and foreground models). Using this information, the context processor 220 may be configured to generate a stream of micro-feature vectors corresponding to foreground patches tracked (by tracker component 210)” Examiner notes that micro-feature vectors can be mapped to feature maps). and process the intermediate feature map to generate a prediction output for the sensor input, wherein the prediction output characterizes one or more of (i) one or more regions of the sensor data or (ii) one or more objects depicted in the one or more regions, (Cobb, page 4, paragraph 0036, “ the context processor component 220 may evaluate a foreground patch from frame-to-frame and output micro-feature vectors including values representing the foreground patch's hue entropy, magnitude-saturation ratio, orientation angle, pixel area, aspect ratio, groupiness (based on the pixel-level spatial distribution), legged-ness (based on a number of potential legs), verticality (based on per-pixel gradients), motion vector orientation, rigidity/animateness, periodicity of motion, etc.” and the context processor component 220 may output a stream of context events describing that foreground patch's height, width (in pixels), position (as a 2D coordinate in the scene), acceleration, velocity, orientation angle, etc.”) Examiner notes that the prior art anticipates both (i) and (ii) sections of the limitation. 
Regarding Claim 16, Cobb teaches all the elements of claims 14 and 15 as shown above, Cobb also teaches: generating one or more feature vectors for the sensor input from the intermediate feature map generated by the prediction neural network, (Cobb, page 4, paragraph 0039, “the computer vision engine 135 shown in FIG. 2, the classification of objects is performed by the micro-feature classifier 221 in the machine learning engine 140 using the micro-feature vectors that are produced by the computer vision engine 135”). Regarding Claim 17, Cobb teaches all the elements of claims 14 and 15 as shown above, Cobb also teaches: the prediction output for the sensor input comprises object detection prediction data for the sensor input that specifies one or more regions of the sensor data that are each predicted to depict a respective object, (Cobb, page 4, paragraph 0038, “the computer vision engine is configured to classify each tracked object as being one of a known category of objects using training data that defines a plurality of object types” and “the classification of "other" represents an affirmative assertion that the object is neither a "person" nor a "vehicle." Additionally, the estimator/identifier component may identify characteristics of the tracked object, e.g., for a person, a prediction of gender, an estimation of a pose (e.g., standing or sitting) or an indication of whether the person is carrying an object”). 
Regarding Claim 18, Cobb teaches all the elements of claims 14 and 15 as shown above, Cobb also teaches: the prediction output for the sensor input comprises trajectory prediction data for the sensor input that characterizes a predicted future trajectory of a target agent, (Cobb, page 5, paragraph 0040, “ the primitive event detector 212 may be configured to receive the output of the computer vision engine 135 (i.e., the video images, the micro-feature vectors, and context event stream) and generate a sequence of primitive events--labeling the observed actions or behaviors in the video with semantic meaning” and “for example, a sequence of primitive events related to observations of the computer vision engine 135 occurring at a parking lot could include language semantic vectors representing the following: "vehicle appears in scene," "vehicle moves to a given location," "vehicle stops moving," "person appears proximate to vehicle," "person moves," person leaves scene" "person appears in scene," "person moves proximate to vehicle," "person disappears," "vehicle starts moving," and "vehicle disappears."” Examiner notes that some of the language semantic vectors are data that describes a predicted future trajectory). Regarding Claim 19, Cobb teaches all the elements of claim 14 as shown above, Cobb also teaches: generating, from at least the sensor input, a training data set for a downstream task, the generating comprising selecting one or more feature vectors from at least the feature vectors generated from the sensor input based on the respective rareness scores for the feature vectors, (Cobb, page 5, paragraph 0045, “The micro-feature classifier 221 includes a learning component 310, a SOM-ART network component 340, and a classification component 320. The SOM-ART network component 340 includes a SOM 315 and an ART 325. 
The SOM-ART network component 340 provides a specialized neural network configured to create object type clusters from a group of inputs, e.g., micro-feature vectors”). for each selected feature vector, generating a training example that includes the sensor input from which the selected feature vector is generated and including the training example in the training data (Cobb, page 6, paragraph 0047, “The learning component 310 organizes the micro-feature vector elements in the SOM 315 neurons”). training a downstream neural network on the training data for the downstream task, (Cobb, page 6, paragraph 0047, “The learning component 310 may update the SOM-ART component 340 incrementally, i.e. as each micro-feature vector is received. Alternatively, the learning component 310 may collect a batch (predetermined number) of micro-feature vectors and update the SOM-ART component 340 periodically” Examiner notes that updating the SOM-ART component is the same as training a neural network).

Regarding Claim 20: One or more non-transitory computer-readable storage media storing instructions that when executed by one or more computers cause the one or more computers to perform operations (Cobb, page 3, paragraph 0027, “One embodiment of the invention is implemented as a program product for use with a computer system. The program(s) of the program product defines functions of the embodiments (including the methods described herein) and can be contained on a variety of computer-readable storage media”). obtaining a sensor input, (Cobb, page 3, paragraph 0030, “Network 110 receives video data (e.g., video stream(s), video images, or the like) from the video input source 105. The video input source 105 may be a video camera, a VCR, DVR, DVD, computer, web-cam device, or the like.
For example, the video input source 105 may be a stationary video camera aimed at a certain area (e.g., a subway station, a parking lot, a building entry/exit, etc.), which records the events taking place therein”). processing the sensor input using an encoder neural network to generate one or more feature vectors for the sensor input, (Cobb, page 4, paragraph 0037, “The context processor component 220 may receive the output from other stages of the pipeline (i.e., the tracked objects and the background and foreground models). Using this information, the context processor 220 may be configured to generate a stream of micro-feature vectors corresponding to foreground patches tracked (by tracker component 210) Examiner notes that context processor 220 is being mapped to the encoder neural network). processing each of the one or more feature vectors using a density estimation model to generate a density score for the feature vector, (Cobb, page 6, paragraph 0047, “The anomaly detection component 322 is configured to compute a probability density function based on the existing clusters in the ART 325 and compute a probability density value for the micro-feature vector”). and generating a rareness score for each of the one or more feature vectors from the density score, wherein the rareness score represents a degree to which a classification of an object depicted in the sensor input is rare relative to other objects, (Cobb, page 8, paragraph 0064, “At step 476 the anomaly detection component 322 determines a rareness measure for the micro-feature vector. That is, the anomaly detection component 322 estimates a measure of the likelihood of observing the particular micro-feature vector, based on the probability density function and the probability micro-feature vector”). Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Cobb in view of Mao et al. (3D Object Detection for Autonomous Driving: A Comprehensive Survey) (hereafter referred to as Mao).
Cobb teaches the methods of claims 1 and 6 and: the downstream neural network is the same neural network as the prediction neural network, (Cobb, Figures 3A and 3B. Examiner notes that “The micro-feature classifier 221 includes a learning component 310, a SOM-ART network component 340, and a classification component 320. The SOM-ART network component 340 includes a SOM 315 and an ART 325. The SOM-ART network component 340 provides a specialized neural network configured to create object type clusters from a group of inputs, e.g., micro-feature vectors” (Cobb, page 5, paragraph 0045), and Figure 3B shows that the micro-feature classifier 221 receives feature vectors and produces a predictive output. This means that the micro-feature classifier 221 is both the downstream neural network and the prediction neural network).

Cobb does not teach, but Mao does teach the downstream task is a three-dimensional object detection task, (Mao, paragraph 0002, “To obtain a comprehensive understanding of driving environments, many vision tasks can be involved in a perception system, e.g. object detection and tracking, lane detection, and semantic and instance segmentation. Among these perception tasks, 3D object detection is one of the most indispensable tasks in an automotive perception system”.) Cobb and Mao are considered analogous because both deal with object detection. It would have been obvious to one having ordinary skill in the art prior to the effective filing date to have modified Cobb to have the downstream task be a three-dimensional object detection task. Doing so allows “geometric information predicted by 3D object detection in real-world coordinates [to] be directly utilized to measure the distances between the ego-vehicle and critical objects, and to further help plan driving routes and avoid collisions” (Mao, paragraph 0002).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Cobb in view of Kuang et al.
(Computer Vision and Normalizing Flow-Based Defect Detection) (hereafter referred to as Kuang). Cobb teaches the method of claim 1. Cobb does not teach, but Kuang does teach the density estimation model is a normalizing flow, (Kuang, page 3, “The objective of density estimation is to learn the underlying probability density from a set of independent and identically distributed sample data [18]. In 2020, M. Rudolph et al. [19] proposed a normalizing flow-based model called DifferNet, which utilizes a latent space of normalizing flow to represent normal samples’ feature distribution”) Cobb and Kuang are considered analogous because both deal with detecting anomalies using sensor data. It would have been obvious to one having ordinary skill in the art prior to the effective filing date to have modified Cobb to use DifferNet for the density estimation model. One of ordinary skill in the art would know doing so is a simple substitution of one known element (density estimation model) for another (DifferNet) to obtain predictable results (generating a density score) (MPEP 2141 (III)(B) Simple substitution of one known element for another to obtain predictable results).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Cobb in view of Gao et al. (WO 2022226434 A1) (hereafter referred to as Gao). Cobb teaches the method of claim 1. Cobb does not teach, but Gao does teach the rareness score for the feature vector is inversely proportional to the density score for the feature vector (Gao, page 17, paragraph 0067, “The self-driving behavior metric is related to a rareness of an event and a value(s) of the criteria of the self-driving behavior metric are calculated. For each criterion in a particular self-driving behavior metric, the probability density of this criterion indicates the rareness of the calculated value. The rarer the calculated value, the more unusual (and potentially improper) is the behavior.
Also, a smaller criteria value (e.g., smaller minimum distance) may indicate a more improper self-driving behavior. Thus, the self-driving behavior metric for each criterion can be defined to be the inverse of the probability density and the actual criterion value”) Cobb and Gao are considered analogous to the claimed invention because both deal with retrieving driving data and evaluating rare/improper scenarios. It would have been obvious to one having ordinary skill in the art prior to the effective filing date to have modified Cobb to use the inverse relationship to calculate the rareness score. One of ordinary skill in the art would have known to apply the known technique of defining a rareness score as the inverse of a density score from Gao, to the anomaly detection method from Cobb. Therefore, applying Gao’s technique would yield the predictable result of determining the rarity of classified objects (MPEP 2141 (III)(D) Applying a known technique to a known device ready for improvement to yield predictable results).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Cobb in view of Daiki (US 20220156529 A1) (hereafter referred to as Daiki). Cobb teaches the method of claim 1 and claim 6. Cobb does not teach, but Daiki does teach ranking respective feature vectors generated from the plurality of sensor inputs by rareness scores, and selecting a proper subset of respective feature vectors having the highest rareness scores according to the ranking (Daiki, page 3, paragraph 0022, “detection program 200 can classify an image as an anomaly utilizing a distance vector based on features of the image” and “ranks and selects a top “k” elements (e.g., from k-means clustering) to obtain a reduced distance vector”) Cobb and Daiki are considered analogous to the claimed invention because both deal with anomaly detection.
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify Cobb by using the ranking and selecting method from Daiki. One of ordinary skill in the art would have known to apply the known technique of ranking and selecting scores, from Daiki, to the anomaly detection method from Cobb. Therefore, applying Daiki's technique would yield the predictable result of ranking and selecting the rarest classified objects, in order to improve training efficiency (MPEP 2141(III)(D), applying a known technique to a known device ready for improvement to yield predictable results). Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Cobb in view of Chen et al. (Generating Autonomous Driving Test Scenarios based on OpenSCENARIO) (hereafter referred to as Chen). Cobb teaches the method of claim 1. Cobb does not teach, but Chen does teach, generating, from the sensor input and other sensor inputs, one or more test scripts for a software module (Chen, Section 2, "ASAM has proposed the OpenSCENARIO standard, which developers can easily use to describe test scenarios and clearly describe the dynamic behavior of traffic participants" and, Chen, Section 2.2.2, "51Sim-One is an autonomous driving system independently developed by 51WORLD that integrates multi-sensor simulation, vehicle dynamics, road and scenario simulation, traffic flow and intelligent body simulation, perception and decision simulation, evaluation indicators, and autonomous driving behavior training"; Examiner notes that 51Sim-One supports and uses the OpenSCENARIO standard),
and evaluating a performance of the software module by using the software module to process the one or more test scripts (Chen, Section 1, "For autonomous vehicles in the simulation world, the autonomous driving system is based on the function under test (e.g., passing through the intersection), and the system under test decides control plan (e.g., local path) based on the semantic data in the simulation platform (e.g., self-vehicle speed), the dynamic model is converted into the corresponding response speed, etc., and fed back to the simulation platform, the user can evaluate the performance of the autonomous driving system through the recorded data in the simulation platform, so as to evaluate the autonomous driving system"). Cobb and Chen are considered analogous to the claimed invention because both concern evaluating autonomous driving scenarios. It would have been obvious to one having ordinary skill in the art before the effective filing date to have modified Cobb to also generate the test scripts for evaluation used in Chen. One of ordinary skill in the art would have known to apply the known technique of creating test scripts and evaluating a performance of the software, from Chen, to the anomaly detection method from Cobb. Therefore, applying Chen's technique would yield the predictable result of evaluating software performance on test scripts generated from rare scenarios, in order to improve testing efficiency (MPEP 2141(III)(D), applying a known technique to a known device ready for improvement to yield predictable results).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Seow et al. ("UNSUPERVISED LEARNING OF FEATURE ANOMALIES FOR A VIDEO SURVEILLANCE SYSTEM") discloses techniques for analyzing a scene depicted in an input stream of video frames captured by a video camera. Virkar et al.
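As a loose illustration of the mapped limitation, generating a test script from a mined rare scenario and then scoring a software module against it: the XML stub below is only OpenSCENARIO-flavored, not a valid OpenSCENARIO file, and every name here is hypothetical rather than drawn from Chen or Cobb:

```python
def make_test_script(scenario_name, ego_speed):
    """Emit a minimal OpenSCENARIO-flavored XML stub for a mined rare scenario.
    Illustrative only; real OpenSCENARIO files carry far more structure."""
    return (f'<OpenSCENARIO><Storyboard name="{scenario_name}">'
            f'<Ego speed="{ego_speed}"/></Storyboard></OpenSCENARIO>')

def evaluate(software_module, scripts):
    """Evaluate a software module by having it process each generated script,
    collecting one result per script (e.g., pass/fail in a simulator)."""
    return [software_module(script) for script in scripts]
```

A module under test would be any callable that consumes a script, for example a simulator front end; the list of per-script results is what a practitioner would aggregate into a performance metric.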
("Machine Learning Methods and Systems for Identifying Patterns in Data") discloses methods for training machines to categorize data, and/or recognize patterns in data, and machines and systems so trained. Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEVEN VO, whose telephone number is (571) 272-9622. The examiner can normally be reached Monday - Friday from 7:00 am - 3:00 pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michelle Bechtold, can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /S.V./ Examiner, Art Unit 2148 /MICHELLE T BECHTOLD/ Supervisory Patent Examiner, Art Unit 2148

Prosecution Timeline

May 02, 2023
Application Filed
Feb 04, 2026
Non-Final Rejection — §101, §102, §103, §112 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
