Prosecution Insights
Last updated: April 19, 2026
Application No. 17/398,436

HARD EXAMPLE MINING FOR TRAINING A NEURAL NETWORK

Final Rejection: §101, §103

Filed: Aug 10, 2021
Examiner: BAKER, EZRA JAMES
Art Unit: 2126
Tech Center: 2100 — Computer Architecture & Software
Assignee: Waymo LLC
OA Round: 4 (Final)

Grant Probability: 50% (Moderate)
Expected OA Rounds: 5-6
Expected Time to Grant: 4y 3m
Grant Probability With Interview: 99%
Examiner Intelligence

Career Allow Rate: 50% (7 granted / 14 resolved; -5.0% vs TC avg)
Interview Lift: +77.8% for resolved cases with interview (strong)
Avg Prosecution: 4y 3m typical timeline
Currently Pending: 33
Total Applications: 47 across all art units

Statute-Specific Performance

§101: 31.8% (-8.2% vs TC avg)
§103: 35.9% (-4.1% vs TC avg)
§102: 7.9% (-32.1% vs TC avg)
§112: 21.8% (-18.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 14 resolved cases.

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

The present application is being examined based on the claims filed 12/29/2025. Claims 1-12, 14, 17-18, and 20-24 are pending.

Response to Amendment

This Office Action is in response to Applicant’s communication filed 12/29/2025, responding to the Office Action mailed 08/28/2025. The Applicant’s remarks and any amendments to the claims or specification have been considered, with the results that follow.

Response to Arguments

Regarding 35 U.S.C. 101

In Remarks pages 9-12, Argument 1 (Examiner summarizes Applicant’s arguments): Applicant makes the following arguments, primarily citing Ex Parte Desjardins:

Examiner treats the claims at too high a level of generality.

The claims are not directed to an abstract idea because they are integrated into a practical application. Applicant points to page 2 of the specification, arguing that an improvement is obtained by identifying hard examples and training based on the hard examples.

The claims as amended do not merely recite the idea of a solution or outcome, and Desjardins teaches to avoid this line of reasoning. Applicant contends that the limitations of the claims lay out a specific method which automatically and effectively identifies hard examples, avoiding the expensive and time-consuming process of human labeling. Applicant argues that the claims reflect how automatically mined hard examples are incorporated into a training dataset on which a model is trained, thus reducing training time and improving model performance.

Examiner’s response to Argument 1: Examiner disagrees. Regarding Ex Parte Desjardins, it is important to note that the claims there were directed to a novel, improved method of machine learning training.
By contrast, Applicant’s claims employ merely generic training without further details about the method of training: “training the task neural network on the machine learning task using the training dataset that includes the one or more of the plurality of sensor data inputs”. Applicant’s training merely does what any other machine learning training system would do: learn to perform a task from a training dataset. This limitation does not provide an inventive solution in machine learning but instead broadly recites any type of machine learning training based on the mental process.

Moreover, Applicant’s arguments focus heavily on elements that are identified to be abstract ideas, which are listed as follows: Applicant fails to argue why these limitations could not be performed in the human mind.

Examiner points out that, crucially (MPEP 2106.05(a)): It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements […] In addition, the improvement can be provided by the additional element(s) in combination with the recited judicial exception.

While the judicial exceptions may provide improvements to identifying hard examples, the judicial exception alone cannot provide an improvement. Examiner maintains that the assessment of the additional elements is not overly broad, but instead treats them at the level of breadth at which they are recited. Merely gathering data for machine learning and performing generic machine learning training is not sufficient to integrate the judicial exception into a practical application. Even when viewed in combination, the claim is primarily directed to mental processes and alleged improvements to the mental process. Further, Ex Parte Desjardins does not erase the wealth of judicial precedent that came before it. The MPEP, along with judicial precedent, makes it clear that the claims cannot be eligible as currently recited.
For example, the claims of the patents in Recentive Analytics v. Fox Corp recite using machine learning training for live event scheduling. The judges in Recentive concluded: “Today, we hold only that patents that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101.” Just as in Recentive, the instant application applies generic machine learning to the allegedly new data environment of identified hard example data, without reflecting any improvements to the machine learning itself in the claims. Examiner further points to Example 47, claim 2 of the July 2024 Subject Matter Eligibility guidance, which applies machine learning to anomaly detection. However, claim 2 does not provide improvements to machine learning training nor any concrete steps that provide for improved network security. Therefore, for the reasoning explicitly laid out in MPEP 2106.05(a), Recentive Analytics v. Fox Corp, Example 47, and Ex Parte Desjardins, the claims cannot be eligible under 35 U.S.C. 101.

Regarding 35 U.S.C. 103

In Remarks pages 12-13, Argument 2 (Examiner summarizes Applicant’s arguments): Applicant argues, based on the claim amendments, that Raghunathan does not teach the feature of determining a level of inconsistency[…]. Applicant argues that Raghunathan does not use a “single trained prediction model” to generate predictions but instead uses two different biased classifiers to be compared for inconsistency, nor does Raghunathan teach the classifiers processing inputs “each taken at a corresponding one of the plurality of different times”.

Examiner’s response to Argument 2: Applicant’s arguments amount to attacking a single reference when the rejection relies upon multiple references.
In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

In Remarks pages 13-14, Argument 2 (Examiner summarizes Applicant’s argument): Applicant states that Examiner relies on Mordan to teach the feature of processing the plurality of sensor data inputs[…]. Applicant argues that Mordan does not determine a level of inconsistency between predictions as recited in the claims, but instead teaches aggregating the predictions of the same attributes detected in each of the images or frames of the video.

Examiner’s response to Argument 2: Examiner does not rely on Mordan to teach determining the level of inconsistency, but relies instead on Raghunathan for that portion of the claim. One cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Moreover, while Mordan may not teach the claim in its entirety, Examiner notes that the aggregated vote could be interpreted as an inconsistency measure in which more disagreement in the vote represents more inconsistency in predictions.

In Remarks, pages 14-15, Argument 3 (Examiner summarizes Applicant’s arguments): Applicant argues that it is unclear how Raghunathan is modified by Mordan, since Mordan does not teach inconsistency. Applicant further argues that, at best, the combination would result in evaluating each frame using two different classifiers and comparing their predictions, not the limitations claimed. Applicant argues that neither alone nor in combination is there any teaching or suggestion of using different predictions from the single model to determine measures of inconsistency.
Examiner’s response to Argument 3: Applicant misconstrues Examiner’s rejections. Raghunathan teaches a particular kind of inconsistency metric to determine hard examples, and then using a hard example dataset to train a task neural network. Mordan teaches a single classifier model which is used to generate a plurality of predictions based on video data. Applicant provides no reason that the inconsistency metric of Raghunathan cannot be applied to the plurality of predictions generated by the single classifier model of Mordan, but instead rests on mere allegations that it cannot be done and that their combination would not result in the claimed invention. Examiner does not argue that any single reference teaches the entire claim, but that when the teachings of Raghunathan and Mordan are combined they meet the limitations of the claim, and that there is good reason to combine them. Examiner points to the rejections for a showing that each limitation claimed by Applicant is taught by a respective portion of either Raghunathan or Mordan. Applicant cannot attack each reference individually when the rejection is based on their combination.

Allowable Subject Matter

Claims 7, 8, 10, 11, and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Examiner notes that the claims must also be amended to overcome the objections and rejections under 35 U.S.C. 101 prior to allowance.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-12, 14, 17-18, and 20-24 are rejected under 35 U.S.C. 101 for containing an abstract idea without significantly more.
Regarding Claim 1:

Step 1 – Is the claim to a process, machine, manufacture, or composition of matter? Yes, the claim is to a process.

Step 2A – Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes, the claim recites the following abstract ideas:

“processing the plurality of sensor data inputs using a single trained prediction model to generate a plurality of predictions about a characteristic of an object of the scene, wherein each of the plurality of predictions is a prediction about the characteristic of the object of the scene at a corresponding one of the plurality of different times” — This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgement, opinion) which can be performed by the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.), because it amounts to making a plurality of judgements based on given data.

“determining a level of inconsistency between the plurality of predictions about the characteristic of the object of the scene” — This limitation is directed to the abstract idea of a mental process (see MPEP 2106.04(a)(2) III. C.), because it amounts to evaluating the differences between predictions made.

“determining that the level of inconsistency exceeds a threshold level” — This limitation is directed to the abstract idea of a mental process (see MPEP 2106.04(a)(2) III. C.), because it amounts to an evaluation of a given value to see if it is greater than another value.
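For reference, the limitations identified above describe a simple loop: predict per time step, measure disagreement, and threshold. The sketch below is purely illustrative (it is not code from the application); all names are hypothetical, and it uses the class-change count of claim 5 as one possible inconsistency measure.

```python
# Illustrative sketch of the claim 1 steps; names (predict_fn, inputs,
# count_changes) are hypothetical, not drawn from the application.

def count_changes(predictions):
    """Count how many times consecutive predictions differ (claim 5's measure)."""
    return sum(1 for a, b in zip(predictions, predictions[1:]) if a != b)

def mine_hard_examples(inputs, predict_fn, threshold):
    """inputs: sensor data of the same scene at successive times.
    predict_fn: a single trained prediction model (e.g., a class predictor).
    Returns the inputs to add to the training dataset, or [] if consistent."""
    # Step 1: one prediction per time step from the single trained model.
    predictions = [predict_fn(x) for x in inputs]
    # Step 2: level of inconsistency between the plurality of predictions.
    inconsistency = count_changes(predictions)
    # Step 3: threshold test; inconsistent inputs become hard examples.
    return list(inputs) if inconsistency > threshold else []

# A model that flips between "car" and "truck" across frames of one scene.
frames = ["f0", "f1", "f2", "f3"]
flaky = {"f0": "car", "f1": "truck", "f2": "car", "f3": "car"}.get
print(mine_hard_examples(frames, flaky, threshold=1))  # 2 changes > 1: all frames kept
```

This is the structure the rejection characterizes as mental steps plus generic data gathering; the dispute is whether the combination of these steps integrates them into a practical application.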
Step 2A – Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? No. The additional elements are:

“A method for determining hard example sensor data inputs for training a task neural network of an autonomous vehicle” — This limitation is directed to a field of use (see MPEP 2106.05(h)), as it merely limits the abstract idea to the technological environments of sensors and hard examples.

“wherein the task neural network is configured to receive a sensor data input and to generate a respective output for the sensor data input to perform a machine learning task, the method comprising” — This limitation is directed to mere data gathering and outputting, which has been recognized by the courts (as per Ultramercial, 772 F.3d at 715, 112 USPQ2d at 1754) as insignificant extra-solution activity (see MPEP 2106.05(g)).

“receiving a plurality of sensor data inputs depicting a same scene of an environment wherein each of the plurality of sensor data inputs are taken at a corresponding time of a plurality of times during a predetermined time period” — This limitation is directed to mere data gathering and outputting, recognized as insignificant extra-solution activity (see MPEP 2106.05(g)).

“generated by the single trained prediction model from the plurality of sensor data inputs each taken at a corresponding one of the plurality of different times” — This limitation is directed to mere instructions to apply a judicial exception. Using machine learning training to apply a judicial exception (see MPEP 2106.05(f)) is insufficient to integrate the judicial exception into a practical application. Even if the training is implemented on a generic computer (see MPEP 2106.05(f)(2), 2106.04(d)), the limitation does not integrate the judicial exception into a practical application.

“based on determining that the level of inconsistency exceeds a threshold level, adding one or more of the plurality of sensor data inputs to a training dataset” — This limitation is directed to mere data gathering and outputting, recognized as insignificant extra-solution activity (see MPEP 2106.05(g)).

“and training the task neural network on the machine learning task using the training dataset that includes the one or more sensor data inputs” — This limitation is directed to mere instructions to apply a judicial exception (see MPEP 2106.05(f)). Even if the training is implemented on a generic computer (see MPEP 2106.05(f)(2), 2106.04(d)), the limitation does not integrate the judicial exception into a practical application.

Step 2B – Does the claim recite additional elements that amount to significantly more than the abstract idea itself? No. The additional elements as identified in Step 2A Prong 2:

“A method for determining hard example sensor data inputs for training a task neural network of an autonomous vehicle” — Limiting a judicial exception to a particular field of use (see MPEP 2106.05(h)) cannot provide significantly more than the judicial exception itself.
“wherein the task neural network is configured to receive a sensor data input and to generate a respective output for the sensor data input to perform a machine learning task, the method comprising” — This limitation is recited at a high level of generality and amounts to mere data gathering of transmitting and receiving data over a network, which is well-understood, routine, and conventional activity (see MPEP 2106.05(d) II.) and cannot amount to significantly more than the judicial exception.

“receiving a plurality of sensor data inputs depicting a same scene of an environment wherein each of the plurality of sensor data inputs are taken at a corresponding time of a plurality of times during a predetermined time period” — This limitation is recited at a high level of generality and amounts to well-understood, routine, and conventional data gathering (see MPEP 2106.05(d) II.).

“generated by the single trained prediction model from the plurality of sensor data inputs each taken at a corresponding one of the plurality of different times” — Mere instructions to apply a judicial exception (see MPEP 2106.05(f)) and use of a generic computer as a tool (see MPEP 2106.05(f)(2), 2106.05(d)) cannot amount to significantly more than the judicial exception itself.

“based on determining that the level of inconsistency exceeds a threshold level, adding one or more of the plurality of sensor data inputs to a training dataset” — This limitation is recited at a high level of generality and amounts to mere storing and retrieving of information in memory, which is well-understood, routine, and conventional activity (see MPEP 2106.05(d) II.).

“and training the task neural network on the machine learning task using the training dataset that includes the one or more sensor data inputs” — Mere instructions to apply a judicial exception (see MPEP 2106.05(f)) and use of a generic computer as a tool (see MPEP 2106.05(f)(2), 2106.05(d)) cannot amount to significantly more than the judicial exception itself.

Regarding Claim 2:

Claim 2 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1, which includes an abstract idea (see rejection for claim 1). The claim recites the additional limitation:

Step 2A Prong 1: “and wherein generating the plurality of predictions about the characteristic of the object of the scene includes: for each of the plurality of sensor data inputs, generating a respective prediction using the single trained prediction model that is a classifier network” — This limitation is directed to the abstract idea of a mental process (see MPEP 2106.04(a)(2) III. C.), because it amounts to evaluating given data using a known algorithm, for example by performing a series of matrix multiplications and activation functions.

Thus, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B.

Regarding Claim 3:

Claim 3 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1, which includes an abstract idea (see rejection for claim 1).
The claim recites the additional limitation:

Step 2A Prong 2: “wherein the characteristic of the object is an object class” — This limitation is directed to a field of use (see MPEP 2106.05(h)), as it merely limits the field of the characteristic of the object. Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d) I.), failing Step 2A Prong 2.

Step 2B: “wherein the characteristic of the object is an object class” — Limiting a judicial exception to a particular field of use (see MPEP 2106.05(h)) cannot provide significantly more than the judicial exception itself. Thus, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B.

Regarding Claim 4:

Claim 4 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 3, which includes an abstract idea (see rejection for claim 3). The claim recites the additional limitation:

Step 2A Prong 2: “the object class is one of a pedestrian, a cyclist, a car, a truck, a motorbike, a bicycle, a wheelchair, an animal, or an object that is stationary relative to other objects of the scene” — This limitation is directed to a field of use (see MPEP 2106.05(h)), as it merely limits the field of the object class. Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d) I.), failing Step 2A Prong 2.

Step 2B: “the object class is one of a pedestrian, a cyclist, a car, a truck, a motorbike, a bicycle, a wheelchair, an animal, or an object that is stationary relative to other objects of the scene” — Limiting a judicial exception to a particular field of use (see MPEP 2106.05(h)) cannot provide significantly more than the judicial exception itself.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B.

Regarding Claim 5:

Claim 5 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 3, which includes an abstract idea (see rejection for claim 3). The claim merely recites the additional abstract ideas:

Step 2A Prong 1: “wherein determining the level of inconsistency between the plurality of predictions comprises: determining a number of times that the object class of the object has changed in the plurality of predictions” — This limitation is directed to the abstract idea of a mental process (see MPEP 2106.04(a)(2) III. C.), because it amounts to observing a number of times that a prediction has changed.

“and wherein determining that the level of inconsistency exceeds a threshold level comprises: determining that the number of times that the object class of the object has changed in the plurality of predictions exceeds a threshold number of times” — This limitation is directed to the abstract idea of a mental process (see MPEP 2106.04(a)(2) III. C.), because it amounts to an evaluation of a given value to see if it is greater than another value.

Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d) I.), failing Step 2A Prong 2. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B.

Regarding Claim 6:

Claim 6 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1, which includes an abstract idea (see rejection for claim 1). The claim recites the additional limitations:

Step 2A Prong 2: “wherein the characteristic of the object is a heading direction of a bounding box of the object” — This limitation is directed to a field of use (see MPEP 2106.05(h)), as it merely limits the field of the characteristic of the object. “wherein the bounding box is a rectangle with vertical and horizontal sides surrounding the object” — This limitation is directed to a field of use (see MPEP 2106.05(h)), as it merely limits the field of the bounding box. Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d) I.), failing Step 2A Prong 2.

Step 2B: “wherein the characteristic of the object is a heading direction of a bounding box of the object” — Limiting a judicial exception to a particular field of use (see MPEP 2106.05(h)) cannot provide significantly more than the judicial exception itself. “wherein the bounding box is a rectangle with vertical and horizontal sides surrounding the object” — Limiting a judicial exception to a particular field of use (see MPEP 2106.05(h)) cannot provide significantly more than the judicial exception itself. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B.

Regarding Claim 7:

Claim 7 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 6, which includes an abstract idea (see rejection for claim 6).
The claim recites the additional abstract ideas:

Step 2A Prong 1: “wherein determining the level of inconsistency between the plurality of predictions comprises: determining a number of times that the heading direction of the bounding box has changed more than a threshold angle in the plurality of predictions” — This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgement, opinion) which can be performed by the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.), because it amounts to observing a number of times that a prediction has changed.

“and wherein determining that the level of inconsistency exceeds a threshold level comprises: determining that the number of times that the heading direction of the bounding box has changed more than the threshold angle in the plurality of predictions exceeds a threshold number of times” — This limitation is directed to the abstract idea of a mental process (see MPEP 2106.04(a)(2) III. C.), because it amounts to an evaluation of a given value to see if it is greater than another value.

Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d) I.), failing Step 2A Prong 2. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B.

Regarding Claim 8:

Claim 8 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 7, which includes an abstract idea (see rejection for claim 7). The claim recites the additional limitation:

Step 2A Prong 2: “wherein the threshold angle is 90 degree” — This limitation is directed to a field of use (see MPEP 2106.05(h)), as it merely limits the field of the threshold angle. Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d) I.), failing Step 2A Prong 2.

Step 2B: “wherein the threshold angle is 90 degree” — Limiting a judicial exception to a particular field of use (see MPEP 2106.05(h)) cannot provide significantly more than the judicial exception itself. Thus, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B.

Regarding Claim 9:

Claim 9 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1, which includes an abstract idea (see rejection for claim 1). The claim recites the additional limitations:

Step 2A Prong 2: “wherein the characteristic of the object is a size of the object” — This limitation is directed to a field of use (see MPEP 2106.05(h)), as it merely limits the field of the characteristic of the object. “wherein the size of the object includes at least one of a width or a length of the object” — This limitation is directed to a field of use (see MPEP 2106.05(h)), as it merely limits the size of the object. Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d) I.), failing Step 2A Prong 2.
Step 2B: “wherein the characteristic of the object is a size of the object” — Limiting a judicial exception to a particular field of use (see MPEP 2106.05(h)) cannot provide significantly more than the judicial exception itself. “wherein the size of the object includes at least one of a width or a length of the object” — Limiting a judicial exception to a particular field of use (see MPEP 2106.05(h)) cannot provide significantly more than the judicial exception itself. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B.

Regarding Claim 10:

Claim 10 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 9, which includes an abstract idea (see rejection for claim 9). The claim merely recites the additional abstract ideas:

Step 2A Prong 1: “wherein determining the level of inconsistency between the plurality of predictions comprises: determining a number of times that at least one of a width or a length of the object has changed more than a threshold distance in the plurality of predictions” — This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgement, opinion) which can be performed by the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.).

“and wherein determining that the level of inconsistency exceeds a threshold level comprises: determining that the number of times that at least one of a width or a length of the object has changed more than the threshold distance in the plurality of predictions exceeds a threshold number of times” — This limitation is directed to the abstract idea of a mental process (see MPEP 2106.04(a)(2) III. C.).

Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d) I.), failing Step 2A Prong 2. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B.

Regarding Claim 11:

Claim 11 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 10, which includes an abstract idea (see rejection for claim 10). The claim recites the additional limitation:

Step 2A Prong 2: “wherein the threshold distance is 1 meter” — This limitation is directed to a field of use (see MPEP 2106.05(h)), as it merely limits the field of the threshold distance. Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d) I.), failing Step 2A Prong 2.

Step 2B: “wherein the threshold distance is 1 meter” — Limiting a judicial exception to a particular field of use (see MPEP 2106.05(h)) cannot provide significantly more than the judicial exception itself. Thus, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B.

Regarding Claim 12:

Claim 12 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The claim is dependent on claim 1 which included an abstract idea (see rejection for claim 1). The claim recites the additional limitation: Step 2A Prong 2: wherein the plurality of sensor data inputs are captured by one or more camera sensors of an autonomous vehicle — This limitation is directed to mere data gathering and outputting which has been recognized by the courts (as per Ultramercial, 772 F.3d at 715, 112 USPQ2d at 1754) as insignificant extra-solution activity (see MPEP 2106.05(g)). Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d) I.), failing step 2A prong 2. Step 2B: The additional elements as identified in step 2A prong 2: wherein the plurality of sensor data inputs are captured by one or more camera sensors of an autonomous vehicle — This limitation is recited at a high level of generality and amounts to mere data gathering, which is well-understood, routine, and conventional activity (see MPEP 2106.05(d) II.) which cannot amount to significantly more than the judicial exception. Thus, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under step 2B. Regarding Claim 14: Claim 14 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1 which included an abstract idea (see rejection for claim 1).
The claim merely recites the additional abstract idea: Step 2A Prong 1: wherein the respective prediction assigns a score to each object category of a set of object categories, with each score representing an estimated likelihood that the object of the scene depicted in the particular sensor data input belonging to the respective object category — This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgement, opinion) which can be performed by the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.). Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d) I.), failing step 2A prong 2. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under step 2B. Regarding Claim 17: Claim 17 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 14 which included an abstract idea (see rejection for claim 14). The claim merely recites the additional abstract ideas: Step 2A Prong 1: wherein determining the level of inconsistency between the plurality of predictions comprises: for each object category of the set of object categories: determining a maximum score for the object category among the scores assigned to the object category by the plurality of predictions — This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgement, opinion) which can be performed by the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.). 
determining a minimum score for the object category among the scores assigned to the object category by the plurality of predictions — This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgement, opinion) which can be performed by the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.). and calculating a difference between the maximum score and the minimum score for the object category — This limitation is directed to the abstract idea of a mathematical concept, and a mathematical calculation in particular (see MPEP 2106.04(a)(2) I. C.). The claim describes, in words, the mathematical calculation of a difference (e.g. subtraction, dot product, or another measure of difference). and wherein determining that the level of inconsistency exceeds a threshold level comprises: determining that the difference determined for at least one object category exceeds a threshold amount of difference — This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgement, opinion) which can be performed by the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.). Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d) I.), failing step 2A prong 2. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under step 2B. Regarding Claim 18: Claim 18 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 14 which included an abstract idea (see rejection for claim 14).
The claim merely recites the additional abstract ideas: Step 2A prong 1: wherein determining the level of inconsistency between the plurality of predictions comprises: for each object category of the set of object categories: determining a variance of the scores assigned to the object category by the plurality of predictions — This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgement, opinion) which can be performed by the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.). and wherein determining that the level of inconsistency exceeds a threshold level comprises: determining that the variance of at least one object category exceeds a threshold variance — This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgement, opinion) which can be performed by the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.). Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d) I.), failing step 2A prong 2. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under step 2B. Regarding Claim 20: Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1 which included an abstract idea (see rejection for claim 1). The claim recites the additional limitation: Step 2A Prong 2: wherein the machine learning task is one of an image classification task or an object detection task — This limitation is directed to the field of use (see MPEP 2106.05(h)) as it merely limits the field of the machine learning task. 
Step 2B: wherein the machine learning task is one of an image classification task or an object detection task — Limiting a judicial exception to a particular field of use (see MPEP 2106.05(h)) cannot provide significantly more than the judicial exception itself. Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d) I.), failing step 2A prong 2. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under step 2B. Regarding Claim 21 Independent claim 21 is a computer storage medium claim corresponding to method claim 1, which was directed to an abstract idea, therefore the same rejection and rationale applies. The only difference is that claim 21 recites the following additional elements treated under step 2A prong 2 and step 2B: Step 2A Prong 2: One or more non-transitory computer storage media encoded with instructions that, when executed by one or more computers, cause the one or more computers to perform operations for — This limitation is directed to merely applying an abstract idea using a generic computer as a tool (see MPEP 2106.05(f)(2), 2106.04(d)). Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d) I.), failing step 2A prong 2. Step 2B: One or more non-transitory computer storage media encoded with instructions that, when executed by one or more computers, cause the one or more computers to perform operations for — Using a generic computer as a tool (see MPEP 2106.05(f)(2), 2106.05(d)) cannot amount to significantly more than the judicial exception itself. Thus, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under step 2B. Regarding Claim 22 Independent claim 22 is a computer system claim corresponding to method claim 1, which was directed to an abstract idea, therefore the same rejection and rationale applies. 
The only difference is that claim 22 recites the following additional elements treated under step 2A prong 2 and step 2B: Step 2A Prong 2: A system comprising one or more computers and one or more non-transitory computer storage media encoded with instructions that, when executed by the one or more computers, cause the one or more computers to perform operations for — This limitation is directed to merely applying an abstract idea using a generic computer as a tool (see MPEP 2106.05(f)(2), 2106.04(d)). Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d) I.), failing step 2A prong 2. Step 2B: A system comprising one or more computers and one or more non-transitory computer storage media encoded with instructions that, when executed by the one or more computers, cause the one or more computers to perform operations for — Using a generic computer as a tool (see MPEP 2106.05(f)(2), 2106.05(d)) cannot amount to significantly more than the judicial exception itself. Thus, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under step 2B. Regarding Claim 23 Claim 23 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1 which included an abstract idea (see rejection for claim 1). The claim recites the additional limitations: Step 2A Prong 2: further comprising after training, deploying the task neural network on-board the autonomous vehicle — This limitation is directed to merely limiting a judicial exception to a particular field of use (see MPEP 2106.05(h)) as it merely limits the judicial exception to the technological environment of autonomous vehicles. Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d) I.), failing step 2A prong 2.
Step 2B: The additional elements as identified in step 2A prong 2: further comprising after training, deploying the task neural network on-board the autonomous vehicle — Merely limiting a judicial exception to a particular field of use (see MPEP 2106.05(h)) cannot amount to significantly more than the judicial exception. Thus, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under step 2B. Regarding Claim 24 Claim 24 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1 which included an abstract idea (see rejection for claim 1). The claim recites the additional limitations: Step 2A Prong 1: wherein determining a level of inconsistency between the plurality of predictions about the characteristic of the object of the scene comprises determining a number of times that a value assigned to the characteristic has changed in the plurality of predictions and wherein determining that the level of inconsistency exceeds a threshold level comprises: determining that the value assigned to the characteristic has changed more than a threshold number of times — This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgement, opinion) which can be performed by the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.). The limitation is directed to a mental process because it amounts to evaluating the differences between predictions made. Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d) I.), failing step 2A prong 2. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under step 2B. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-5, 12, and 20-22 are rejected under 35 U.S.C. 103 as being unpatentable over NPL reference Raghunathan et al. “Scalable-effort Classifiers for Energy-efficient Machine Learning” in view of NPL reference Mordan et al. “Detecting 32 Pedestrian Attributes for Autonomous Vehicles” herein referred to as Mordan. Regarding Claim 1 Raghunathan teaches: determining a level of inconsistency between the plurality of predictions about the characteristic of the object of the scene generated from the plurality of sensor data inputs each taken at a corresponding one of the plurality of different times; (page 3 section 3.1) First, we consider the case of a binary classifier with two possible class outcomes + and -. Fig. 4(a) shows the block diagram of a classifier stage. In such a scenario, each stage is composed of two biased classifiers, which are trained to detect one particular class with high accuracy. For instance, if a classifier is biased towards class + (denoted by C+), it frequently mispredicts inputs from class - but seldom from class +. Besides the biased classifiers, the stage also contains a consensus module, whose functionality is shown in Fig. 4(a). 
The consensus module utilizes the output of the biased classifiers to determine if the input should get classified in the current stage or passed on to the next stage[*Examiner notes: determining a level of inconsistency]. This decision is based on the following two criteria: 1. If the biased classifiers predict the same class i.e., ++ or - -, then the corresponding label i.e., + or - is produced as output. 2. If the biased classifiers produce no consensus (NC) i.e., +- or -+, the input is deemed to be difficult to classify by the stage and is passed along to the next stage. determining that the level of inconsistency exceeds a threshold level (page 3 column 2) “As shown in Eq. 2, the consensus operation contains a parameter called the consensus threshold (denoted by δ) that defines the degree to which the biased classifiers should agree (or contradict) for the input to be classified (or passed on) by the stage.”; Equation 2 based on determining that the level of inconsistency exceeds a threshold level, adding one or more of the plurality of sensor data inputs to a training dataset (page 3 section 3.1) “2. If the biased classifiers produce no consensus (NC) i.e., +- or -+, the input is deemed to be difficult to classify by the stage and is passed along to the next stage.”; (page 3 column 2 last paragraph) “In this case, we observe that modulating δ grows or shrinks the region separating the easy vs. hard inputs, resulting in the stage classifying a correspondingly smaller or larger fraction of inputs.”; Figure 1(b); [*Examiner notes: The model evaluates the level of agreement (inconsistency) and if the results disagree beyond a threshold, then the input is determined to be a hard example.]; (page 4 column 2 above section 4.2) “For any instance, if global consensus is achieved (line 12), we remove it from Dtr for subsequent stages and increment ΔIstg by one (line 13).
If not, we add a fractional value to ΔIstg, which is proportional to the number of classes eliminated from consideration by the stage (line 15).”; [*Examiner notes: The broadest reasonable interpretation of adding hard examples to a training dataset includes removing all examples which are determined to NOT be hard examples. The resulting training dataset is the same.] and training the task neural network on the machine learning task using the training dataset that includes the one or more of the plurality of sensor data inputs (page 4 algorithm 1) [Image: Raghunathan, Algorithm 1] Mordan teaches: A method for determining hard example sensor data inputs for training a task neural network of an autonomous vehicle (page 1 abstract) “Pedestrians are arguably one of the most safety-critical road users to consider for autonomous vehicles in urban areas. In this paper, we address the problem of jointly detecting pedestrians and recognizing 32 pedestrian attributes.”; (page 1 column 1 paragraph 1) “Although autonomous vehicles have already demonstrated successful autonomy on highways [4], [14], [31], urban areas and cities remain a challenge due to a higher degree of diversity in situations and actors”; (page 7 column 1 paragraph 2) “This yields final feature maps of size 121 × 47 neurons with our network[*Examiner notes: neural network].” wherein the task neural network is configured to receive a sensor data input and to generate a respective output for the sensor data input to perform a machine learning task, (page 1 abstract) “For this, we introduce a Multi-Task Learning (MTL) model relying on a composite field framework, which achieves both goals in an efficient way.
Each field spatially locates pedestrian instances and aggregates attribute predictions over them.” the method comprising: receiving a plurality of sensor data inputs depicting a same scene of an environment wherein each of the plurality of sensor data inputs are taken at a corresponding time of a plurality of different times during a predetermined time period; (page 6 column 2 paragraph 1) “We consider the default split, composed of 40,530 images (177 videos[*Examiner notes: plurality of sensor data inputs depicting a same scene]) for training, 7,170 images (29 videos) for validation, and 27,912 images (117 videos) for testing.”; [*Examiner notes: Video data is a sequence of frames representing a predetermined time period, where each frame image is taken at a corresponding time of a plurality of different times] processing the plurality of sensor data inputs using a single trained prediction model to generate a plurality of predictions about a characteristic of an object of the scene captured in the plurality of sensor data inputs, wherein each of the plurality of predictions is a prediction about the characteristic of the object of the scene at a corresponding one of the plurality of different times; the plurality of predictions about the characteristic of the object of the scene generated by the single trained prediction model (page 8 column 2 last paragraph) “Our approach either outperforms or performs on par with state-of-the-art methods that use ground-truth detections and/or videos[*Examiner notes: plurality of sensor data inputs], while still learning multiple additional attributes”; (page 6 column 2 second to last paragraph) “Our model is based on a ResNet-50 backbone [24], with single 1 × 1 sub-pixel convolution layers [59] as task-specific predictors[*Examiner notes: single trained prediction model].”; (page 7 column 2 paragraph 1) “Image-wise predictions for crossing forecasting are obtained from the detection with highest confidence on this attribute.”
[*Examiner notes: Figure 2A shown below is the single trained prediction model used to generate predictions about each of the plurality of sensor data inputs, which are frames of videos (predictions about the characteristic of the object at times of the plurality of times).] [Image: Mordan, Figure 2A] Raghunathan, Mordan, and the instant application are analogous because they are all directed to machine learning. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to modify the inconsistency measurements and training of Raghunathan with the sensor data inputs and predictions taught by Mordan because (Mordan page 1 abstract) “Experimental validation is performed on JAAD, a dataset providing numerous attributes for pedestrian analysis from autonomous vehicles, and shows competitive detection and attribute recognition results, as well as a more stable MTL training” Regarding Claim 2: Raghunathan in view of Mordan teaches: The method of claim 1 (see rejection of claim 1) And Mordan further teaches: and wherein generating the plurality of predictions about the characteristic of the object of the scene includes: for each of the plurality of sensor data inputs, generating a respective prediction using the single trained prediction model that is a classifier neural network (page 6 column 2 last paragraph) “Our model is based on a ResNet-50 backbone [24], with single 1 × 1 sub-pixel convolution layers [59] as task-specific predictors. We use pre-trained weights from PifPaf [30] since it uses a similar framework and is trained on humans specifically.
The loss functions are (binary) focal cross-entropy [35] for (binary) classification tasks[*Examiner notes: classifier network], and L1 for regression ones (continuous scalar and vectorial attributes)” It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to combine Raghunathan with Mordan for the same reasons given in claim 1 above. Regarding Claim 3 Raghunathan in view of Mordan teaches: The method of claim 1 (see rejection of claim 1) And Mordan further teaches: wherein the characteristic of the object is an object class. (page 6 column 2 section d) “Appearance attributes (A = 19): • binary: ‘gender’, ‘backpack’, ‘bag at elbow’, ‘bag at hand’, ‘bag on left side’, ‘bag on right side’, ‘bag on shoulder’, ‘cap’, ‘clothes below knee’, ‘dark lower clothes’, ‘dark upper clothes’, ‘light lower clothes’, ‘light upper clothes’, ‘hood’, ‘object’, ‘phone’, ‘stroller cart’ and ‘sunglasses’” It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to combine Raghunathan with Mordan for the same reasons given in claim 1 above. Regarding Claim 4 Raghunathan in view of Mordan teaches: The method of claim 3 (see rejection of claim 3) Mordan further teaches: wherein the object class is one of a pedestrian, a cyclist, a car, a truck, a motorbike, a bicycle, a wheelchair, an animal, or an object that is stationary relative to other objects of the scene. 
(page 6 column 2 section d) “Appearance attributes (A = 19): • binary: ‘gender’, ‘backpack’, ‘bag at elbow’, ‘bag at hand’, ‘bag on left side’, ‘bag on right side’, ‘bag on shoulder’, ‘cap’, ‘clothes below knee’, ‘dark lower clothes’, ‘dark upper clothes’, ‘light lower clothes’, ‘light upper clothes’, ‘hood’, ‘object’, ‘phone’, ‘stroller cart’ and ‘sunglasses’” [*Examiner notes: The broadest reasonable interpretation of a “wheelchair” includes a stroller as taught by Mordan] It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to combine Raghunathan with Mordan for the same reasons given in claim 1 above. Regarding Claim 5 Raghunathan in view of Mordan teaches: The method of claim 3 (see rejection of claim 3) And Raghunathan further teaches: wherein determining the level of inconsistency between the plurality of predictions comprises: determining a number of times that the object class of the object has changed in the plurality of predictions (page 3 column 1 section 3.1) “1. If the biased classifiers predict the same class i.e., ++ or - -, then the corresponding label i.e., + or - is produced as output. 2. If the biased classifiers produce no consensus (NC) i.e., +- or -+[*Examiner notes: determining a number of times the object class has changed], the input is deemed to be difficult to classify by the stage and is passed along to the next stage.” and wherein determining that the level of inconsistency exceeds a threshold level comprises: determining that the number of times that the object class of the object has changed in the plurality of predictions exceeds a threshold number of times. (page 3 column 2 last paragraph) “As shown in Eq.
2, the consensus operation contains a parameter called the consensus threshold (denoted by δ) that defines the degree to which the biased classifiers should agree (or contradict) for the input to be classified (or passed on) by the stage[*Examiner notes: determining that the number of times that the object class of the object has changed a threshold number of times]. A positive δ makes the consensus operation more stringent, i.e., an input is classified by the stage only if the biased classifiers agree on their decisions and their respective confidence measures are greater than δ. For a positive δ, fewer inputs will be classified by the stage, but the accuracy of its classifications is improved. On the other hand, a negative δ relaxes the consensus threshold, as an input is classified by the stage even if the biased classifiers disagree, provided their confidence in the contradictory predictions is lower than δ.” Regarding Claim 12 Raghunathan in view of Mordan teaches: The method of claim 1 (see rejection of claim 1) Mordan further teaches: wherein the plurality of sensor data inputs are captured by one or more camera sensors of an autonomous vehicle. (page 1 abstract) “Experimental validation is performed on JAAD, a dataset providing numerous attributes for pedestrian analysis from autonomous vehicles”; (page 6 column 2 paragraph 1) “We consider the default split, composed of 40,530 images (177 videos)[*Examiner notes: camera sensors] for training, 7,170 images (29 videos) for validation, and 27,912 images (117 videos) for testing.” It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to combine Raghunathan and Mordan for the same reasons given in claim 1 above.
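As a plain illustration of the count-based inconsistency test mapped above for claim 5 (counting how often the predicted object class changes across a sequence of predictions and comparing that count to a threshold), consider the following sketch. The function names and the threshold of two changes are illustrative assumptions, not language drawn from the claims or the cited references.

```python
def count_class_changes(predicted_classes):
    """Count how many times the predicted class differs between consecutive predictions."""
    return sum(
        1 for prev, curr in zip(predicted_classes, predicted_classes[1:])
        if prev != curr
    )

def is_hard_example(predicted_classes, threshold_changes=2):
    # Flag the input sequence as a hard example when the predicted class
    # flips more often than the (illustrative) threshold number of times.
    return count_class_changes(predicted_classes) > threshold_changes

# A label that flips repeatedly across frames is inconsistent, hence "hard":
print(is_hard_example(["car", "truck", "car", "truck", "car"]))  # True
print(is_hard_example(["car", "car", "car", "car"]))             # False
```

A sequence flagged this way would then be added to the training dataset, per the claim 1 mapping above.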
Regarding Claim 20 Raghunathan in view of Mordan teaches: The method of claim 1 (see rejection of claim 1) Raghunathan further teaches: wherein the machine learning task is one of an image classification task or an object detection task (page 2 column 1 paragraph 2) “We quantify this intuition for the popular MNIST handwriting recognition dataset[*Examiner notes: image classification task]” Regarding Claim 21 Claim 21 is a computer storage media claim corresponding to method claim 1. The only difference is that claim 21 recites one or more non-transitory computer storage media with a computer: Raghunathan teaches: one or more non-transitory computer storage media encoded with instructions that, when executed by one or more computers, cause the one or more computers to perform operations for (page 5 column 1 last paragraph) “We measured runtime for the applications using performance counters on a commodity Intel Core i5 notebook with a 2.5 GHz processor and 8 GB of RAM.” The remaining limitations of the claim are taught by the rejection of claim 1. Regarding Claim 22 Claim 22 is a computer system claim corresponding to method claim 1. The only difference is that claim 22 recites a computer with one or more non-transitory computer storage media: Raghunathan teaches: A system comprising one or more computers and one or more non-transitory computer storage media encoded with instructions that, when executed by the one or more computers, cause the one or more computers to perform operations for (page 5 column 1 last paragraph) “We measured runtime for the applications using performance counters on a commodity Intel Core i5 notebook with a 2.5 GHz processor and 8 GB of RAM.” The remaining limitations of the claim are taught by the rejection of claim 1. Claims 6 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Raghunathan in view of Mordan, and further in view of NPL reference Liu et al.
“Learning a Rotation Invariant Detector with Rotatable Bounding Box” herein referred to as Liu. Regarding Claim 6 Raghunathan in view of Mordan teaches: The method of claim 1 (see rejection of claim 1) Raghunathan in view of Mordan does not explicitly teach: wherein the characteristic of the object is a heading direction of a bounding box of the object, wherein the bounding box is a rectangle with vertical and horizontal sides surrounding the object. However, Liu teaches: wherein the characteristic of the object is a heading direction of a bounding box of the object (page 1 column 1 section 1 “Introduction” line 7) “This article discusses how to design and train a rotation invariant detector by introducing the rotatable bounding box (RBox)[*Examiner notes: mapped to bounding box of the object].”; (page 2 column 2 last paragraph) “RBox is a rectangle with a angle parameter to define its orientation[*Examiner notes: mapped to heading direction].” wherein the bounding box is a rectangle with vertical and horizontal sides surrounding the object (page 2 column 2 last paragraph) “RBox is a rectangle with a angle parameter to define its orientation.”; (page 3 column 1 line 1) “Compared with BBox, RBox surrounds the outline of the target object more tightly”; Table 1 row 2 column 2 image; [*Examiner notes: The image from table 1 shows the rectangle bounding box with vertical and horizontal sides surrounding the object with a heading angle.] [Image: Liu, Table 1, row 2 column 2] Raghunathan, Mordan, Liu, and the present application are analogous because they are all directed towards machine learning.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to modify the task neural network as taught by Raghunathan in view of Mordan by using the heading direction of a bounding box as taught by Liu because (Liu page 3 table 1) “The width and height of RBox reflect the physical size of the object, which is helpful for customized designing of the prior boxes” and (Liu page 3 table 1) “RBox contains less background pixels than BBox does, so classification between object and background is easier” and (Liu page 3 table 1) “RBox can efficiently separate dense objects with no overlapped areas between nearby targets.” Regarding Claim 9 Raghunathan in view of Mordan teaches: The method of claim 1 (see rejection of claim 1) However Liu teaches: wherein the characteristic of the object is a size of the object, wherein the size of the object includes at least one of a width or a length of the object. (page 3 table 1 row 2 column 1) “The width and height of RBox reflect the physical size of the object” Raghunathan, Mordan, Liu, and the present application are analogous because they are all directed towards machine learning. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to modify the task neural network as taught by Raghunathan in view of Mordan by using the object width as taught by Liu because (Liu page 3 table 1) “The width and height of RBox reflect the physical size of the object, which is helpful for customized designing of the prior boxes.” Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Raghunathan in view of Mordan, and further in view of NPL reference 3Blue1Brown “But what is a neural network? | Chapter 1, Deep learning” herein referred to as Sanderson.
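The size-based inconsistency test recited in claims 9–11, analyzed under §101 earlier in this action (counting the predictions in which the object's width or length changes by more than a threshold distance, 1 meter in claim 11, and flagging the input when that count exceeds a threshold number of times), can be sketched as follows. The helper names, the example box dimensions, and the two-change threshold are illustrative assumptions.

```python
def count_large_size_changes(boxes, threshold_distance=1.0):
    """Count consecutive predictions where the object's width or length
    changed by more than threshold_distance (1.0 m per claim 11)."""
    changes = 0
    for (w0, l0), (w1, l1) in zip(boxes, boxes[1:]):
        if abs(w1 - w0) > threshold_distance or abs(l1 - l0) > threshold_distance:
            changes += 1
    return changes

def size_inconsistent(boxes, threshold_distance=1.0, threshold_times=2):
    # Flag the sequence when large size changes occur more than the
    # (illustrative) threshold number of times.
    return count_large_size_changes(boxes, threshold_distance) > threshold_times

# Width estimates that jump by ~1.5 m between frames trip the check:
print(size_inconsistent([(2.0, 4.5), (3.5, 4.5), (2.0, 4.5), (3.6, 4.4)]))  # True
print(size_inconsistent([(2.0, 4.5), (2.1, 4.5), (2.0, 4.6)]))              # False
```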
Regarding Claim 14

Raghunathan in view of Mordan teaches: The method of claim 1 (see rejection of claim 1).

Raghunathan in view of Mordan does not explicitly teach: wherein the respective prediction assigns a score to each object category of a set of object categories, with each score representing an estimated likelihood that the object of the scene depicted in the particular sensor data input belonging to the respective object category.

However, Sanderson teaches: wherein the respective prediction assigns a score to each object category of a set of object categories, with each score representing an estimated likelihood that the object of the scene depicted in the particular sensor data input belonging to the respective object category.
(timestamp 3:46) “Now jumping over to the last layer [Examiner notes: corresponds to respective prediction], this has 10 neurons, each representing one of the digits [Examiner notes: corresponds to each object category of the object categories]. The activation in these neurons, again some number that's between 0 and 1, represents how much the system thinks that a given image corresponds with a given digit [Examiner notes: mapped to assigns a score].”

[Examiner notes: the term “likelihood” may refer to any measure of the belief that a statement is true or an event will happen.]

[Image: media_image4.png, screen capture at timestamp 3:56]

Raghunathan, Mordan, Sanderson, and the present application are analogous because they are all directed towards machine learning.
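For illustration only (not from any cited reference; `softmax_scores` and the category names are hypothetical), the scoring scheme the examiner maps from Sanderson, one score in (0, 1) per object category readable as an estimated likelihood, is conventionally produced with a softmax output layer:

```python
import math

def softmax_scores(logits, categories):
    """Map raw network outputs (logits) to one score per object category.
    Each score lies in (0, 1) and the scores sum to 1, so each can be read
    as an estimated likelihood that the input belongs to that category."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]   # subtract max for stability
    total = sum(exps)
    return {cat: e / total for cat, e in zip(categories, exps)}

scores = softmax_scores([2.0, 1.0, 0.1], ["pedestrian", "vehicle", "cyclist"])
# The "brightest neuron": the category with the highest score is the choice.
best = max(scores, key=scores.get)
print(scores, best)
```

Sanderson's "brightest neuron" selection corresponds to the final `max` over the per-category scores.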
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to modify the classifier neural network of Raghunathan in view of Mordan with the likelihood that objects belong to respective categories as taught by Sanderson because (Sanderson, timestamp 5:22) “And the brightest neuron of that output layer is the network's choice, so to speak, for what digit this image represents.” That is, using likelihoods in the output layer allows selection of the class label by identifying the neuron with the highest likelihood (the “brightest neuron”).

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Raghunathan in view of Mordan and Sanderson, and further in view of NPL reference Black et al., “Leave-one-out Unfairness,” herein referred to as Black.

Regarding Claim 18

Raghunathan in view of Mordan and Sanderson teaches: The method of claim 14 (see rejection of claim 14).

Raghunathan in view of Mordan and Sanderson does not teach: wherein determining the level of inconsistency between the plurality of predictions comprises: for each object category of the set of object categories: determining a variance of the scores assigned to the object category by the plurality of predictions; wherein determining that the level of inconsistency exceeds a threshold level comprises: determining that the variance of at least one object category exceeds a threshold variance.

However, Black teaches: wherein determining the level of inconsistency between the plurality of predictions comprises: for each object category of the set of object categories: determining a variance of the scores assigned to the object category by the plurality of predictions;
(page 4, column 2, paragraph 2) “Definition 2 (Leave-one-out Unfairness (LUF)). Let D be the distribution from which the training set S is drawn, and let x be in the support of D.
We define the leave-one-out unfairness (LUF) experienced by x under learning rule h and training set S ∼ D to be: LUF(h, S, x) = max_{i,k} |Pr[h_S(x) = k] − Pr[h_{S(\i)}(x) = k]| […] In other words, given a learning rule h and a training set S, the LUF experienced by a person x is the worst-case probability [Examiner notes: mapped to score assigned to the object category] that x receives a different prediction in a model trained with h on S, and one trained with h on S with a single point removed [Examiner notes: mapped to a plurality of predictions].”

[Examiner notes: Pr[h_S(x) = k] and Pr[h_{S(\i)}(x) = k] denote the probability that classifiers h_S and h_{S(\i)}, respectively, classify the point x as belonging to object category k. The difference |Pr[h_S(x) = k] − Pr[h_{S(\i)}(x) = k]| can be interpreted as a variance of scores assigned to the object category by the plurality of predictions. The maximum is taken over all i and all k, and thus the level of inconsistency is determined for each object category.]

wherein determining that the level of inconsistency exceeds a threshold level comprises: determining that the variance of at least one object category exceeds a threshold variance
(page 5, paragraph 3, “Proposition 3.1”) “Then there exists a training set S such that E_x[LUF(h, S, x)] > ε_stable(m) […]”

[Examiner notes: the inequality above states that for some training set, the expected value of the leave-one-out unfairness exceeds the threshold ε_stable(m). This includes the variance of at least one object category.]

Raghunathan, Mordan, Sanderson, Black, and the present application are analogous because they are all directed towards machine learning.
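The claim-18 criterion as characterized above, per-category variance of scores across a plurality of predictions, flagged when any category's variance exceeds a threshold, can be sketched as follows. This is an illustration of the claimed test only, not code from Black or the application; `is_hard_example` and the scores are hypothetical:

```python
from statistics import pvariance

def is_hard_example(predictions, threshold_variance):
    """predictions: a list of dicts mapping each object category to a score,
    one dict per prediction (e.g., from differently trained model replicas).
    The example is 'hard' if the variance of the scores assigned to at least
    one category across the plurality of predictions exceeds the threshold."""
    categories = predictions[0].keys()
    variances = {cat: pvariance([p[cat] for p in predictions])
                 for cat in categories}
    return any(v > threshold_variance for v in variances.values()), variances

# Three predictions that disagree sharply on "vehicle" but agree on "cyclist":
preds = [
    {"vehicle": 0.9, "cyclist": 0.1},
    {"vehicle": 0.2, "cyclist": 0.1},
    {"vehicle": 0.5, "cyclist": 0.1},
]
hard, per_category_variance = is_hard_example(preds, threshold_variance=0.05)
print(hard, per_category_variance)
```

Here the disagreement on "vehicle" alone is enough to flag the input as a hard example, matching the "at least one object category" phrasing of the claim.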
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to modify the hard example mining of Raghunathan in view of Mordan and Sanderson with the inconsistency of the variance and threshold variance as taught by Black because (Black, page 1, abstract, line 4) “Leave-one-out unfairness appeals to the idea that fair decisions are not arbitrary: they should not be based on the chance event of any one person's inclusion in the training data. Leave-one-out unfairness is closely related to algorithmic stability, but it focuses on the consistency of an individual point's prediction outcome over unit changes to the training data, rather than the error of the model in aggregate.”

Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Raghunathan in view of Mordan, and further in view of NPL reference Fremont et al., “Formal Scenario-Based Testing of Autonomous Vehicles: From Simulation to the Real World,” herein referred to as Fremont.

Regarding Claim 23

Raghunathan in view of Mordan teaches: The method of claim 1 (see rejection of claim 1).

Raghunathan in view of Mordan does not explicitly teach: further comprising, after training, deploying the task neural network on-board the autonomous vehicle.

However, Fremont teaches: further comprising, after training, deploying the task neural network on-board the autonomous vehicle.
(page 5, column 2, last paragraph) “The test vehicle is a 2018 Lincoln MKZ Hybrid (shown in Fig. 1) enhanced with Dataspeed drive-by-wire functionality and several sensors including a Velodyne VLS128 LiDAR, three Leopard Imaging AR023ZWDR USB cameras, and a Novatel PwrPak7 dual-antenna GPS/IMU with RTK correction for ~2 cm position accuracy. The tests were performed using the open-source Apollo 3.5 self-driving software [25] installed on an x86 Industrial PC with an NVIDIA GTX-1080 GPU.
Apollo’s perception processes data from the LiDAR sensor using GPU-accelerated deep neural networks to identify perceived obstacles.”

Raghunathan, Mordan, Fremont, and the instant application are analogous because they are all directed to machine learning.

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to modify the hard example mining of Raghunathan in view of Mordan with the model deployment of Fremont because (Fremont, page 1, column 1, first paragraph) “Experiments with a real autonomous vehicle at an industrial testing facility support our hypotheses that (i) formal simulation can be effective at identifying test cases to run on the track, and (ii) the gap between simulated and real worlds can be systematically evaluated and bridged.”

Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Raghunathan in view of Mordan, and further in view of NPL reference Zheng et al., “Improving the Robustness of Deep Neural Networks via Stability Training,” herein referred to as Zheng.

Regarding Claim 24

Raghunathan in view of Mordan teaches: The method of claim 1 (see rejection of claim 1).

Raghunathan in view of Mordan does not explicitly teach: wherein determining a level of inconsistency between the plurality of predictions about the characteristic of the object of the scene comprises determining a number of times that a value assigned to the characteristic has changed in the plurality of predictions; and wherein determining that the level of inconsistency exceeds a threshold level comprises: determining that the value assigned to the characteristic has changed more than a threshold number of times.

However, Zheng teaches: wherein determining a level of inconsistency between the plurality of predictions about the characteristic of the object of the scene comprises determining a number of times that a value assigned to the characteristic has changed in the plurality of predictions.
(page 4481, column 1, Figure 2 caption) “Visually similar video frames can confuse state-of-the-art classifiers: two neighboring frames are visually indistinguishable, but can lead to very different class predictions. The class score for ‘fox’ is significantly different for the left frame (27%) and right frame (63%)”

and wherein determining that the level of inconsistency exceeds a threshold level comprises: determining that the value assigned to the characteristic has changed more than a threshold number of times.
[Examiner notes: the broadest reasonable interpretation of a number of times that a value has changed includes determining an amount of change between two predictions.]
(page 4485) “To do so, we define the detection criterion as follows: given an image pair (a, b), we say that a, b are near-duplicates ⟺ ||f(a) − f(b)||₂ < T, (11) where T is the near-duplicate detection threshold.”

Raghunathan, Mordan, Zheng, and the instant application are analogous because they are all directed to neural networks.

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to modify the hard example mining of Raghunathan in view of Mordan with the inconsistency metric taught by Zheng because (Zheng, page 4480, abstract) “We validate our method by stabilizing the state-of-the-art Inception architecture [11] against these types of distortions. In addition, we demonstrate that our stabilized model gives robust state-of-the-art performance on large-scale near-duplicate detection, similar-image ranking, and classification on noisy datasets.”

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ezra J Baker, whose telephone number is (703) 756-1087. The examiner can normally be reached Monday through Friday, 10:00 am to 8:00 pm ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Yi, can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/E.J.B./
Examiner, Art Unit 2126

/DAVID YI/
Supervisory Patent Examiner, Art Unit 2126

Prosecution Timeline

Aug 10, 2021
Application Filed
Nov 18, 2024
Non-Final Rejection — §101, §103
Feb 11, 2025
Examiner Interview Summary
Feb 11, 2025
Applicant Interview (Telephonic)
Feb 20, 2025
Response Filed
Mar 25, 2025
Final Rejection — §101, §103
Jun 27, 2025
Examiner Interview Summary
Jun 27, 2025
Applicant Interview (Telephonic)
Jul 31, 2025
Request for Continued Examination
Aug 05, 2025
Response after Non-Final Action
Aug 26, 2025
Non-Final Rejection — §101, §103
Nov 18, 2025
Examiner Interview Summary
Nov 18, 2025
Applicant Interview (Telephonic)
Dec 29, 2025
Response Filed
Feb 05, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585964
EXHAUSTIVE LEARNING TECHNIQUES FOR MACHINE LEARNING ALGORITHMS
2y 5m to grant Granted Mar 24, 2026
Patent 12579477
FEATURE SELECTION USING FEEDBACK-ASSISTED OPTIMIZATION MODELS
2y 5m to grant Granted Mar 17, 2026
Patent 12505379
COMPUTER-READABLE RECORDING MEDIUM STORING MACHINE LEARNING PROGRAM, MACHINE LEARNING METHOD, AND INFORMATION PROCESSING DEVICE OF IMPROVING PERFORMANCE OF LEARNING SKIP IN TRAINING MACHINE LEARNING MODEL
2y 5m to grant Granted Dec 23, 2025
Patent 12373674
CODING OF AN EVENT IN AN ANALOG DATA FLOW WITH A FIRST EVENT DETECTION SPIKE AND A SECOND DELAYED SPIKE
2y 5m to grant Granted Jul 29, 2025
Study what changed to get past this examiner. Based on 4 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
50%
Grant Probability
99%
With Interview (+77.8%)
4y 3m
Median Time to Grant
High
PTA Risk
Based on 14 resolved cases by this examiner. Grant probability derived from career allow rate.
