Prosecution Insights
Last updated: April 19, 2026
Application No. 17/994,991

TRAINING A CLASSIFIER TO DETECT OPEN VEHICLE DOORS

Final Rejection: §101, §103
Filed: Nov 28, 2022
Examiner: KAPOOR, DEVAN
Art Unit: 2126
Tech Center: 2100 — Computer Architecture & Software
Assignee: Waymo LLC
OA Round: 2 (Final)
Grant Probability: 11% (At Risk)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: 28%

Examiner Intelligence

Career Allow Rate: 11% (grants only 11% of cases; 1 granted / 9 resolved; -43.9% vs TC avg)
Interview Lift: +16.7% among resolved cases with an interview
Avg Prosecution: 3y 3m (33 currently pending)
Total Applications: 42 across all art units

Statute-Specific Performance

§101: 38.1% (-1.9% vs TC avg)
§103: 43.9% (+3.9% vs TC avg)
§102: 10.8% (-29.2% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)

Tech Center averages are estimates; based on career data from 9 resolved cases.
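The per-statute deltas above are each consistent with a Tech Center average near 40%. A minimal sketch of how such deltas are derived (the data literals below are read off the figures above; the function name is hypothetical):

```python
# Illustrative sketch: each delta is the examiner's allowance rate minus the
# Tech Center average, in percentage points. The tc_avg_rates value of 0.400
# is inferred from the figures above, which all imply a TC average near 40%.
examiner_rates = {"101": 0.381, "103": 0.439, "102": 0.108, "112": 0.058}
tc_avg_rates = {statute: 0.400 for statute in examiner_rates}

def statute_deltas(examiner, tc_avg):
    """Examiner-minus-TC-average delta per statute, in percentage points."""
    return {s: round((examiner[s] - tc_avg[s]) * 100, 1) for s in examiner}

print(statute_deltas(examiner_rates, tc_avg_rates))
# {'101': -1.9, '103': 3.9, '102': -29.2, '112': -34.2}
```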

Office Action

DETAILED ACTION

This action is responsive to the application filed on 01/20/2026. Claims 1-5, 7-12, and 14-19 are pending and have been examined. This action is Final.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.

Response to Arguments

Argument 1: Applicant argues that the pending claims are not directed to an abstract idea under Step 2A, Prong 1, because the claimed limitations cannot be practically performed in the human mind and instead recite specific machine-learning operations applied to sensor data. Alternatively, under Step 2A, Prong 2, Applicant contends that even if a judicial exception is present, the claims integrate the exception into a practical application by improving autonomous vehicle technology and machine-learning systems, particularly in scenarios where labeled data is sparse and noisy (e.g., detecting rare “open door” events). Applicant emphasizes that the invention provides a technical improvement by generating additional valid training examples using temporal propagation logic constrained by vehicle identity and time thresholds, thereby improving classifier accuracy without requiring extensive manual labeling. Applicant further points to the added limitation of causing a vehicle to adjust its movement based on the generated open door score as evidence of a real-world technological application, and relies on USPTO guidance (2025 Memo) and case law to argue that the claims improve computer functionality and are therefore patent-eligible.
Examiner Response to Argument 1: The examiner has considered the argument above; however, it is not persuasive because, as shown in the rejection and supported by the mapping, claim 1 is still directed to generating a predicted likelihood (an open door score) and classifying data using a machine-learning model, which are mathematical operations and mental processes under Step 2A, Prong 1. The applicant’s amendments regarding additional training examples and temporal relationships do not change this, as they still involve collecting data, generating labels, and processing that data using a classifier, which are forms of data analysis rather than a technological improvement. Under Step 2A, Prong 2, the additional elements do not integrate the abstract idea into a practical application because the steps of receiving sensor data and defining labeled training examples are merely data gathering, and the use of a machine-learning classifier is a generic tool. Under Step 2B, the elements, individually and in combination, amount to well-understood, routine, and conventional activities, including collecting data, training a model, generating labels, and using classification outputs. Accordingly, consistent with current SME Guidance and the reasoning reflected in Ex parte Desjardins, the claim does not add significantly more than the abstract idea, and the rejection under 35 U.S.C. 101 is maintained.

Argument 2 (art): Applicant argues that the applied references, individually and in combination, fail to teach or suggest the key limitation requiring that each additional training example be generated based on (i) an additional sensor sample that characterizes the same vehicle as a labeled training example and (ii) that is captured within a threshold time relative to that labeled example.
Applicant specifically contends that Budvytis only discloses pixel-level or patch-level label propagation across frames and does not ensure that propagated samples correspond to the same vehicle or object instance, nor does it impose the claimed temporal constraint tied to a specific labeled sample. Applicant further asserts that the cited references do not disclose object-level propagation of labels based on both identity and temporal proximity, and therefore fail to teach the claimed method of generating additional training examples. Accordingly, Applicant maintains that the combination of Silver, Lee, Budvytis, and Zhu does not render the amended independent claims obvious.

Examiner Response to Argument 2: The examiner has considered the argument above; however, it is not persuasive because the rejection relies on the combined teachings of the applied references, which collectively disclose the claimed limitations. As set forth in the mapping, Budvytis teaches generating additional labeled data from temporally proximate frames, including “row (a) contains three images extracted at varying distances (4 or 8) from a seed labelled frame” and “d is a constant which builds correspondences from the current frame to the previous frame or to the next frame when set to -1 and 1 respectively” (Budvytis, pages 230 and 232), which are directed to selecting additional samples within a bounded temporal relationship relative to a labeled sample. While Applicant argues that Budvytis operates at a pixel or patch level and does not ensure correspondence to the same vehicle, Zhu teaches that temporally proximate samples correspond to the “same object instance… observed in multiple ‘snap-shots’ in a short time” (Zhu, page 1), which is directed to multiple samples representing the same object over time. The rejection relies on this combined teaching to satisfy the limitation that the additional sensor sample characterizes the same vehicle as the labeled training example.
Further, Silver teaches detecting and classifying a vehicle condition such as an open door (Silver, col. 8, line 65 through col. 9, line 1), and although the claim recites adjusting vehicle movement based on the open door score, the mapping shows that Silver already teaches detecting an open door and assigning weight to that condition in determining a vehicle response, which demonstrates that using such information to influence behavior is conventional and does not reflect a specific improvement to vehicle control technology. Lee teaches generating label data for additional samples using classifier outputs (Lee, page 3), thereby supporting the overall framework for generating additional training examples. Applicant’s argument focuses on Budvytis alone and does not address the combined teachings of the references, which is the proper basis for the rejection. Additionally, the combination is supported by reasoning that using temporally related samples of the same object improves training consistency and classification performance. Accordingly, the applied references, when considered together, teach or suggest the disputed limitation, and the rejection under 35 U.S.C. 103 is maintained.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-5, 7-12, and 14-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1, Step 1: The claim is directed to a method, which falls under the category of a process. The claim satisfies step 1.
Step 2A Prong 1: “to generate an open door score that represents a predicted likelihood that the first vehicle has an open door…processing the input sensor sample using a machine-learning classifier having a plurality of weights and having been trained in a first training process at least on first training data including a plurality of labeled training examples and a plurality of additional training examples that have been generated based on the plurality of labeled training examples” -- The limitation is directed to generating an open door score that represents a predicted likelihood that the first vehicle has an open door, and to processing the input sensor sample using an ML classifier whose weights have been trained on a set of labeled training examples and additional generated training examples. The limitation is directed to the use of mathematical calculations/operations, and thus the limitation is directed to math.

Step 2A Prong 2 and Step 2B: “A computer-implemented method for detecting open vehicle doors, comprising: receiving an input sensor sample that characterizes a first vehicle and is generated from sensor data captured by one or more sensors of a second vehicle;” -- The limitation recites a method to receive input sensor samples that characterize a first vehicle and are generated from gathered sensor data. The limitation is directed to an insignificant, extra-solution activity that cannot be integrated into a practical application (see MPEP 2106.05(g)). Furthermore, under Step 2B, the act of receiving/transmitting data over a network is a well-understood, routine, and conventional activity that cannot provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)).
“wherein each labeled training example comprises (i) a sensor sample for which label data is available and (ii) label data classifying the sensor sample as characterizing a vehicle that has an open door, and wherein each additional training example comprises” -- The limitation recites that each labeled training example comprises a sensor sample for which label data is available, together with label data classifying that sensor sample as characterizing a vehicle that has an open door. The limitation is directed to mere data gathering, and it is an insignificant, extra-solution activity that cannot be integrated into a practical application (see MPEP 2106.05(g)). Furthermore, under Step 2B, gathering data to be manipulated and used within a system is a well-understood, routine, and conventional (WURC) activity that cannot provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)). Thus, claim 1 is not patent eligible. Claims 8 and 15 are analogous to claim 1 (aside from claim type: method vs. system vs. CRM), and thus face the same rejection as set forth above.

Regarding claim 2, Step 1: The claim is directed to a method, which falls under the category of a process. The claim satisfies step 1.

Step 2A Prong 1: “generated using the machine-learning classifier” -- This limitation, similar to claim 1, is directed to math.

Step 2A Prong 2 and Step 2B: “wherein the machine-learning classifier has been further trained in a second training process on second training data including a plurality of further training examples…and in accordance with updated values for the weights of the machine-learning classifier that have been updated in the first training process.” -- The limitation recites that the ML classifier is further trained in a second training process that includes additional training examples, in accordance with updated values for the weights of the ML classifier that have been updated in the first training process.
The limitation amounts to no more than mere limiting to a field of use/environment, and it does not integrate the exception into a practical application, nor does it provide significantly more than the judicial exception (see MPEP 2106.05(h)). Thus, claim 2 is not patent eligible. Claims 9 and 16 are analogous to claim 2 (aside from claim type: method vs. system vs. CRM), and thus face the same rejection as set forth above.

Regarding claim 3, Step 1: The claim is directed to a method, which falls under the category of a process. The claim satisfies step 1. There are no elements to be evaluated under Step 2A Prong 1.

Step 2A Prong 2 and 2B: “The computer-implemented method of claim 2, wherein the second training process comprises using the second training data to further update weights of the machine-learning classifier starting from the updated values.” -- The limitation recites that the second training process comprises using data to update weight values. The limitation is directed to an insignificant, extra-solution activity and does not integrate the exception into a practical application (see MPEP 2106.05(g)). Furthermore, under Step 2B, updating values from gathered data is a well-understood, routine, and conventional activity that cannot provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)). Thus, claim 3 is not patent eligible. Claims 10 and 17 are analogous to claim 3 (aside from claim type: method vs. system vs. CRM), and thus face the same rejection as set forth above.

Regarding claim 4, Step 1: The claim is directed to a method, which falls under the category of a process. The claim satisfies step 1. There are no elements to be evaluated under Step 2A Prong 1.

Step 2A Prong 2 and 2B: “wherein the second training process comprises using the second training data to update the weights for the machine-learning classifier starting from initial values for the weights of the machine-learning classifier.”
-- The limitation recites that the second training process updates weight values for the classifier. The limitation is an insignificant, extra-solution activity that cannot be integrated into a practical application (see MPEP 2106.05(g)). Furthermore, under Step 2B, updating weights in a training process is considered a well-understood, routine, and conventional (WURC) activity, and thus it cannot provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)). Thus, claim 4 is not patent eligible. Claims 11 and 18 are analogous to claim 4 (aside from claim type: method vs. system vs. CRM), and thus face the same rejection as set forth above.

Regarding claim 5, Step 1: The claim is directed to a method, which falls under the category of a process. The claim satisfies step 1.

Step 2A Prong 1: “The computer-implemented method of claim 2, wherein the plurality of further training examples are generated by: processing each of a plurality of candidate sensor samples using the machine-learning classifier and in accordance with the updated values for the weights to generate a respective open door score for each candidate sensor sample;” -- The limitation is directed to generating, for each candidate sensor sample, an open door score using an ML classifier operating in accordance with updated weight values. The limitation is directed to the use of mathematical calculations/operations, and thus the limitation is directed to math.
“and classifying each candidate sensor sample having an open door score that exceeds a threshold score as a sensor sample that characterizes a vehicle with an open door.” -- The limitation is directed to classifying sensor samples whose scores exceed a threshold score as characterizing a vehicle with an open door. The limitation is directed to a process that can be performed in the human mind using evaluation, observation, and judgment, and thus the limitation is directed to a mental process. There are no elements to be evaluated under Step 2A Prong 2 and Step 2B. Thus, claim 5 is not patent eligible. Claims 12 and 19 are analogous to claim 5 (aside from claim type: method vs. system vs. CRM), and thus face the same rejection as set forth above.

Regarding claim 7, Step 1: The claim is directed to a method, which falls under the category of a process. The claim satisfies step 1. There are no elements under Step 2A Prong 1.

Step 2A Prong 2 and Step 2B: “The computer-implemented method of claim 1, wherein the sensor sample of each of the plurality of labeled training examples includes more than a threshold amount of measurements outside of an outline of a body of the vehicle characterized by the sensor sample.” -- The limitation recites that the sensor sample of each labeled training example includes more than a threshold amount of measurements outside an outline of the characterized vehicle. The limitation amounts to no more than mere limiting to a field of use/environment, and thus it cannot be integrated into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(h)). Thus, claim 7 is not patent eligible. Claim 14 is analogous to claim 7 (aside from claim type: method vs. system vs. CRM).
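The claim-5 selection step the examiner characterizes as a mental process can be sketched in a few lines. This is an illustrative stand-in only; the scoring function and names below are hypothetical, not the application's actual classifier:

```python
# Minimal sketch of the claim-5-style step analyzed above: score each
# candidate sample with the current classifier, then keep those whose open
# door score exceeds a threshold as further "open door" training examples.
# score_fn is a hypothetical stand-in for the application's classifier.
def select_further_examples(candidates, score_fn, threshold=0.9):
    """Return (sample, score) pairs classified as characterizing an open door."""
    further = []
    for sample in candidates:
        score = score_fn(sample)  # predicted likelihood of an open door
        if score > threshold:
            further.append((sample, score))
    return further

# Toy usage: dictionaries stand in for sensor samples.
candidates = [{"openness": 0.95}, {"openness": 0.40}, {"openness": 0.91}]
picked = select_further_examples(candidates, lambda s: s["openness"])
```

With the toy data above, only the two samples scoring above 0.9 survive the threshold.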
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5, 7-12, and 14-19 are rejected under 35 U.S.C. 103 as being unpatentable over US10156851B1 by Silver et al. (referred to herein as Silver) in view of NPL reference “Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks” by Lee et al. (referred to herein as Lee), in view of NPL reference “Large scale labelled video data augmentation for semantic segmentation in driving scenarios” by Budvytis et al. (referred to herein as Budvytis), and further in view of NPL reference “Flow-Guided Feature Aggregation for Video Object Detection” by Zhu et al. (referred to herein as Zhu).
Regarding claim 1, Silver teaches:

A computer-implemented method for detecting open vehicle doors, comprising: receiving an input sensor sample that characterizes a first vehicle and is generated from sensor data captured by one or more sensors of a second vehicle; ([Silver, col. 8, line 65 through col. 9, line 1] “as autonomous vehicle 100 approaches vehicle 530, it may detect, using one or more of the sensors or cameras described above, that door 532 of vehicle 530 is currently open”, wherein the examiner interprets “detect, using one or more of the sensors or cameras… that door 532 of vehicle 530 is currently open” to be the same as “receiving an input sensor sample that characterizes a first vehicle and is generated from sensor data captured by one or more sensors of a second vehicle” because they are both directed to a first vehicle being characterized using sensor data captured from another vehicle.)

wherein each labeled training example comprises (i) a sensor sample for which label data is available and (ii) label data classifying the sensor sample as characterizing a vehicle that has an open door; ([Silver, col. 12, lines 1-4] “the detected vehicle having the driver’s side door open may be given three times the weight compared to the displaying of hazard lights by the detected vehicle”, wherein the examiner interprets “the detected vehicle having the driver’s side door open” to be the same as “label data classifying the sensor sample as characterizing a vehicle that has an open door” because they are both directed to identifying and labeling a vehicle condition corresponding to an open door.)

(ii) label data that classifies the particular training vehicle characterized by the additional sensor sample as having an open door; ([Silver, col. 8, line 65 through col. 9, line 1] “as autonomous vehicle 100 approaches vehicle 530, it may detect, using one or more of the sensors or cameras described above, that door 532 of vehicle 530 is currently open” AND [Silver, col. 9, lines 20-22] “the autonomous vehicle may determine whether a detected vehicle has an open trunk, hood, or door”, wherein the examiner interprets “detect… that door 532 of vehicle 530 is currently open” to be the same as “label data that classifies the particular training vehicle characterized by the additional sensor sample as having an open door” because they are both directed to determining, from sensor data, a classified state of a vehicle indicating that the vehicle has an open door. Furthermore, the examiner interprets “determine whether a detected vehicle has an open… door” to be the same as “classifies… as having an open door” because they are both directed to categorizing a detected vehicle into a defined condition class corresponding to an open door based on sensed data.)

causing the second vehicle to adjust a movement of the second vehicle based at least in part on the generated open door score; ([Silver, col. 12, lines 1-7] “the detected vehicle having the driver's side door open may be given three times the weight compared to the displaying of hazard lights by the detected vehicle. The probability of the detected vehicle being in a long-term stationary state may also be adjusted based on whether the autonomous vehicle has previously observed the detected vehicle to have moved.” AND [Silver, col. 8, line 65 through col. 9, line 1] “For example, as autonomous vehicle 100 approaches vehicle 530, it may detect, using one or more of the sensors or cameras described above, that door 532 of vehicle 530 is currently open.”, wherein the examiner interprets “given three times the weight” to be the same as “based at least in part on the generated open door score” because they are both directed to assigning a weighted importance to an open-door condition in determining a vehicle response. Furthermore, the examiner interprets “approaches vehicle 530” in view of detecting the open-door condition to be the same as “causing the second vehicle to adjust a movement of the second vehicle” because they are both directed to an autonomous vehicle modifying its driving behavior in response to detected conditions of another vehicle in its environment.)

Silver does not teach processing the input sensor sample using a machine-learning classifier having a plurality of weights and having been trained in a first training process at least on first training data including a plurality of labeled training examples and a plurality of additional training examples that have been generated based on the plurality of labeled training examples to generate an open door score that represents a predicted likelihood that the first vehicle has an open door.
Lee teaches: processing the input sensor sample using a machine-learning classifier having a plurality of weights; ([Lee, page 1], “the proposed network is trained in a supervised fashion with labeled and unlabeled data simultaneously” and “the weights of all layers are initialized by this layer-wise unsupervised training… in a second phase, fine-tuning, the weights are trained globally with labels using backpropagation algorithm”, wherein the examiner interprets “the weights of all layers… trained… using backpropagation algorithm” to be the same as “a machine-learning classifier having a plurality of weights” because they are both directed to a model that applies learned weight parameters to input data for classification) label data generated by classifying the additional sensor sample as a sensor sample that characterizes a vehicle that has an open door; ([Lee, page 3] “Pseudo-Label are target classes for unlabeled data as if they were true labels. We can just pick up the class that has maximum network output for each unlabeled sample.”, wherein the examiner interprets “Pseudo-Label… pick up the class that has maximum network output” to be the same as “label data generated by classifying the additional sensor sample” because they are both directed to assigning class labels to unlabeled data using classifier outputs). 
Silver and Lee do not explicitly teach: having been trained in a first training process at least on first training data including a plurality of labeled training examples and a plurality of additional training examples that have been generated based on the plurality of labeled training examples; to generate an open door score that represents a predicted likelihood that the first vehicle has an open door; wherein each additional training example comprises (i) an additional sensor sample that has been identified in response to determining….and (b) the additional sensor sample has been captured less than a threshold amount of time before the sensor sample of one of the labeled training examples.

Budvytis teaches: having been trained in a first training process at least on first training data including a plurality of labeled training examples and a plurality of additional training examples that have been generated based on the plurality of labeled training examples; ([Budvytis, Abstract] “To generate additional labelled data, we make use of an occlusion-aware and uncertainty-enabled label propagation algorithm”, wherein the examiner interprets “generate additional labelled data” to be the same as “additional training examples… generated based on the plurality of labeled training examples” because they are both directed to augmenting labeled training data using existing labeled samples.)
to generate an open door score that represents a predicted likelihood that the first vehicle has an open door; ([Budvytis, page 232] “The final per pixel class distributions are obtained by summing over distributions of overlapping pixels as follows [Equation 3]” AND [Budvytis, page 234] “The high (and increasing) average class accuracy on training data for E-Net trained on hand labels indicates overfitting and explains the large class average and IoU score differences on the test data.”, wherein the examiner interprets “class distributions” and “large class average and IoU score” to be the same as “a predicted likelihood” because they are both directed to probabilistic outputs representing the likelihood of a classification.)

wherein each additional training example comprises (i) an additional sensor sample that has been identified in response to determining….and (b) the additional sensor sample has been captured less than a threshold amount of time before the sensor sample of one of the labeled training examples; ([Budvytis, page 230] “row (a) contains three images extracted at varying distances (4 or 8) from a seed labelled frame” and [Budvytis, page 232] “d is a constant which builds correspondences from the current frame to the previous frame or to the next frame when set to -1 and 1 respectively”, wherein the examiner interprets “images extracted at varying distances… from a seed labelled frame” and “correspondences… to the previous frame” to be the same as “captured less than a threshold amount of time before the sensor sample of one of the labeled training examples” because they are both directed to selecting temporally proximate samples relative to a labeled reference sample.)
Zhu teaches: wherein the additional sensor sample characterizes a particular training vehicle that is also characterized by the sensor sample of a particular one of the labeled training examples; ([Zhu, page 1] “the video has rich information about the same object instance, usually observed in multiple ‘snap-shots’ in a short time”, wherein the examiner interprets “same object instance… observed in multiple snapshots in a short time” to be the same as “the additional sensor sample characterizes a particular training vehicle that is also characterized by the sensor sample of a particular one of the labeled training examples” because they are both directed to multiple temporally proximate samples corresponding to the same physical object). Silver, Lee, Budvytis, Zhu, and the instant application are analogous art because they are all directed to detecting object states using sensor data and machine-learning techniques that utilize labeled data, generated label data, and temporally related samples to classify conditions of objects, including vehicles, and to support responsive decision-making in dynamic environments. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of detecting open vehicle doors disclosed by Silver to include the “Pseudo-Label are target classes for unlabeled data as if they were true labels” disclosed by Lee. One would be motivated to do so to efficiently generate label data for additional sensor samples using model predictions, thereby expanding the available training data and improving the classifier’s ability to determine vehicle conditions, including whether a vehicle has an open door, as suggested by Lee ([Lee, page 3], “Pseudo-Label are target classes for unlabeled data as if they were true labels.”). 
It would have been further obvious to a person of ordinary skill in the art before the effective filing date of the invention to further include the “To generate additional labelled data, we make use of an occlusion-aware and uncertainty-enabled label propagation algorithm” disclosed by Budvytis. One would be motivated to do so to effectively generate additional training examples from temporally proximate sensor samples corresponding to observed vehicles, thereby improving the robustness and consistency of classification of vehicle conditions across sequential observations, as suggested by Budvytis ([Budvytis, Abstract], “increase the availability of high-resolution labelled frames by a factor of 20.”).

It would have been further obvious to a person of ordinary skill in the art before the effective filing date of the invention to further include the “the video has rich information about the same object instance, usually observed in multiple ‘snap-shots’ in a short time” disclosed by Zhu. One would be motivated to do so to effectively associate temporally adjacent sensor samples with the same vehicle instance, thereby ensuring that generated label data corresponds to the same particular vehicle and improving consistency of classification across time, as suggested by Zhu ([Zhu, page 1], “improves the video recognition accuracy.”).

Claims 8 and 15 are analogous to claim 1 (aside from claim type: method vs. system vs. CRM), and thus face the same rejection as set forth above.

Regarding claim 2, Silver, Lee, Budvytis, and Zhu teach The computer-implemented method of claim 1 (see rejection of claim 1). Lee further teaches wherein the machine-learning classifier has been further trained in a second training process on second training data including a plurality of further training examples generated using the machine-learning classifier ([Lee, page 1] “Most work in two phases.
In a first phase, unsupervised pre-training, the weights of all layers are initialized by layer-wise unsupervised training. In a second phase, fine-tuning, the weights are trained globally in a supervised fashion.” and [Lee, page 2] “Pseudo-Label are target classes for unlabeled data as if they were true labels. We can just pick up the class that has maximum network output for each unlabeled sample.”, wherein the examiner interprets Lee’s “second phase” of fine-tuning (a subsequent phase after an initial training) to be the same as a “second training process” because they are both directed to further training of the same classifier after a first training process has already updated its weights. The examiner further interprets creating pseudo-labels from the classifier’s own outputs for unlabeled samples to be the same as generating further training examples using the machine-learning classifier because they are both directed to forming additional training targets/examples from the model’s predictions.) and in accordance with updated values for the weights of the machine-learning classifier that have been updated in the first training process. ([Lee, page 1] “In a first phase, unsupervised pre-training, the weights of all layers are initialized by layer-wise unsupervised training.” and [Lee, page 2] “Pseudo-Label that are re-calculated every weights update are used for the same loss function.”, wherein the examiner interprets the first phase updating the model’s weights and the subsequent re-calculation of pseudo-labels after each weights update to be the same as generating the further training examples in accordance with updated values for the weights that were updated in the first training process because they are both directed to using training data derived from the classifier’s predictions after its weights have been updated by the earlier (first) training step.) 
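Lee's pseudo-labeling step quoted above ("pick up the class that has maximum network output for each unlabeled sample") reduces to an argmax over the network's per-class outputs. A minimal sketch; the function name and the array values are hypothetical, not drawn from the reference:

```python
import numpy as np

def pseudo_label(scores: np.ndarray) -> np.ndarray:
    # Pick the class with maximum network output for each unlabeled sample
    # and treat it as if it were a true label.
    return np.argmax(scores, axis=1)

# Hypothetical per-class outputs for three unlabeled sensor samples
# (columns: door-closed, door-open).
scores = np.array([[0.2, 0.8],
                   [0.9, 0.1],
                   [0.4, 0.6]])
labels = pseudo_label(scores)  # -> array([1, 0, 1])
```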
Silver, Lee, Budvytis, Zhu, and the instant application are analogous art because they are all directed to a second training process that further trains a machine-learning classifier. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 1 disclosed by Silver, Lee, Budvytis, and Zhu to include the pseudo-labeling technique disclosed by Lee. One would be motivated to do so to efficiently expand the effective training set and continue fine-tuning based on the model’s current parameters by labeling target classes for unlabeled data, as suggested by Lee ([Lee, page 1] “Pseudo-Label are target classes for unlabeled data as if they were true labels. We can just pick up the class that has maximum network output for each unlabeled sample.”). Claims 9 and 16 are analogous to claim 2 (aside from claim type: method vs. system vs. CRM), and thus will face the same rejection as set forth above. Regarding claim 4, Silver, Lee, Budvytis, and Zhu teach The computer-implemented method of claim 2, (see rejection of claim 2). Lee further teaches wherein the second training process comprises using the second training data to update the weights for the machine-learning classifier starting from initial values for the weights of the machine-learning classifier. ([Lee, page 1] “Most work in two main phases. In a first phase, unsupervised pre-training, the weights of all layers are initialized by layer-wise unsupervised training. 
In a second phase, fine-tuning, the weights are trained globally in a supervised fashion.” and [Lee, page 1] “the proposed network is trained in a supervised fashion with labeled and unlabeled data simultaneously.”, wherein the examiner interprets “a second phase, fine-tuning, the weights are trained globally in a supervised fashion” together with “trained…with labeled and unlabeled data” to be the same as “the second training process comprises using the second training data to update the weights” because they are both directed to a subsequent training phase that uses supervised training data to update model weights; and “in a first phase, unsupervised pre-training, the weights…are initialized by layer-wise unsupervised training” to be the same as “starting from initial values for the weights of the machine-learning classifier” because they are both directed to beginning the later training from already-initialized weight values.) Silver, Lee, Budvytis, Zhu, and the instant application are analogous art because they are all directed to a two-phase training workflow in which a second training process updates network weights from initialized values using supervised training data. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the open-door sensing and decision process disclosed by Silver to include the network training process disclosed by Lee. One would be motivated to do so to efficiently expand the effective training data by training the network on labeled and unlabeled data simultaneously, as suggested by Lee ([Lee, page 1] “the proposed network is trained in a supervised fashion with labeled and unlabeled data simultaneously”). Claims 11 and 18 are analogous to claim 4 (aside from claim type: method vs. system vs. CRM), and thus will face the same rejection as set forth above. Regarding claim 5, Silver, Lee, Budvytis, and Zhu teach The computer-implemented method of claim 2, (see rejection of claim 2). 
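The two-phase workflow the examiner maps onto claim 4 (a first training process updates the weights; a second training process continues from those updated weights, with pseudo-labels recomputed from the current model) can be illustrated with a toy logistic-regression loop. Everything here (data, learning rate, epoch counts) is invented for illustration and is not from the claims or Lee:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, w, lr=0.1, epochs=200):
    # One gradient-descent training phase for logistic regression;
    # returns the updated weight vector.
    for _ in range(epochs):
        w = w - lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

# Toy 1-D data: positive feature value means door-open (class 1).
X_lab = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y_lab = np.array([0.0, 0.0, 1.0, 1.0])
X_unl = np.array([[-1.5], [1.5], [3.0]])

# First training process: labeled data only, from an initial weight value.
w = train(X_lab, y_lab, np.array([0.0]))

# Second training process: starts from the weights updated above;
# pseudo-labels are recomputed from the current model before each pass.
for _ in range(5):
    y_pseudo = (sigmoid(X_unl @ w) > 0.5).astype(float)
    w = train(np.vstack([X_lab, X_unl]),
              np.concatenate([y_lab, y_pseudo]), w)
```

The point of the sketch is the ordering: the second phase begins from weights already updated by the first phase, and the pseudo-labeled examples it consumes are regenerated from the model's current parameters, mirroring Lee's "re-calculated every weights update."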
Lee further teaches wherein the plurality of further training examples are generated by: processing each of a plurality of candidate sensor samples using the machine-learning classifier and in accordance with the updated values for the weights ([Lee, page 1] “For unlabeled data, Pseudo-Labels, just picking up the class which has the maximum network output every weights update, are used as if they were true labels.” and “f_i = h_i^{M+1} are output units used for predicting target class”, wherein the examiner interprets “maximum network output every weights update” and “f_i = h_i^{M+1} are output units used for predicting target class” to be the same as using updated values for the weights to process each candidate sensor sample and produce a class-specific score because they are both directed to applying the current weights of the neural network to compute per-class outputs for each input sample.) Silver further teaches to generate a respective open door score for each candidate sensor sample; ([Silver, page 16, col 9, lines 21-22, page 17, col 12, lines 1-4] “Autonomous vehicle may also detect whether a detected vehicle has an open trunk, hood, or door.” and “the detected vehicle having the driver’s side door open may be given three times the weight compared to the displaying of hazard lights by the detected vehicle”, wherein the examiner interprets “detect whether a detected vehicle has an open…door” and giving the open-door condition three times the weight to be the same as defining the target open-door class and producing a corresponding score because they are both directed to computing a measure of the likelihood that a vehicle’s door is open as the classification target.) Budvytis further teaches classifying each candidate sensor sample having an open door score that exceeds a threshold score as a sensor sample that characterizes a vehicle with an open door. 
([Budvytis, page 232] “First, for each pixel i in frame k, we assign the most likely class label argmax…For pixels where the most likely label has a probability lower than a threshold 1/L + 0.0001 we assign the ‘void’ label”, wherein the examiner interprets assigning a class when the probability exceeds a stated threshold (and assigning “void” when below) to be the same as classifying each candidate sensor sample having a score that exceeds a threshold score as belonging to the target class (open door) because they are both directed to thresholding a classifier’s confidence to decide class membership.) Silver, Lee, Budvytis, Zhu, and the instant application are analogous art because they are all directed to generating further training examples by processing candidate sensor samples with a machine-learning classifier. It would have been further obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 2 disclosed by Silver, Lee, Budvytis, and Zhu to include the target-class prediction method of Lee, the open-vehicle-door detection and scoring disclosed by Silver, and the likelihood-based class-labeling approach disclosed by Budvytis. One would be motivated to do so to effectively apply a decision threshold that accepts high-confidence open-door candidates and rejects low-confidence ones when generating further training examples, as suggested by Budvytis ([Budvytis, page 232] “First, for each pixel i in frame k, we assign the most likely class label argmax…For pixels where the most likely label has a probability lower than a threshold 1/L + 0.0001 we assign the ‘void’ label”). Claims 12 and 19 are analogous to claim 5 (aside from claim type: method vs. system vs. CRM), and thus will face the same rejection as set forth above. Regarding claim 7, Silver, Lee, Budvytis, and Zhu teach The computer-implemented method of claim 1, (see rejection of claim 1). 
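The thresholding step mapped from Budvytis (assign the target class only when the score clears a confidence cutoff, otherwise discard the candidate) can be sketched as below. The function name, sample identifiers, scores, and the 0.9 cutoff are illustrative assumptions, not values from the claims or Budvytis:

```python
def select_open_door_examples(samples, scores, threshold=0.9):
    # Keep only candidates whose open-door score exceeds the cutoff;
    # low-confidence candidates are discarded, analogous to Budvytis
    # assigning the 'void' label below a probability threshold.
    return [s for s, p in zip(samples, scores) if p > threshold]

candidates = ["frame_a", "frame_b", "frame_c"]
open_door_scores = [0.97, 0.55, 0.92]
accepted = select_open_door_examples(candidates, open_door_scores)
# -> ['frame_a', 'frame_c']
```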
Zhu further teaches wherein the additional sensor sample characterizes the same vehicle as the sensor sample of the one of the labeled training examples. ([Zhu, page 1] “Nevertheless, the video has rich information about the same object instance, usually observed in multiple “snap-shots” in a short time.”, wherein the examiner interprets the description that multiple samples across nearby frames refer to the same object instance in a video sequence to be the same as “the additional sensor sample characterizes the same vehicle as the sensor sample of the one of the labeled training examples” because they are both directed to using two temporally close samples that depict the same physical entity rather than different entities.) Claim 14 is analogous to claim 7 (aside from claim type and its dependency on the independent claim), and thus the same rejection applies to both claims. Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVAN KAPOOR whose telephone number is (703)756-1434. The examiner can normally be reached Monday - Friday: 9:00AM - 5:00 PM EST (times may vary). 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DEVAN KAPOOR/Examiner, Art Unit 2126 /DAVID YI/Supervisory Patent Examiner, Art Unit 2126

Prosecution Timeline

Nov 28, 2022: Application Filed
Sep 16, 2025: Non-Final Rejection — §101, §103
Dec 04, 2025: Interview Requested
Dec 10, 2025: Examiner Interview Summary
Dec 10, 2025: Applicant Interview (Telephonic)
Jan 20, 2026: Response Filed
Mar 26, 2026: Final Rejection — §101, §103 (current)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 11%
With Interview: 28% (+16.7%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 9 resolved cases by this examiner. Grant probability derived from career allow rate.
