Prosecution Insights
Last updated: April 19, 2026
Application No. 18/460,414

AUTOMATIC REMOVAL OF SELECTED TRAINING DATA FROM DATASET

Non-Final OA: §101, §103, §112
Filed: Sep 01, 2023
Examiner: KASSIM, IMAD MUTEE
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: GM Cruise Holdings LLC
OA Round: 1 (Non-Final)

Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 3y 8m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (above average; 116 granted / 160 resolved; +17.5% vs Tech Center average)
Interview Lift: +33.8% (strong; across resolved cases with interview)
Typical Timeline: 3y 8m average prosecution; 23 applications currently pending
Career History: 183 total applications across all art units

Statute-Specific Performance

§101: 22.6% (-17.4% vs TC avg)
§103: 44.2% (+4.2% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 160 resolved cases.

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means” or “step” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) use(s) a generic placeholder coupled with functional language without reciting sufficient structure, material, or acts to entirely perform the recited function. Such claim limitation(s) is/are: “the object detection module configured to…” (claim 1). For an analysis of the structure, material, or acts corresponding to the claimed functions, see the rejection under 35 U.S.C. § 112(b) infra.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant wishes to provide further explanation or dispute the examiner’s interpretation of the corresponding structure, applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.

This application includes one or more claim limitations that do not use the word “means” or “step” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) use(s) a generic placeholder coupled with functional language without reciting sufficient structure, material, or acts to entirely perform the recited function. Such claim limitation(s) is/are: “the object detection module configured to…” (claim 1). However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. Therefore, the claim is indefinite and is rejected under 35 U.S.C.
112(b) or pre-AIA 35 U.S.C. 112, second paragraph. For the purpose of examination, any computer capable of performing the claimed functions reads on the claims.

Applicant may: (a) amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; (b) amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (c) amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either: (a) amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (b) stating on the record what corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claims 2-7 are rejected as being directly or indirectly dependent on rejected claim 1.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims’ subject matter eligibility follows the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50-57 (January 7, 2019) (“2019 PEG”).

With respect to claim 1. Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes—claim 1 recites a system, which is a machine.

Step 2A, prong one: Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Yes—the limitations identified below, under their broadest reasonable interpretation, cover the mental processes grouping of abstract ideas (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion), see MPEP 2106.04(a)(2), subsection III, and the 2019 PEG, but for the recitation of generic computer components: “a training dataset including a plurality of ground truth objects and sensor data; the object detection module configured to: receive a training dataset; detect predicted objects based on the sensor data; a computing system for training the object detection module, configured to: evaluate the predicted objects after a first time period; determine a first loss contribution for each ground truth object of the plurality of ground truth objects over the first time period; determine, for each ground truth object of the plurality of ground truth objects, whether the first loss contribution is one of a plurality of outlier contributions; identify a subset of the plurality of ground truth objects for which the first loss contribution is one of the plurality of outlier contributions; down-weight each ground truth object in the subset; update the training dataset to replace each ground truth object in the subset with a corresponding down-weighted ground truth object to generate an updated training dataset, wherein the updated training dataset is transmitted to the object detection module, and the object detection module is configured to detect updated predicted objects based on the updated training dataset; and evaluate the updated predicted objects after a second time period.” (Mental processes: evaluating and reviewing error metrics, or mathematical operations on losses and thresholds for data evaluation and selection.)

Step 2A, prong two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No—the judicial exception is not integrated into a practical application. The limitations “a training dataset including a plurality of ground truth objects and sensor data; the object detection module configured to: receive a training dataset; detect predicted objects based on the sensor data; a computing system for training the object detection module, configured to” involve the mere gathering of data, which is insignificant extra-solution activity. See MPEP § 2106.05(g). The generic computer components in these steps are recited at a high level of generality (i.e., as a generic computer component performing a generic computer function) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No—there are no additional limitations beyond the mental processes identified above. The limitations treated above are directed to the well-understood, routine, and conventional activity of storing and retrieving information in memory. See MPEP § 2106.05(d)(II); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015). The claim also includes limitations that merely recite the words “apply it” (or an equivalent) with the judicial exception, merely include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). The additional elements amount to insignificant extra-solution activity, similar to the examples of activities that the courts have found to be insignificant extra-solution activity in accordance with MPEP 2106.05(g). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Thus, considering the additional elements individually and in combination, and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
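[Editor's note] Read as an algorithm rather than a mental process, the limitations recited above amount to a concrete training loop. The Python sketch below is an editorial illustration of that reading only; it is not code from the application or the Office action, and the squared-error loss, the z-score outlier test, and the 0.1 down-weight factor are assumed stand-ins for details the claim leaves open.

```python
import numpy as np

def one_epoch(model, dataset):
    """One training pass (the claimed "first time period"). Returns each
    ground truth object's loss contribution. model.detect() and the
    squared-error loss are hypothetical stand-ins."""
    contributions = {}
    for obj_id, ex in dataset.items():
        pred = model.detect(ex["sensor"])               # detect predicted objects
        diff = np.asarray(pred) - np.asarray(ex["gt"])  # prediction vs ground truth
        contributions[obj_id] = ex["weight"] * float((diff ** 2).sum())
    return contributions

def outlier_subset(contributions, z=2.0):
    """Identify "outlier contributions". The claim leaves the outlier test
    open; a z-score cut-off is assumed here."""
    vals = np.array(list(contributions.values()))
    mu, sd = vals.mean(), vals.std() + 1e-12
    return {i for i, c in contributions.items() if (c - mu) / sd > z}

def down_weight(dataset, subset, factor=0.1):
    """Replace each ground truth object in the subset with a down-weighted
    copy, producing the claimed updated training dataset."""
    return {i: {**ex, "weight": ex["weight"] * (factor if i in subset else 1.0)}
            for i, ex in dataset.items()}

# The claimed loop: evaluate after a first period, down-weight the outliers,
# retrain on the updated dataset, evaluate again after a second period.
# first = one_epoch(model, dataset)
# dataset = down_weight(dataset, outlier_subset(first))
# second = one_epoch(model, dataset)
```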
Claim 2. Step 1: A system, as above. Step 2A, Prong 1: The claim recites “wherein the object detection module is further configured to detect a plurality of predicted objects based on the sensor data”: this limitation merely specifies a mental process: observation and evaluation of data. Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 3. Step 1: A system, as above. Step 2A, Prong 1: The claim recites “wherein the computing system is further configured to determine, for each ground truth object, a difference between the respective ground truth object and a corresponding predicted object of the plurality of predicted objects”: this limitation merely specifies a mental process: observation and evaluation in comparing datasets for difference and loss. Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 4. Step 1: A system, as above. Step 2A, Prong 1: The claim recites “wherein the computing system is further configured to determine the first loss contribution for each ground truth object by determining the first loss contribution based on the difference”: this limitation merely specifies a mental process: observation and evaluation in comparing datasets for difference and loss. Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 5. Step 1: A system, as above. Step 2A, Prong 1: The claim recites “wherein the computing system is configured to determine whether the first loss contribution is one of the plurality of outlier contributions by: determining a percent improvement in the first loss contribution over the first time period; and determining that the percent improvement is below a threshold”: this limitation merely specifies a mental process: observation and evaluation of percent improvement and mathematical analysis.
Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 6. Step 1: A system, as above. Step 2A, Prong 1: The claim recites “wherein the computing system is configured to determine whether the first loss contribution is one of the plurality of outlier contributions by: determining an average first loss contribution for the plurality of ground truth objects over the first time period; identifying a threshold loss contribution; and determining, for the subset, that the first loss contribution exceeds the threshold loss contribution”: this limitation merely specifies a mental process: observation and evaluation of percent improvement and mathematical analysis. Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 7. Step 1: A system, as above. Step 2A, Prong 1: The claim recites “wherein the computing system is further configured to down-weight each ground truth object in the subset including removing at least one ground truth object in the subset from the dataset”: this limitation merely specifies a mental process: observation and evaluation via data selection/filtering and mathematical analysis. Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.
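[Editor's note] Claims 5 and 6, treated above, pin down two alternative outlier tests. A minimal sketch of each, again as editorial illustration only; the 10% improvement floor and the 1.5x-average threshold are assumed parameters, since the claims recite only "a threshold."

```python
import numpy as np

def is_outlier_by_improvement(loss_start, loss_end, min_improvement=0.10):
    """Claim 5 reading: percent improvement in an object's loss contribution
    over the first time period, flagged when below a threshold (the 10%
    floor is an assumed value)."""
    improvement = (loss_start - loss_end) / max(loss_start, 1e-12)
    return improvement < min_improvement

def outliers_by_average(contributions, margin=1.5):
    """Claim 6 reading: derive a threshold loss contribution from the average
    first loss contribution (margin x average is an assumed rule) and flag
    every object whose contribution exceeds it."""
    threshold = margin * float(np.mean(list(contributions.values())))
    return {i for i, c in contributions.items() if c > threshold}
```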
Claims 8-14 recite a method performing the functions recited in claims 1-7. Therefore, the rejection of claims 1-7 above applies equally here.

With respect to claim 15. Claim 15 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes—claim 15 recites a method, which is a process.

Step 2A, prong one: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes—the limitations identified below, under their broadest reasonable interpretation, cover the mental processes grouping of abstract ideas (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion), see MPEP 2106.04(a)(2), subsection III, and the 2019 PEG, but for the recitation of generic computer components: “inputting a training dataset to the object detection module, wherein the training dataset includes a plurality of ground truth objects and sensor data; training the object detection module based on the training dataset; determining a first loss contribution for each ground truth object of the plurality of ground truth objects for a first epoch of the dataset; determining a second loss contribution for each ground truth object of the plurality of ground truth objects for a second epoch of the dataset; determining, for each ground truth object of the plurality of ground truth objects, a percentage change from the first loss contribution to the second loss contribution; determining an average percentage change across the plurality of ground truth objects; selecting a threshold percentage change that is less than the average; identifying a subset of the plurality of ground truth objects for which the respective percentage change is less than the threshold; down-weighting each ground truth object in the subset; and updating the training dataset to replace each ground truth object in the subset with a corresponding down-weighted ground truth object to generate an updated training dataset.” (Mental processes: evaluating and reviewing error metrics, or mathematical operations on losses and thresholds for data evaluation and selection.) This falls within the mental process grouping of abstract ideas that can be performed in the human mind, or by a human with pencil and paper. Thus, claim 15 recites an abstract idea.

Step 2A, prong two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No—the judicial exception is not integrated into a practical application. “Inputting a training dataset to the object detection module, wherein the training dataset includes a plurality of ground truth objects and sensor data” involves the mere gathering of data, which is insignificant extra-solution activity. See MPEP § 2106.05(g). “Training the object detection module based on the training dataset” merely recites the words “apply it” (or an equivalent) with the judicial exception, merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). The generic computer components in these steps are recited at a high level of generality (i.e., as a generic computer component performing a generic computer function) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No—there are no additional limitations beyond the mental processes identified above. The limitations treated above are directed to the well-understood, routine, and conventional activity of storing and retrieving information in memory. See MPEP § 2106.05(d)(II); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015). The claim also includes limitations that merely recite the words “apply it” (or an equivalent) with the judicial exception, merely include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). The additional elements amount to insignificant extra-solution activity, similar to the examples of activities that the courts have found to be insignificant extra-solution activity in accordance with MPEP 2106.05(g). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Thus, considering the additional elements individually and in combination, and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
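[Editor's note] Claim 15 is the most algorithmically explicit of the claims: it fixes the outlier test as an epoch-over-epoch percentage change compared against a threshold selected below the dataset-wide average. A sketch of that selection step as recited, with the sign convention (change measured as improvement) and the threshold rule as assumptions:

```python
import numpy as np

def select_subset(first_loss, second_loss, threshold_frac=0.5):
    """Claim 15 as recited: per-object percentage change between two epochs,
    an average across objects, a threshold chosen below that average (here
    threshold_frac x average, assuming a positive average improvement), and
    the subset of objects whose change falls below the threshold."""
    pct_change = {i: (first_loss[i] - second_loss[i]) / max(first_loss[i], 1e-12)
                  for i in first_loss}                  # improvement convention
    avg = float(np.mean(list(pct_change.values())))
    threshold = threshold_frac * avg                    # "less than the average"
    return {i for i, p in pct_change.items() if p < threshold}

# Objects in the returned subset would then be down-weighted and swapped back
# into the training dataset, as in the system claims.
```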
Claim 16. Step 1: A method, as above. Step 2A, Prong 1: The claim recites “detecting, by the object detection module, a plurality of predicted objects based on the sensor data”: this limitation merely specifies a mental process: observation and evaluation of data. Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 17. Step 1: A method, as above. Step 2A, Prong 1: The claim recites “determining, for each ground truth object, a difference between the respective ground truth object and a corresponding predicted object of the plurality of predicted objects”: this limitation merely specifies a mental process: observation and evaluation of percent improvement and mathematical analysis. Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 18. Step 1: A method, as above. Step 2A, Prong 1: The claim recites “wherein determining the first loss contribution for each ground truth object includes determining the first loss contribution based on the difference”: this limitation merely specifies a mental process: observation and evaluation of percent improvement and mathematical analysis. Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 19. Step 1: A method, as above. Step 2A, Prong 1: The claim recites “wherein inputting the training dataset including the plurality of ground truth objects includes, for each ground truth object, providing an object identification and a scene identification pair”: this limitation merely specifies a mental process: observation and evaluation of datasets. Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 20. Step 1: A method, as above.
Step 2A, Prong 1: The claim recites “wherein down-weighting each ground truth object in the subset includes removing a first ground truth object in the subset from the dataset”: this limitation merely specifies a mental process: observation and evaluation via filtering and mathematical analysis. Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-14 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (“Large Loss Matters in Weakly Supervised Multi-Label Classification”, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 14156-14165) in view of Shrivastava et al. (“Training Region-based Object Detectors with Online Hard Example Mining”, 2016, Facebook AI Research, Carnegie Mellon University).

Regarding claim 1. Kim teaches a system for training an object detection module, comprising: a training dataset including a plurality of ground truth objects and sensor data (see abstract, e.g., the Pascal VOC 2012, MS COCO, NUSWIDE, CUB, and Open Images V3 datasets; also see page 14158, section 3.1: “Let us define an input x ∈ X and a target y ∈ Y where X and Y compose a dataset D. In a weakly supervised multi-label learning for image classification task, X is an image set and Y = {0,1,u}^K where u is an annotation of ‘unknown’, i.e. unobserved label, and K is the number of categories.”); the object detection module configured to: receive a training dataset (see abstract and page 14158, section 3.1, as quoted above); detect predicted objects based on the sensor data (see page 14158, section 3.1: “The naive way of training the model f with the dataset D′ = (X, Y^AN) is to minimize the loss function L, where f(·) ∈ [0,1]^K and BCELoss(·,·) is the binary cross entropy loss between the function output and the target. We call this naive method as Naive AN.”);
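[Editor's note] The “Naive AN” baseline quoted above treats every unobserved label as negative and trains with per-label binary cross entropy. A minimal sketch of that loss, assuming unobserved labels are encoded as None; the claim mapping resumes below with Kim's loss tracking built on this baseline.

```python
import numpy as np

def naive_an_loss(probs, targets):
    """"Naive AN" per the quoted setup: assume every unobserved label (u,
    encoded here as None) is negative, then average per-label binary cross
    entropy. probs is the model output f(.) in [0,1]^K."""
    total = 0.0
    for p, y in zip(probs, targets):
        y_an = 0.0 if y is None else float(y)   # assumed-negative target
        p = min(max(p, 1e-7), 1.0 - 1e-7)       # clamp for numerical stability
        total += -(y_an * np.log(p) + (1.0 - y_an) * np.log(1.0 - p))
    return total / len(probs)
```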
a computing system for training the object detection module, configured to: evaluate the predicted objects after a first time period (see page 14158, section 3.2, last paragraph: “For every label, we track the loss value on each training epoch. Then we count the number of labels having the largest loss in the first epoch”; i.e., the first evaluation is done after the first epoch, which is the first time period); determine a first loss contribution for each ground truth object of the plurality of ground truth objects over the first time period (see page 14158, section 3.2, last paragraph, as quoted above; also see page 14159, section 3.3); determine, for each ground truth object of the plurality of ground truth objects, whether the first loss contribution is one of a plurality of outlier contributions (see page 14159, section 3.3: false negative labels, i.e., noisy labels, are those with large loss); identify a subset of the plurality of ground truth objects for which the first loss contribution is one of the plurality of outlier contributions (see page 14159, section 3.3, large loss rejection); down-weight each ground truth object in the subset (see page 14159: “The term λ_i is defined as a function, λ_i = λ(f(x)_i, y^AN_i), where arguments are also omitted for convenience. λ_i is the weighted value for how much the loss l_i should be considered in the loss function L in Equation 3. Intuitively, λ_i should be small when i ∈ S_u and the loss l_i has a high value in the middle of the training, that is, to ignore that loss since it is likely to be the loss from a false negative sample. We set λ_i = 1 when i ∈ S_p ∪ S_n since the label y^AN from these indices is a clean label.”); update the training dataset to replace each ground truth object in the subset with a corresponding down-weighted ground truth object to generate an updated training dataset, wherein the updated training dataset is transmitted to the object detection module, and the object detection module is configured to detect updated predicted objects based on the updated training dataset (see page 14158, section 3.2: “For a true negative label, the corresponding loss value keeps decreasing as the number of iterations increases (blue line). Meanwhile, the loss of a false negative label slightly increases in the initial learning phase, and then reaches the highest in the middle phase followed by decreasing to reach near 0 at the end (red line)… For every label, we track the loss value on each training epoch. Then we count the number of labels having the largest loss in the first epoch.”; also see page 14159, section 3.3, the paragraph on large loss rejection: the set of large loss samples is recomputed at every epoch); and evaluate the updated predicted objects after a second time period (see page 14158, section 3.2, as quoted above; also see page 14159, section 3.3: the set of large loss samples is recomputed at every epoch).
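[Editor's note] The Kim mechanism mapped to the down-weighting limitations is the paper's per-label weight λ_i, which zeroes out the largest losses among unobserved labels, with no rejection at the first epoch and the rejected set recomputed every epoch. A sketch of that large-loss rejection (LL-R) step as characterized in the quoted passages; the linear rejection-rate schedule delta_rel is an assumed hyperparameter:

```python
import numpy as np

def llr_weights(losses, unobserved, epoch, delta_rel=0.2):
    """Large-loss rejection (LL-R) as characterized above: observed labels
    keep weight 1; the largest R(t) fraction of losses among unobserved
    labels get weight 0. No rejection at epoch 1; the rejected set is
    recomputed each epoch. The linear schedule is an assumption."""
    weights = np.ones_like(losses, dtype=float)
    if epoch <= 1:
        return weights                            # learn clean patterns first
    rate = min(delta_rel * (epoch - 1), 1.0)      # R(t) grows with the epoch
    k = int(rate * int(unobserved.sum()))
    if k > 0:
        cutoff = np.partition(losses[unobserved], -k)[-k]   # k-th largest loss
        weights[unobserved & (losses >= cutoff)] = 0.0
    return weights

# Weighted objective in the spirit of the quoted Equation 3:
# L = np.mean(llr_weights(losses, unobserved, epoch) * losses)
```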
Kim does not specifically teach updating the training dataset to replace each ground truth object in the subset with a corresponding down-weighted ground truth object.

Shrivastava teaches updating the training dataset to replace each ground truth object in the subset with a corresponding down-weighted ground truth object (see page 4, section 4.1: “Our main observation is that these alternating steps can be combined with how FRCN is trained using online SGD. The key is that although each SGD iteration samples only a small number of images, each image contains thousands of example RoIs from which we can select the hard examples rather than a heuristically sampled subset. This strategy fits the alternation template to SGD by ‘freezing’ the model for only one mini-batch. Thus the model is updated exactly as frequently as with the baseline SGD approach and therefore learning is not delayed… Given a list of RoIs and their losses, NMS works by iteratively selecting the RoI with the highest loss, and then removing all lower loss RoIs that have high overlap with the selected region. We use a relaxed IoU threshold of 0.7 to suppress only highly overlapping RoIs.”).

Both Kim and Shrivastava pertain to the problem of object detection and are thus analogous art. It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to combine Kim and Shrivastava to teach the above limitations. The motivation for doing so would be: “Our motivation is the same as it has always been: detection datasets contain an overwhelming number of easy examples and a small number of hard examples. Automatic selection of these hard examples can make training more effective and efficient. OHEM is a simple and intuitive algorithm that eliminates several heuristics and hyperparameters in common use. But more importantly, it yields consistent and significant boosts in detection performance on benchmarks like PASCAL VOC 2007 and 2012. Its effectiveness increases as datasets become larger and more difficult, as demonstrated by the results on the MS COCO dataset.” (see Shrivastava, abstract).
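[Editor's note] The Shrivastava mechanism relied on for the combination is online hard example mining (OHEM): rank RoIs by loss, suppress near-duplicates with loss-ordered NMS at an IoU threshold of 0.7 (per the quoted passage), and keep only the hardest survivors for backpropagation. A minimal sketch, assuming axis-aligned [x1, y1, x2, y2] boxes and the batch size as a free parameter:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-12)

def select_hard_examples(boxes, losses, batch_size=128, iou_thresh=0.7):
    """OHEM selection per the quoted passage: take RoIs from highest loss to
    lowest, dropping lower-loss RoIs that overlap an already-kept RoI above
    the IoU threshold, until batch_size hard examples remain."""
    keep = []
    for i in np.argsort(losses)[::-1]:            # highest loss first
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
        if len(keep) == batch_size:
            break
    return keep                                    # indices to backpropagate
```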
Regarding claim 2. Kim and Shrivastava teach the system of claim 1. Kim further teaches wherein the object detection module is further configured to detect a plurality of predicted objects based on the sensor data (see page 14158: “Let us define an input x ∈ X and a target y ∈ Y where X and Y compose a dataset D. In a weakly supervised multi-label learning for image classification task, X is an image set and Y = {0,1,u}^K where u is an annotation of ‘unknown’, i.e. unobserved label, and K is the number of categories.”).

Regarding claim 3. Kim and Shrivastava teach the system of claim 2. Kim further teaches wherein the computing system is further configured to determine, for each ground truth object, a difference between the respective ground truth object and a corresponding predicted object of the plurality of predicted objects (see page 14158, section 3.2, last paragraph: “For every label, we track the loss value on each training epoch. Then we count the number of labels having the largest loss in the first epoch”; i.e., the first evaluation is done after the first epoch, which is the first time period; also see page 14159, section 3.3, Equation (4) or (5)).

Regarding claim 4. Kim and Shrivastava teach the system of claim 3. Kim further teaches wherein the computing system is further configured to determine the first loss contribution for each ground truth object by determining the first loss contribution based on the difference (see page 14158, section 3.2, last paragraph, and page 14159, section 3.3, Equation (4) or (5), as cited for claim 3).

Regarding claim 5. Kim and Shrivastava teach the system of claim 1. Kim further teaches wherein the computing system is configured to determine whether the first loss contribution is one of the plurality of outlier contributions by: determining a percent improvement in the first loss contribution over the first time period; and determining that the percent improvement is below a threshold (see page 14160: “Absolute variant. Instead of gradually increasing the rejection/correction rate, we borrow the idea of using the absolute value of loss as a rejection threshold [17] and apply it in WSML. In the rejection and temporary correction schemes, we define the function λ_i the same as Equation 4 except for R(t), where it is defined as R(t) = R_0 − t·Δ_abs. R_0 and Δ_abs are hyperparameters, where R_0 is an initial threshold and Δ_abs determines the speed of decrease of the threshold. We report the experimental results of these variant methods in the Appendix.”).

Regarding claim 6. Kim and Shrivastava teach the system of claim 1. Shrivastava further teaches wherein the computing system is configured to determine whether the first loss contribution is one of the plurality of outlier contributions by: determining an average first loss contribution for the plurality of ground truth objects over the first time period; identifying a threshold loss contribution; and determining, for the subset, that the first loss contribution exceeds the threshold loss contribution (see page 4, section 4.1, as quoted for claim 1: hard examples are selected by loss, and loss-ordered NMS with a relaxed IoU threshold of 0.7 suppresses only highly overlapping RoIs). The motivation utilized in the combination for claim 1, supra, applies equally to claim 6.

Regarding claim 7. Kim and Shrivastava teach the system of claim 1. Kim further teaches wherein the computing system is further configured to down-weight each ground truth object in the subset, including removing at least one ground truth object in the subset from the dataset (see page 14159, section 3.3: “Defining λ_i as Equation 4 makes rejecting large loss samples in the loss function L.
We do not reject any loss values at the first epoch, t = 1, since the model learns clean patterns in the initial phase. In practice, we use a mini-batch in each iteration instead of the full batch D′ for composing the loss set. We call this method LL-R.”). Shrivastava also teaches removing lower-loss examples (see page 4, section 4.1, as quoted for claim 1: loss-ordered NMS iteratively selects the RoI with the highest loss and removes all lower-loss RoIs that have high overlap with the selected region). The motivation utilized in the combination for claim 1, supra, applies equally to claim 7.

Claims 8-14 recite a method performing the functions recited in claims 1-7. Therefore, the rejection of claims 1-7 above applies equally here.

Allowable Subject Matter

Claims 15-20 would be allowable if rewritten or amended to overcome the rejection under 35 U.S.C. 101 (abstract idea) set forth in this Office action. In addition, the examiner notes that the claims should also be amended to overcome the other claim rejections indicated in this Office action, and that the claim amendments must not raise new issues that would require an updated rejection of the claims.

Related prior art: Ravi et al. (US 20200125956 A1) teaches a first loss term that describes a trainer prediction error exhibited between an output of the pre-trained machine-learned model and a ground truth; a second loss term that describes a student simulation error exhibited between the output of the pre-trained machine-learned model and an output of the compact model; and a third loss term that describes a student prediction error exhibited between the output of the compact model and the ground truth. Rahman et al. (US 20230385643 A1) teaches that the first filtered error content (Error.sub.1f) can be processed as input to a first loss function (L.sub.1) to compute a first loss value (L1) based on the first filtered error content; at 1212, the second filtered error content (Error.sub.2f) can be processed using the first loss function (L.sub.1) or a second loss function (L.sub.2) to compute a second loss value (L2) based on the second filtered error content.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IMAD M KASSIM, whose telephone number is (571) 272-2958. The examiner can normally be reached 10:30 AM-5:30 PM, M-F (E.S.T.). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael J.
Huntley, can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IMAD KASSIM/
Primary Examiner, Art Unit 2129

Prosecution Timeline

Sep 01, 2023: Application Filed
Mar 19, 2026: Non-Final Rejection, §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596923: MACHINE LEARNING OF KEYWORDS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12572843: AGENT SYSTEM FOR CONTENT RECOMMENDATIONS (granted Mar 10, 2026; 2y 5m to grant)
Patent 12572854: ROOT CAUSE DISCOVERY ENGINE (granted Mar 10, 2026; 2y 5m to grant)
Patent 12566980: SYSTEM AND METHOD HAVING THE ARTIFICIAL INTELLIGENCE (AI) ALGORITHM OF K-NEAREST NEIGHBORS (K-NN) (granted Mar 03, 2026; 2y 5m to grant)
Patent 12566861: IDENTIFYING AND CORRECTING VULNERABILITIES IN MACHINE LEARNING MODELS (granted Mar 03, 2026; 2y 5m to grant)

Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
Grant Probability with Interview: 99% (+33.8% lift)
Median Time to Grant: 3y 8m
PTA Risk: Low

Based on 160 resolved cases by this examiner. Grant probability derived from career allow rate.
