Prosecution Insights
Last updated: April 19, 2026
Application No. 18/461,577

Method for Determining Training Data for Training a Model, in particular for Solving a Recognition Task

Non-Final OA: §101, §103, §112
Filed: Sep 06, 2023
Examiner: CHEN, KUANG FU
Art Unit: 2143
Tech Center: 2100 — Computer Architecture & Software
Assignee: Robert Bosch GmbH
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 11m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% (above average; 203 granted / 252 resolved; +25.6% vs Tech Center average)
Interview Lift: +67.0% allowance lift in resolved cases with interview
Typical Timeline: 2y 11m average prosecution; 37 applications currently pending
Career History: 289 total applications across all art units

Statute-Specific Performance

§101: 18.4% (-21.6% vs TC avg)
§103: 47.4% (+7.4% vs TC avg)
§102: 11.5% (-28.5% vs TC avg)
§112: 14.0% (-26.0% vs TC avg)
Tech Center averages are estimates; based on career data from 252 resolved cases.

Office Action

Rejections under §101, §103, and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the claims filed 9/6/2023. Claims 1-10 are presented for examination.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 4-8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 4 limitations recite in part selecting a further partial sample to generate another labeled test sample of the at least one labeled training sample (underlining added for emphasis). Claim 8(c) limitations contain the exact same phrasing. The phrase "test sample of the at least one labeled training sample" is internally inconsistent and illogical. Claim 1 clearly establishes a distinction between a "training sample" and a "test sample." A test sample cannot be a subset or derivative "of" a training sample in this context. This is clearly a typographical error: the applicant likely intended to write "another labeled test sample of the at least one labeled test sample" (referencing the test sample introduced in claim 1). This error renders the scope of claims 4 and 8 unclear.
For the purposes of examination, the Examiner will interpret the said limitations of claims 4 and 8 as selecting a further partial sample to generate another labeled test sample of the at least one labeled test sample.

Claim 5(a) limitations recite a similarity of individual samples of a sample (underlining added for emphasis). While "a sample" provides its own antecedent basis grammatically, it is highly ambiguous in the context of the claim tree. Claim 2 introduces "an unlabeled sample," and claim 4 introduces "a first partial sample" and "a further partial sample." It is unclear which specific sample claim 5(a) is referring to. Based on Applicant's specification [0053], it appears the applicant is referring to the "unlabeled sample," but the claim language fails to specify this, rendering the boundary of the limitation vague. Thus, claim 5 is indefinite. For the purpose of examination, the Examiner will interpret the said limitations of claim 5(a) as a similarity of individual samples of an unlabeled sample.

Claim 5(c) limitations recite in part a proportion of conditions in the first and the other partial sample (underlining added for emphasis). The use of the definite article "the" before "other partial sample" lacks antecedent basis and leaves the reader guessing whether it refers to the "further partial sample" or a completely new element, because claim 4 introduces "a first partial sample" and "a further partial sample" but there is no prior introduction of an "other" partial sample in the claim chain. Thus, claim 5 is further indefinite. For the purpose of examination, the Examiner will interpret the said limitations of claim 5(c) as a proportion of conditions in the first and the further partial sample.

Dependent claim 6 does not cure the deficiencies of base claim 5, and thus claim 6 is also rejected under 35 U.S.C. 112(b) for at least being dependent on the rejected base claim 5.
Claim 7 limitations recite in part wherein the generating labels is performed for the first partial sample (underlining added for emphasis). Claim 7 depends on claim 6. Claim 6 recites "generating the labeled training sample" and "generating the other labeled test sample." However, claim 6 does not explicitly recite a standalone step of "generating labels." Therefore, the phrase "the generating labels" in claim 7 lacks direct antecedent basis in the parent claims, rendering the limitation indefinite. For the purpose of examination, the Examiner will interpret the said limitations of claim 7 as wherein generating the labeled training sample and the other labeled test sample is performed for the first partial sample.

Claim 8(d) limitations recite in part generating the other labeled test sample based on the further sample (underlining added for emphasis). Claim 8(c) introduces "a further partial sample." Claim 8(d) subsequently refers to "the further sample" (omitting the word "partial"). Because "the further sample" was not explicitly introduced, it lacks antecedent basis. It must be corrected to match the exact nomenclature introduced in step (c) to maintain clarity. For the purpose of examination, the Examiner will interpret the said limitations of claim 8(d) as generating the other labeled test sample based on the further partial sample.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims will follow the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 ("2019 PEG").
Claim 1

Step 1: This claim recites "An iterative method for"; therefore, it is directed to the statutory category of a process.

Step 2A Prong 1: This claim recites, inter alia: An iterative method for determining training data for a primary model to solve a primary recognition task, the iterative method comprising: a) providing at least one labeled training sample: These limitations recite a mentally performable iterative process of determining, using judgement and with the aid of pen and paper, training data for the intended use of a primary model to solve a primary recognition task, and of using observation to provide at least one labeled training sample. c) providing at least one labeled test sample; d) evaluating a recognition performance of the primary model using the labeled test sample on the primary recognition task; and depending on a result of the evaluating the recognition performance either (i) re-performing parts a), b), c), and d) of the iterative method, or (ii) ending the iterative method: These limitations recite furthering the mentally performable process of using observation to provide at least one labeled test sample, using judgement to evaluate a recognition performance observed from the primary model using the labeled test sample on the primary recognition task, and, depending on a result of the evaluating the recognition performance, using judgement to either (i) re-perform parts a), b), c), and d) of the iterative method, or (ii) end the iterative method. Thus, this claim recites a judicial exception.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The additional elements of this claim are as follows: (b) training the primary model with the at least one labeled training sample: These additional elements are mere instructions to implement a judicial exception, because the additional elements only recite the idea of a solution or outcome but fail to recite details of how the primary model is trained with the at least one labeled training sample (e.g., via supervised/unsupervised/reinforcement training), with no description of the particular mechanism for training to provide meaningful limitations to the claimed invention. See MPEP 2106.05(f). Thus, the way in which the additional elements use or interact with the judicial exception does not integrate the judicial exception into a practical application.

Step 2B: The additional elements from Step 2A Prong 2 include adding the words "apply it" (or an equivalent) to the judicial exception. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 2

Step 1: a process, as in claim 1.

Step 2A Prong 1: This claim recites, inter alia: further comprising: providing an unlabeled sample: This furthers the mentally performable process by using observation to provide an unlabeled sample. Thus, this claim furthers the recited judicial exception.

Step 2A Prong 2 & Step 2B: There are no additional elements recited, so this claim does not provide a practical application and is not considered to be significantly more. As such, this claim is patent ineligible.

Claim 3

Step 1: a process, as in claim 2.

Step 2A Prong 1: This claim recites the same judicial exception as claim 2.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The additional elements of this claim are as follows: further comprising: generating pre-labels for the unlabeled sample using the primary model; and/or generating tags using a secondary model for the unlabeled sample: These additional elements are mere additional instructions to implement the judicial exception, because the additional elements only recite the idea of a solution or outcome but fail to recite details of how pre-labels for the unlabeled sample are generated using the primary model and/or how tags are generated using a secondary model for the unlabeled sample, with no description of the particular mechanism for the primary model and the secondary model to provide meaningful limitations to the claimed invention. See MPEP 2106.05(f). Thus, the way in which the additional elements use or interact with the judicial exception does not integrate the judicial exception into a practical application.

Step 2B: The additional elements from Step 2A Prong 2 include adding the words "apply it" (or an equivalent) to the judicial exception. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 4

Step 1: a process, as in claim 3.

Step 2A Prong 1: This claim recites, inter alia: further comprising: evaluating the pre-labels and/or the tags; and based on the evaluation of the pre-labels and/or the tags, selecting a first partial sample to generate a labeled training sample of the at least one labeled training sample and selecting a further partial sample to generate another labeled test sample of the at least one labeled training sample (interpreted as selecting a further partial sample to generate another labeled test sample of the at least one labeled test sample per the 35 U.S.C.
112(b) rejection set forth above): This furthers the mentally performable process by mentally evaluating the observed pre-labels and/or the tags generated and, based on the mental evaluation of the observed pre-labels and/or tags, using judgement to select a first partial sample to generate a labeled training sample of the at least one labeled training sample with the aid of pen and paper, and using judgement to select a further partial sample to generate another labeled test sample of the at least one labeled test sample with the aid of pen and paper. Thus, this claim furthers the recited judicial exception.

Step 2A Prong 2 & Step 2B: There are no additional elements recited, so this claim does not provide a practical application and is not considered to be significantly more. As such, this claim is patent ineligible.

Claim 5

Step 1: a process, as in claim 4.

Step 2A Prong 1: This claim recites the same judicial exception as claim 4.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of this claim are as follows: wherein: at least one of the following elements is considered when evaluating: a) a similarity of individual samples of a sample (interpreted as a similarity of individual samples of an unlabeled sample per the 35 U.S.C. 112(b) rejection set forth above), b) a relevance of samples for training the primary model, c) a proportion of conditions in the first and the other partial sample (interpreted as a proportion of conditions in the first and the further partial sample per the 35 U.S.C.
112(b) rejection set forth above), d) correlations between metrics, which characterize a recognition accuracy and/or reliability of the primary recognition task, and/or correlations between metrics, which characterize a recognition accuracy and/or reliability of the primary recognition task, and tags, e) continuous and/or modified metrics of the primary recognition task, and f) a recognition performance of certain sensors: These additional elements are recited at a high level of generality and are viewed as nothing more than an attempt to generally link the use of the judicial exception recited in claim 4 to a technological environment or a field of use wherein at least one of the listed elements is considered when evaluating, which does not meaningfully limit the claim. See MPEP 2106.05(h). Thus, the way in which the additional elements use or interact with the judicial exception does not integrate the judicial exception into a practical application.

Step 2B: The additional elements from Step 2A Prong 2 include generally linking the use of the judicial exception to a particular field of use or technological environment. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 6

Step 1: a process, as in claim 4.
Step 2A Prong 1: This claim recites, inter alia: further comprising: evaluating the pre-labels and/or the tags; and based on the evaluation of the pre-labels and/or the tags, selecting a first partial sample to generate a labeled training sample of the at least one labeled training sample and selecting a further partial sample to generate another labeled test sample of the at least one labeled training sample: These limitations recite furthering the mentally performable process by mentally evaluating the observed pre-labels and/or the tags generated and, based on the mental evaluation of the pre-labels and/or the tags, using judgement to select a first partial sample to generate a labeled training sample of the at least one labeled training sample with the aid of pen and paper, and using judgement to select a further partial sample to generate another labeled test sample of the at least one labeled training sample with the aid of pen and paper. Thus, this claim furthers the recited judicial exception.

Step 2A Prong 2 & Step 2B: There are no additional elements recited, so this claim does not provide a practical application and is not considered to be significantly more. As such, the claim is patent ineligible.

Claim 7

Step 1: a process, as in claim 6.

Step 2A Prong 1: This claim recites, inter alia: wherein the generating labels is performed for the first partial sample (interpreted as wherein generating the labeled training sample and the other labeled test sample is performed for the first partial sample per the 35 U.S.C. 112(b) rejection set forth above) and/or the further partial sample based on pre-labels as a function of a confidence of the pre-label: These limitations recite a mathematical relationship of organizing information and manipulating information, e.g.,
generating the labeled training sample and the other labeled test sample being performed for the first partial sample and/or the further partial sample based on pre-labels, through mathematical correlations, e.g., based on pre-labels as a function of a confidence of the pre-label. Thus, this claim furthers the recited judicial exception.

Step 2A Prong 2 & Step 2B: There are no additional elements recited, so this claim does not provide a practical application and is not considered to be significantly more. As such, the claim is patent ineligible.

Claim 8

Step 1: a process, as in claim 1.

Step 2A Prong 1: This claim recites, inter alia: wherein re-performing parts of the iterative method as a function of the result of evaluating the recognition performance comprises: a) providing an unlabeled sample, b) generating pre-labels and/or tags for the unlabeled sample, c) evaluating the pre-labels and/or tags, and based on the evaluating, selecting a first partial sample to generate a labeled training sample of the at least one labeled training sample and selecting a further partial sample to generate another labeled test sample of the at least one labeled training sample, d) generating the labeled training sample based on the first partial sample and generating the other labeled test sample based on the further sample (interpreted as generating the other labeled test sample based on the further partial sample per the 35 U.S.C.
112(b) rejection set forth above): These limitations recite furthering the mentally performable process in claim 1 by further using observation to provide an unlabeled sample, using judgement to generate pre-labels and/or tags for the observed unlabeled sample with the aid of pen and paper, mentally evaluating the pre-labels and/or tags and, based on the evaluating, selecting a first partial sample to generate a labeled training sample of the at least one labeled training sample and selecting a further partial sample to generate another labeled test sample of the at least one labeled training sample, and using judgement to generate the labeled training sample based on the first partial sample and to generate the other labeled test sample based on the further partial sample. Thus, this claim furthers the recited judicial exception.

Step 2A Prong 2 & Step 2B: There are no additional elements recited, so this claim does not provide a practical application and is not considered to be significantly more. As such, the claim is patent ineligible.

Claim 9

Step 1: a process, as in claim 1.

Step 2A Prong 1: This claim recites the same judicial exception as claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The additional elements of this claim are as follows: wherein the evaluation of the recognition performance of the primary model is based on metrics for characterizing reliability and/or accuracy of the primary recognition task of the primary model: These additional elements are recited at a high level of generality and are viewed as nothing more than an attempt to generally link the use of the judicial exception recited in claim 1 to a technological environment or a field of use wherein the evaluation of the recognition performance of the primary model is based on metrics for characterizing reliability and/or accuracy of the primary recognition task of the primary model, with no specifics of how the metrics are arrived at to meaningfully limit the claim. See MPEP 2106.05(h). Thus, the way in which the additional elements use or interact with the judicial exception does not integrate the judicial exception into a practical application.

Step 2B: The additional elements from Step 2A Prong 2 include generally linking the use of the judicial exception to a particular field of use or technological environment. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 10

Step 1: a process, as in claim 1.

Step 2A Prong 1: This claim recites the same judicial exception as claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The additional elements of this claim are as follows: further comprising: using the training data determined according to the iterative method to train the primary model to solve the primary recognition task: These additional elements are mere instructions to implement a judicial exception with the iterative method, including training the primary model with the at least one labeled training sample, because the additional elements only recite the idea of a solution or outcome but again fail to recite details of how the primary model is trained with the at least one labeled training sample (e.g., via supervised/unsupervised/reinforcement training), with no description of the particular mechanism for training to provide meaningful limitations to the claimed invention. See MPEP 2106.05(f). Thus, the way in which the additional elements use or interact with the judicial exception does not integrate the judicial exception into a practical application.

Step 2B: The additional elements from Step 2A Prong 2 include adding the words "apply it" (or an equivalent) to the judicial exception. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-10 are rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al. (hereinafter Zhao), US 2020/0250527 A1, in view of Bates-Haus et al. (hereinafter Bates-Haus), US 2017/0075918 A1.

Regarding independent claim 1, Zhao teaches an iterative method for determining training data for a primary model to solve a primary recognition task, the iterative method comprising: (see Zhao Abstract "systems and methods for active learning on a training dataset that includes both labeled and unlabeled datapoints. In particular, the systems and methods described herein can select (e.g., at each of a number of iterations) a number of the unlabeled datapoints for which labels should be obtained to gain additional labeled datapoints on which to train a machine learned model (e.g., machine learned classifier model).", [0042] "classification problem where a feature vector x … is mapped to a label y... supervised machine learning ("ML") model (e.g., classifier) can be trained...
This process may be continued for a sequence of time steps t = 0, 1, 2, ..., T."; the machine learned classifier model solving a classification problem corresponds to the primary model solving a primary recognition task, and the sequence of time steps corresponds to the iterative method): a) providing at least one labeled training sample (see Zhao [0042] "let Lt = {(xi, yi)} denote the labeled data set at that time"; the labeled data set L0 corresponds to the at least one labeled training sample); b) training the primary model with the at least one labeled training sample (see Zhao [0042] "train a ML model such as a classifier Ct at time t... a new classifier Ct+1 can be trained on Lt+1"; training the classifier Ct on the labeled data set corresponds to training the primary model with the labeled training sample L0); c) providing at least one labeled test sample (see Zhao [0059] "validation set E of size 10^4 was generated as a basis to evaluate the performance of different algorithms", [0065] "evaluate the new model on the validation set"; the validation set E corresponds to the labeled test sample); d) evaluating a recognition performance of the primary model using the labeled test sample on the primary recognition task (see Zhao [0065] "evaluate the new model on the validation set. Let et be the evaluation metric at step t (AUC-PR used in the experiment)"; evaluating the new model of the classifier Ct on the validation set using an evaluation metric corresponds to evaluating the recognition performance of the primary model using the labeled test sample).

Zhao does not expressly teach and depending on a result of the evaluating the recognition performance either (i) re-performing parts a), b), c), and d) of the iterative method, or (ii) ending the iterative method.
However, Bates-Haus teaches and depending on a result of the evaluating the recognition performance either (i) re-performing parts a), b), c), and d) of the iterative method, or (ii) ending the iterative method (see Bates-Haus [0090] "The iterative training continues until we reach one of the following conditions: A. The precision and recall of the current linkage model (based on the data expert's labels) are above the minimum precision and recall that are required by a system operator; or B. The precision and recall of the model did not significantly change in the most recent rounds, which indicates the training process has converged."; continuing or stopping the iterative training based on precision and recall reaching a condition corresponds to depending on a result of the evaluating the recognition performance either re-performing parts of the iterative method or ending the iterative method).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of ending the iterative training process when the model's performance metrics reach a required threshold or converge, as suggested by Bates-Haus, into Zhao's iterative method, with a reasonable expectation of success. This modification would have been motivated by the desire to ensure that the training process is efficient and stops when the quality specifications defined by the system operator are met or when further iterations yield no significant improvement, thereby saving computational resources and labeling costs (see Bates-Haus [0070], [0090]).

Regarding dependent claim 2, Zhao in view of Bates-Haus teaches the iterative method according to claim 1, further comprising: providing an unlabeled sample (see Zhao [0042] "let U = {xi} denote the unlabeled data set at time step t"; providing the unlabeled data set U corresponds to providing an unlabeled sample).
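The two stopping conditions quoted from Bates-Haus [0090] reduce to a simple check over a per-round metric history. A minimal sketch, assuming precision and recall are recorded after each round; the function name and the `window`/`tol` convergence knobs are illustrative choices, not taken from the reference:

```python
def should_stop(precision_history, recall_history,
                min_precision, min_recall,
                window=3, tol=1e-3):
    """Return True once a Bates-Haus-style condition A or B is met."""
    if not precision_history or not recall_history:
        return False
    # Condition A: the latest precision and recall both meet the
    # operator-specified minimums.
    if (precision_history[-1] >= min_precision
            and recall_history[-1] >= min_recall):
        return True
    # Condition B: neither metric moved by more than `tol` over the
    # last `window` rounds, i.e. training has converged.
    if len(precision_history) >= window and len(recall_history) >= window:
        recent_p = precision_history[-window:]
        recent_r = recall_history[-window:]
        if (max(recent_p) - min(recent_p) <= tol
                and max(recent_r) - min(recent_r) <= tol):
            return True
    return False
```

A caller would evaluate this after step d) of each iteration: True maps to ending the iterative method, False to re-performing parts a) through d).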
Regarding dependent claim 3, Zhao in view of Bates-Haus teaches the iterative method according to claim 2, further comprising: generating pre-labels for the unlabeled sample using the primary model; and/or generating tags using a secondary model for the unlabeled sample (see Zhao [0046] "classifier Ct suggests a prediction vector … for each unlabeled point x in Ut, such that … the probability of x being of class k"; the prediction vector generated by the classifier Ct for the unlabeled points corresponds to generating pre-labels for the unlabeled sample using the primary model).

Regarding dependent claim 4, Zhao in view of Bates-Haus teaches the iterative method according to claim 3, further comprising: evaluating the pre-labels and/or the tags (see Zhao [0046] "certainty score can be defined for each point x in Ut", [0047] "The certainty score represents how certain the classifier Ct is about x's label"; computing certainty scores from the prediction vectors corresponds to evaluating the pre-labels); and based on the evaluation of the pre-labels and/or the tags, selecting a first partial sample to generate a labeled training sample of the at least one labeled training sample (see Zhao [0047] "margin sampler selects the points with lowest certainty score", [0042] "select a set Mt consisting of m unlabeled datapoints in Ut and query their labels to get a set of m labeled points... a new classifier Ct+1 can be trained on Lt+1"; selecting points based on the certainty score to query their labels and train the classifier corresponds to selecting a first partial sample to generate a labeled training sample) and selecting a further partial sample to generate another labeled test sample of the at least one labeled training sample (interpreted as selecting a further partial sample to generate another labeled test sample of the at least one labeled test sample per the 35 U.S.C.
112(b) rejection set forth above) (see Bates-Haus [0086] "select a small subset of candidate record pairs using a 'stratified sampling' method", [0089] "send them to the data expert(s) for labeling", [0090] "precision and recall of the current linkage model (based on the data expert's labels)"; selecting a subset of data to be labeled and using it to evaluate precision and recall corresponds to selecting a further partial sample to generate another labeled test sample).

Regarding dependent claim 5, Zhao in view of Bates-Haus teaches the iterative method according to claim 4, wherein: at least one of the following elements is considered when evaluating: a) a similarity of individual samples of a sample (interpreted as a similarity of individual samples of an unlabeled sample per the 35 U.S.C. 112(b) rejection set forth above), b) a relevance of samples for training the primary model, c) a proportion of conditions in the first and the other partial sample (interpreted as a proportion of conditions in the first and the further partial sample per the 35 U.S.C.
112(b) rejection set forth above), d) correlations between metrics, which characterize a recognition accuracy and/or reliability of the primary recognition task, and/or correlations between metrics, which characterize a recognition accuracy and/or reliability of the primary recognition task, and tags, e) continuous and/or modified metrics of the primary recognition task, and f) a recognition performance of certain sensors (see Zhao [0046] "certainty score can be defined for each point", [0047] "margin sampler selects the points with lowest certainty score", [0049] "exploration score st(x) can be assigned to each unlabeled point x in Ut that measures how explored the area around x is", [0052] "The closer an unlabeled point x to a labeled point z, the larger the score"; the certainty score identifying points the classifier is most uncertain about corresponds to a relevance of samples for training the primary model, and measuring how explored an area is based on closeness/distance to labeled points corresponds to considering a similarity of individual samples of an unlabeled sample).

Regarding dependent claim 6, Zhao in view of Bates-Haus teaches the iterative method according to claim 4, further comprising: generating the labeled training sample based on the first partial sample; and generating the other labeled test sample based on the further partial sample (see Zhao [0042] "query their labels to get a set of m labeled points...
a new classifier Ct+1 can be trained on Lt+1", and see Bates-Haus [0089] "send them to the data expert(s) for labeling", [0090] "precision and recall of the current linkage model (based on the data expert's labels)"; querying labels for the selected set to train the classifier corresponds to generating the labeled training sample based on the first partial sample, and sending the selected subset to experts for labeling to evaluate precision and recall corresponds to generating the other labeled test sample based on the further partial sample).

Regarding dependent claim 7, Zhao, in view of Bates-Haus, teaches the iterative method according to Claim 6, wherein the generating labels is performed for the first partial sample and/or the further partial sample based on pre-labels as a function of a confidence of the pre-label (interpreted as wherein generating the labeled training sample and the other labeled test sample is performed for the first partial sample and/or the further partial sample based on pre-labels as a function of a confidence of the pre-label per the 35 U.S.C. 112(b) rejection set forth above) (see Zhao [0046] "certainty score can be defined for each point x in Ut", [0047] "margin sampler selects the points with lowest certainty score"; selecting points for labeling based on the certainty score, which is derived from the prediction probabilities, corresponds to generating the labeled training sample and the other labeled test sample based on pre-labels as a function of a confidence of the pre-label).
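As technical context for the claim 7 mapping, the confidence-gated pre-labeling that the examiner reads onto Zhao's certainty scores can be sketched in a few lines. This is an illustrative sketch only: the function names, the 0.9 threshold, the dict-returning predictor, and the toy data below are assumptions, not drawn from Zhao, Bates-Haus, or the application under examination.

```python
# Illustrative sketch of confidence-gated pre-labeling (assumed names and
# threshold; not code from the cited references or the application).

def split_by_confidence(samples, predict_proba, threshold=0.9):
    """Partition samples into (auto-labeled, needs-review) pools.

    predict_proba(x) is assumed to return a {label: probability} dict.
    """
    auto_labeled, needs_review = [], []
    for x in samples:
        probs = predict_proba(x)
        label = max(probs, key=probs.get)
        if probs[label] >= threshold:
            auto_labeled.append((x, label))  # confident pre-label accepted
        else:
            needs_review.append(x)           # low confidence: route to a human
    return auto_labeled, needs_review

# Toy predictor standing in for a trained classifier
def toy_predict(x):
    return {"pos": x, "neg": 1.0 - x}

auto, review = split_by_confidence([0.95, 0.55, 0.02], toy_predict)
print(auto)    # -> [(0.95, 'pos'), (0.02, 'neg')]
print(review)  # -> [0.55]
```

In this sketch, the high-confidence pool corresponds to pre-labels accepted as a function of their confidence, while the low-confidence pool would be sent out for manual labeling.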
Regarding dependent claim 8, Zhao, in view of Bates-Haus, teaches the iterative method according to Claim 1, wherein re-performing parts of the iterative method as a function of the result of evaluating the recognition performance comprises: a) providing an unlabeled sample, b) generating pre-labels and/or tags for the unlabeled sample, c) evaluating the pre-labels and/or tags, and based on the evaluating, selecting a first partial sample to generate a labeled training sample of the at least one labeled training sample and selecting a further partial sample to generate another labeled test sample of the at least one labeled training sample (interpreted as selecting a further partial sample to generate another labeled test sample of the at least one labeled test sample per the 35 U.S.C. 112(b) rejection set forth above), d) generating the labeled training sample based on the first partial sample and generating the other labeled test sample based on the further sample (interpreted as generating the other labeled test sample based on the further partial sample per the 35 U.S.C. 112(b) rejection set forth above) (see Zhao [0042] "This process may be continued for a sequence of time steps t = 0, 1, 2, ..., T.", [0065]-[0067] describe updating the training data and model, evaluating the new model on the validation set, and updating the sampling strategy; continuing the process for a sequence of time steps and updating the strategy based on the evaluation metric inherently re-performs the steps of providing the unlabeled sample, generating pre-labels, evaluating them, and generating the labeled samples as established in claims 2-6).
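The iterative margin-sampling loop the examiner cites from Zhao [0042] and [0046]-[0047] (score each unlabeled point by how certain the current classifier is, query labels for the least certain points, then retrain) can likewise be sketched. All function names, the list-based pool, and the toy two-class predictor below are illustrative assumptions, not code from Zhao or the application.

```python
# Illustrative sketch of margin-based active learning (assumed names; not
# code from the Zhao reference or the application under examination).

def margin_certainty(probs):
    """Certainty of a prediction: gap between the two highest class probabilities."""
    top2 = sorted(probs, reverse=True)[:2]
    return top2[0] - top2[1]

def select_for_labeling(unlabeled, predict_proba, m):
    """Pick the m points the current classifier is least certain about."""
    scored = [(margin_certainty(predict_proba(x)), x) for x in unlabeled]
    scored.sort(key=lambda pair: pair[0])  # lowest certainty first
    return [x for _, x in scored[:m]]

# Toy two-class predictor standing in for the trained classifier Ct
def toy_predict(x):
    return [x, 1.0 - x]

pool = [0.9, 0.52, 0.48, 0.1]          # points near 0.5 are most ambiguous
picked = select_for_labeling(pool, toy_predict, 2)
print(sorted(picked))  # -> [0.48, 0.52]
```

In Zhao's terms, the selected points would then be labeled and folded into the labeled set Lt+1, on which the next classifier Ct+1 is trained; that retrain-and-repeat step is what the examiner maps to claim 8's re-performing limitation.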
Regarding dependent claim 9, Zhao, in view of Bates-Haus, teaches the iterative method according to Claim 1, wherein the evaluation of the recognition performance of the primary model is based on metrics for characterizing reliability and/or accuracy of the primary recognition task of the primary model (see Zhao [0043] "performance of the ML model can be measured in terms of different metrics such as its accuracy, area under precision-recall curve and recall at a certain precision"; measuring performance in terms of accuracy and recall corresponds to the evaluation of the recognition performance of the primary model being based on metrics for characterizing reliability and/or accuracy of the primary recognition task of the primary model).

Regarding dependent claim 10, Zhao, in view of Bates-Haus, teaches the iterative method according to Claim 1, further comprising: using the training data determined according to the iterative method to train the primary model to solve the primary recognition task (see Zhao [0042] "a new classifier Ct+1 can be trained on Lt+1"; training the new classifier on the updated labeled data set corresponds to using the training data determined according to the iterative method to train the primary model to solve the primary recognition task).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Marcheret, US 2015/0220853 A1 (Aug. 6, 2015) (Abstract: Techniques for evaluation and/or retraining of a classification model built using labeled training data. In some aspects, a classification model having a first set of weights is retrained by using unlabeled input to reweight the labeled training data to have a second set of weights, and by retraining the classification model using the labeled training data weighted according to the second set of weights.
In some aspects, a classification model is evaluated by building a similarity model that represents similarities between unlabeled input and the labeled training data and using the similarity model to evaluate the labeled training data to identify a subset of the plurality of items of labeled training data that is more similar to the unlabeled input than a remainder of the labeled training data).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KUANG FU CHEN, whose telephone number is (571) 272-1393. The examiner can normally be reached M-F 9:00-5:30pm ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Welch, can be reached on (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KC CHEN/
Primary Patent Examiner, Art Unit 2143

Prosecution Timeline

Sep 06, 2023
Application Filed
Mar 21, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579425
PARAMETERIZED ACTIVATION FUNCTIONS TO ADJUST MODEL LINEARITY
2y 5m to grant Granted Mar 17, 2026
Patent 12566994
SYSTEMS AND METHODS TO CONFIGURE DEFAULTS BASED ON A MODEL
2y 5m to grant Granted Mar 03, 2026
Patent 12561593
METHOD FOR DETERMINING PRESENCE OF A SIGNATURE CONSISTENT WITH A PAIR OF MAJORANA ZERO MODES AND A QUANTUM COMPUTER
2y 5m to grant Granted Feb 24, 2026
Patent 12561561
Mapping User Vectors Between Embeddings For A Machine Learning Model for Authorizing Access to Resource
2y 5m to grant Granted Feb 24, 2026
Patent 12561497
AUTOMATED OPERATING MODE DETECTION FOR A MULTI-MODAL SYSTEM WITH MULTIVARIATE TIME-SERIES DATA
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
81%
Grant Probability
99%
With Interview (+67.0%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 252 resolved cases by this examiner. Grant probability derived from career allow rate.
