Prosecution Insights
Last updated: April 19, 2026
Application No. 19/186,392

DATA PROCESSING METHOD, ELECTRONIC DEVICE, STORAGE MEDIUM AND PROGRAM PRODUCT

Non-Final OA (§101, §103)
Filed: Apr 22, 2025
Examiner: RAJAPUTRA, SUMAN
Art Unit: 2163
Tech Center: 2100 — Computer Architecture & Software
Assignee: Lemon Inc.
OA Round: 1 (Non-Final)

Grant Probability: 70% (Favorable)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% (above average), 114 granted / 164 resolved, +14.5% vs TC avg
Interview Lift: strong, +37.6% for resolved cases with an interview vs. without
Typical Timeline: 3y 3m average prosecution; 30 applications currently pending
Career History: 194 total applications across all art units

Statute-Specific Performance

§101: 15.2% (-24.8% vs TC avg)
§103: 55.9% (+15.9% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 7.3% (-32.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 164 resolved cases

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

2. This Office Action is in response to the filing with the Office dated 04/22/2025. Claims 1, 14 and 20 are independent claims. Claims 1-20 are presented for examination.

Priority

3. Applicant's claim of priority to International Patent Application No. PCT/CN2024/089110, filed on April 22, 2024, is acknowledged by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

4. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Determining whether claims are statutory under 35 U.S.C. 101 involves a two-step analysis. Step 1 requires a determination of whether the claims are directed to the statutory categories of invention. Step 2 requires a determination of whether the claims are directed to a judicial exception without significantly more. Step 2 is divided into two prongs, with the first prong having a part 1 and a part 2. See MPEP 2106; see also the 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG).

Pursuant to Step 1, claims 14-19 recite a non-transitory computer readable storage medium, which is directed to the statutory category of a manufacture. Claim 20 recites an electronic device, which is directed to the statutory category of a machine.

Regarding claims 1, 14 and 20: Pursuant to Step 2A, part 1, the claims are analyzed to determine whether they are directed to an abstract idea. Under the 2019 PEG, claims are deemed to be directed to an abstract idea if they fall within one of the enumerated categories of (a) mathematical concepts, (b) certain methods of organizing human activity, and (c) mental processes. Here, claim 1 is directed to an abstract idea categorized under mental processes. Courts consider a concept a mental process if it "can be performed in the human mind, or by a human using a pen and paper." MPEP 2106(a)(2)(III). Courts also consider a mental process as one that can be performed in the human mind while merely using a computer as a tool to perform the concept. MPEP 2106(a)(2)(III)(C)(3).

Claims 1, 14 and 20 recite a mental process because the recited steps of processing a sample for classification into categories and determining, for each category, a probability threshold corresponding to the category are recited at a high level of generality that merely uses a computer as a tool to perform the processes. See MPEP 2106(a)(2)(III). For example, claims 1, 14 and 20 recite the limitations "processing…" and "determining…". These limitations, at the high level of generality as drafted, would encompass a user processing each sample using a machine learning model and determining a probability threshold corresponding to the category; these are essentially steps of generating and manipulating data at a high level of generality that can be performed by a person using a computer as a tool, and that are mentally performable as an evaluation or judgment.
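For orientation, the two steps the examiner paraphrases from independent claims 1, 14 and 20 (scoring each sample per category with a first model, then choosing a per-category probability threshold that maximizes positives subject to a precision condition) can be sketched as follows. This is an illustrative sketch only, assuming a generic classifier with a predict_proba interface and a simple grid search; the helper names are hypothetical, and the confidence-based precision condition at the heart of the claims is reduced here to a plain observed-precision check (a confidence-based variant is sketched after the §103 analysis below).

```python
import numpy as np

def score_samples(model, samples):
    # Step 1: prediction probability of each sample for each category,
    # e.g. the softmax output of a multiclass classifier (interface assumed).
    return np.asarray(model.predict_proba(samples))   # shape: (n_samples, n_categories)

def pick_threshold(probs, labels, category, candidates, min_precision=0.9):
    # Step 2 (simplified): among candidate thresholds whose observed precision
    # for this category meets the target, keep the one that maximizes the
    # number of positives for the category.
    best_t, best_pos = None, -1
    for t in candidates:
        positives = probs[:, category] >= t
        n_pos = int(positives.sum())
        if n_pos == 0:
            continue
        precision = float((labels[positives] == category).mean())
        if precision >= min_precision and n_pos > best_pos:
            best_t, best_pos = t, n_pos
    return best_t
```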
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Pursuant to Step 2A, part 2, the claims are analyzed to determine whether the recited abstract idea is integrated into a practical application. In this case, as explained above, claim 1 merely recites a mental process. While claim 1 recites additional components in the form of processors, a memory, and a device, these components are recited at a high level of generality, and the machine learning model is merely utilized as a tool. They do not add meaningful limits on the recited abstract idea so as to integrate it into a practical application by providing an improvement to the functioning of a computer or to technology, implementing the abstract idea with a particular machine or manufacture that is integral to the claim, effecting a transformation or reduction of a particular article to a different state or thing, or applying the abstract idea in some meaningful way beyond linking its use to computer technology. See 2019 PEG. Neither of these additional limitations recites sufficient detail to adequately convey the asserted improvement of the claimed invention. Accordingly, even in combination, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

The additional elements are not sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the recitation of generic computing components is still mere instructions to apply the exception under MPEP 2106.05(f) and does not provide significantly more. Considering the additional elements in combination and the claim as a whole does not change the analysis and does not amount to significantly more. Thus the claims are abstract.

The remaining dependent claims 2-13 and 15-19, which impose the additional limitations explained above, also fail to claim patent-eligible subject matter because those limitations cannot be considered statutory. With reference to claim 1, these dependent claims have been reviewed under the same analysis as independent claim 1. The dependent claims have been examined individually and in combination with the preceding claims; however, they do not cure the deficiencies of claim 1, and all claims are directed to the same abstract idea.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. Claims 1-10 and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Liao; Chinghsien (US 11805139 B1) in view of BARKAN; Oren (US 20230418909 A1).

Regarding independent claim 1, Liao; Chinghsien (US 11805139 B1) teaches a data processing method comprising: processing each sample in a first sample set by using a first machine learning model to obtain a prediction probability that each sample is classified into each category of one or more categories (Col 2, Lines 1-16, (7): A class with the highest prediction probability value among the classes in a probability vector is selected as the predicted class. When the prediction probability value of the class meets a probability threshold, a confidence score is calculated based on the prediction probability value of the class); and determining, for the each category, a probability threshold corresponding to the category to maximize a number of positives of the category (Col 4, Lines 14-19, (15): In each prediction cycle, the arbitration module 325 is configured to receive a probability vector 324, which has a set of prediction probability values for the plurality of classes that the multiclass classifier 323 has been trained to detect. The arbitration module 325 selects, during the prediction cycle, a predicted class as the class with the highest prediction probability value relative to the other classes. The arbitration module 325 compares the prediction probability value of the predicted class to a probability threshold Y. The probability threshold Y helps minimize false positives).

Liao fails to explicitly teach wherein a confidence corresponding to the category is not lower than a confidence threshold, the probability threshold is for determining a category to which the each sample pertains based on a prediction probability of the each sample, and the confidence corresponding to the category is a confidence at which an actual precision of classification based on the probability threshold meets a precision condition.

BARKAN; Oren (US 20230418909 A1) teaches wherein a confidence corresponding to the category is not lower than a confidence threshold, the probability threshold is for determining a category to which the each sample pertains based on a prediction probability of the each sample, and the confidence corresponding to the category is a confidence at which an actual precision of classification based on the probability threshold meets a precision condition (Paragraph [0046]: Responsive to determining that the probability has a predetermined relationship with confidence level 108, the possible classification threshold value being analyzed is added to a set of candidate classification threshold values 220.
The foregoing process is performed for each possible classification threshold value 212, where a possible classification threshold value is added to the set if the probability that the target evaluation metric modeled (using the possible classification threshold value) meeting or exceeding the target evaluation metric value meets or exceeds the confidence level).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Liao et al by providing wherein a confidence corresponding to the category is not lower than a confidence threshold, the probability threshold is for determining a category to which the each sample pertains based on a prediction probability of the each sample, and the confidence corresponding to the category is a confidence at which an actual precision of classification based on the probability threshold meets a precision condition, as taught by BARKAN et al (Paragraph [0046]). One of ordinary skill in the art would have been motivated to make this modification because, by doing so, the metrics are utilized to better train (or tune hyperparameters of) the model (e.g., to prevent the overfitting of the training data set (i.e., the data set utilized to train machine learning model 102)). It is noted that a validation data set is distinguished from both a training data set (which is used to train (or fit) machine learning model 102) and a test data set (which is used to test the efficacy of machine learning model 102 after training and provide an unbiased evaluation of a final model fit on the training data set), as taught by BARKAN et al (Paragraph [0034]).

Regarding dependent claim 2, Liao et al and BARKAN et al teach the data processing method according to claim 1. BARKAN et al further teaches wherein the confidence corresponding to the category is determined according to a target probability (Paragraphs [0025], [0026]: if machine learning model 102 is configured to recognize animals in images, machine learning model 102 may output the classifications indicating whether a recognized animal is a dog, cat, horse, cow, etc. Each of the classification(s) may be represented as a probability score (e.g., a value between 0.0 and 1.0), where the higher the value, the greater the probability that an image comprises a particular animal), wherein the target probability is: a probability of obtaining a number of positives and an observation precision determined based on the first sample set in response to the probability threshold possessing the actual precision (Paragraph [0027]: The performance of a machine learning model (e.g., machine learning model 102) is typically estimated based on evaluation metrics, such as precision and its complementary metric recall. The precision metric quantifies the number of correct positive predictions (or classifications) made by machine learning model 102 that actually belong to the positive class. The recall metric quantifies the number of positive class predictions made out of all positive examples in the data set. For instance, suppose machine learning model 102 is configured to recognize dogs in an image (i.e., dog is the positive class). Further suppose that the image contains 10 cats and 12 dogs.
If machine learning model 102 detects eight dogs, but only five of the detected eight are actually dogs (i.e., there are five true positives and three false positives), the precision metric of machine learning model 102 is 0.625 (i.e., ⅝), and the recall metric is approximately 0.42 (i.e., 5/12). (Examiner interprets observation precision as the number of total detected images, such as eight dogs with five true positives and three false positives. Examiner interprets actual precision as correct positives.)

Regarding dependent claim 3, Liao et al and BARKAN et al teach the data processing method according to claim 2. BARKAN et al further teaches wherein the confidence corresponding to the category is determined according to a first sum and a second sum, wherein the first sum is a sum of values of the target probability that meet the precision condition, and the second sum is a sum of all the values of the target probability (Paragraph [0027]: The performance of a machine learning model (e.g., machine learning model 102) is typically estimated based on evaluation metrics, such as precision and its complementary metric recall. The precision metric quantifies the number of correct positive predictions (or classifications) made by machine learning model 102 that actually belong to the positive class. The recall metric quantifies the number of positive class predictions made out of all positive examples in the data set. For instance, suppose machine learning model 102 is configured to recognize dogs in an image (i.e., dog is the positive class). Further suppose that the image contains 10 cats and 12 dogs. If machine learning model 102 detects eight dogs, but only five of the detected eight are actually dogs (i.e., there are five true positives and three false positives), the precision metric of machine learning model 102 is 0.625 (i.e., ⅝), and the recall metric is approximately 0.42 (i.e., 5/12). Examiner interprets the first sum, a sum of values of the target probability that meet the precision condition, as the number of correct positive predictions, and interprets the second sum, a sum of all the values of the target probability, as the number of total detected images, such as eight dogs with five true positives and three false positives).

Regarding dependent claim 4, Liao et al and BARKAN et al teach the data processing method according to claim 2. BARKAN et al further teaches wherein a probability density function of the target probability follows a conjugate prior distribution (Paragraph [0058] discloses a Beta distribution; per the specification, Paragraph [0048], the conjugate prior distribution may, for example, use a Beta distribution. Also see [0060]).

Regarding dependent claim 5, Liao et al and BARKAN et al teach the data processing method according to claim 1. BARKAN et al further teaches wherein the determining, for the each category, a probability threshold corresponding to the category to maximize a number of positives of the category, wherein the confidence corresponding to the category is not lower than a confidence threshold, comprises: determining, for the each category, the precision condition according to the confidence corresponding to the category and the confidence threshold (Paragraph [0046]: Responsive to determining that the probability has a predetermined relationship with confidence level 108, the possible classification threshold value being analyzed is added to a set of candidate classification threshold values 220.
The foregoing process is performed for each possible classification threshold value 212, where a possible classification threshold value is added to the set if the probability that the target evaluation metric modeled (using the possible classification threshold value) meeting or exceeding the target evaluation metric value meets or exceeds the confidence level); and determining a probability threshold that allows that an observation precision is greater than the precision threshold (Paragraph [0044] discloses a precision value greater than the threshold) and the number of positives is maximum as a probability threshold corresponding to the category (Paragraph [0046]: Responsive to determining that the probability has a predetermined relationship with confidence level 108, the possible classification threshold value being analyzed is added to a set of candidate classification threshold values 220. The foregoing process is performed for each possible classification threshold value 212, where a possible classification threshold value is added to the set if the probability that the target evaluation metric modeled (using the possible classification threshold value) meeting or exceeding the target evaluation metric value meets or exceeds the confidence level).

Regarding dependent claim 6, Liao et al and BARKAN et al teach the data processing method according to claim 1. BARKAN et al further teaches wherein the precision condition is that the actual precision is greater than a specified value (Paragraph [0044]: in a scenario in which the targeted metric is precision, threshold candidate determiner 208 may determine the probability that modeled precision value 218 meets or exceeds target metric value 106 (i.e., the target value set for the precision)).

Regarding dependent claim 7, Liao et al and BARKAN et al teach the data processing method according to claim 1. BARKAN et al further teaches further comprising: re-determining the probability threshold corresponding to the each category in response to updating the first sample set (Paragraph [0049]: Responsive to receiving the error message, a new target metric value and/or confidence level may be selected, for example, by the user, or automatically, and the analysis described above is performed again with the newly-set target metric value and/or confidence level).

Regarding dependent claim 8, Liao et al and BARKAN et al teach the data processing method according to claim 7. BARKAN et al further teaches further comprising: obtaining, for the each category, a plurality of probability thresholds determined for multiple times and corresponding to the category; determining a plurality of observation precisions determined for the plurality of probability thresholds; and determining parameters of a distribution of a probability density function of the actual precision by using the plurality of observation precisions (Fig. 3, Paragraphs [0054]-[0068] disclose a plurality of threshold values determined multiple times for the corresponding category, determining a number of observations for the probability threshold, and determining a probability density function of the actual precision by using the plurality of observation precisions).

Regarding dependent claim 9, Liao et al and BARKAN et al teach the data processing method according to claim 1.
Liao et al further teaches wherein the machine learning model comprises a Softmax layer, the prediction probability is output by the Softmax layer, and the probability threshold is a Softmax threshold (Col 1, Lines 16-21, (2): In machine learning, multiclass classification pertains to a task of classifying input data to one of three or more classes. Multiclass classification may be performed by multinomial logistic regression (also known as softmax regression), multiclass linear discriminant analysis, using naive Bayes classifiers, using artificial neural networks, etc.).

Regarding dependent claim 10, Liao et al and BARKAN et al teach the data processing method according to claim 1. Liao et al further teaches further comprising: classifying a sample to be processed based on the first machine learning model and the probability threshold corresponding to the each category (Col 3-4, Lines 44-62, (12), (13): A single probability vector 324 has a set of probability prediction values, with each prediction probability value indicating a probability that a data unit 301 is of the corresponding class. More particularly, each probability vector 324 includes a prediction probability value for each class that the multiclass classifier 323 has been trained to detect).

Regarding independent claim 14, Liao; Chinghsien (US 11805139 B1) teaches a non-transitory computer readable storage medium, having a computer program stored thereon that, when executed by a processor (Fig. 3, Col 2, Lines 59-66), implements a data processing method comprising: processing each sample in a first sample set by using a first machine learning model to obtain a prediction probability that each sample is classified into each category of one or more categories (Col 2, Lines 1-16, (7): A class with the highest prediction probability value among the classes in a probability vector is selected as the predicted class. When the prediction probability value of the class meets a probability threshold, a confidence score is calculated based on the prediction probability value of the class); and determining, for the each category, a probability threshold corresponding to the category to maximize a number of positives of the category (Col 4, Lines 14-19, (15): In each prediction cycle, the arbitration module 325 is configured to receive a probability vector 324, which has a set of prediction probability values for the plurality of classes that the multiclass classifier 323 has been trained to detect. The arbitration module 325 selects, during the prediction cycle, a predicted class as the class with the highest prediction probability value relative to the other classes. The arbitration module 325 compares the prediction probability value of the predicted class to a probability threshold Y. The probability threshold Y helps minimize false positives).

Liao fails to explicitly teach wherein a confidence corresponding to the category is not lower than a confidence threshold, the probability threshold is for determining a category to which the each sample pertains based on a prediction probability of the each sample, and the confidence corresponding to the category is a confidence at which an actual precision of classification based on the probability threshold meets a precision condition.
BARKAN; Oren (US 20230418909 A1) teaches wherein a confidence corresponding to the category is not lower than a confidence threshold, the probability threshold is for determining a category to which the each sample pertains based on a prediction probability of the each sample, and the confidence corresponding to the category is a confidence at which an actual precision of classification based on the probability threshold meets a precision condition (Paragraph [0046]: Responsive to determining that the probability has a predetermined relationship with confidence level 108, the possible classification threshold value being analyzed is added to a set of candidate classification threshold values 220. The foregoing process is performed for each possible classification threshold value 212, where a possible classification threshold value is added to the set if the probability that the target evaluation metric modeled (using the possible classification threshold value) meeting or exceeding the target evaluation metric value meets or exceeds the confidence level).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Liao et al by providing wherein a confidence corresponding to the category is not lower than a confidence threshold, the probability threshold is for determining a category to which the each sample pertains based on a prediction probability of the each sample, and the confidence corresponding to the category is a confidence at which an actual precision of classification based on the probability threshold meets a precision condition, as taught by BARKAN et al (Paragraph [0046]). One of ordinary skill in the art would have been motivated to make this modification because, by doing so, the metrics are utilized to better train (or tune hyperparameters of) the model (e.g., to prevent the overfitting of the training data set (i.e., the data set utilized to train machine learning model 102)). It is noted that a validation data set is distinguished from both a training data set (which is used to train (or fit) machine learning model 102) and a test data set (which is used to test the efficacy of machine learning model 102 after training and provide an unbiased evaluation of a final model fit on the training data set), as taught by BARKAN et al (Paragraph [0034]).

Regarding dependent claim 15, Liao et al and BARKAN et al teach the non-transitory computer readable storage medium according to claim 14. BARKAN et al further teaches wherein the confidence corresponding to the category is determined according to a target probability (Paragraphs [0025], [0026]: if machine learning model 102 is configured to recognize animals in images, machine learning model 102 may output the classifications indicating whether a recognized animal is a dog, cat, horse, cow, etc.
Each of the classification(s) may be represented as a probability score (e.g., a value between 0.0 and 1.0), where the higher the value, the greater the probability that an image comprises a particular animal), wherein the target probability is: a probability of obtaining a number of positives and an observation precision determined based on the first sample set in response to the probability threshold possessing the actual precision (Paragraph [0027]: The performance of a machine learning model (e.g., machine learning model 102) is typically estimated based on evaluation metrics, such as precision and its complementary metric recall. The precision metric quantifies the number of correct positive predictions (or classifications) made by machine learning model 102 that actually belong to the positive class. The recall metric quantifies the number of positive class predictions made out of all positive examples in the data set. For instance, suppose machine learning model 102 is configured to recognize dogs in an image (i.e., dog is the positive class). Further suppose that the image contains 10 cats and 12 dogs. If machine learning model 102 detects eight dogs, but only five of the detected eight are actually dogs (i.e., there are five true positives and three false positives), the precision metric of machine learning model 102 is 0.625 (i.e., ⅝), and the recall metric is approximately 0.42 (i.e., 5/12). (Examiner interprets observation precision as the number of total detected images, such as eight dogs with five true positives and three false positives. Examiner interprets actual precision as correct positives.)

Regarding dependent claim 16, Liao et al and BARKAN et al teach the non-transitory computer readable storage medium according to claim 15. BARKAN et al further teaches wherein the confidence corresponding to the category is determined according to a first sum and a second sum, wherein the first sum is a sum of values of the target probability that meet the precision condition, and the second sum is a sum of all the values of the target probability (Paragraph [0027]: The performance of a machine learning model (e.g., machine learning model 102) is typically estimated based on evaluation metrics, such as precision and its complementary metric recall. The precision metric quantifies the number of correct positive predictions (or classifications) made by machine learning model 102 that actually belong to the positive class. The recall metric quantifies the number of positive class predictions made out of all positive examples in the data set. For instance, suppose machine learning model 102 is configured to recognize dogs in an image (i.e., dog is the positive class). Further suppose that the image contains 10 cats and 12 dogs. If machine learning model 102 detects eight dogs, but only five of the detected eight are actually dogs (i.e., there are five true positives and three false positives), the precision metric of machine learning model 102 is 0.625 (i.e., ⅝), and the recall metric is approximately 0.42 (i.e., 5/12). Examiner interprets the first sum, a sum of values of the target probability that meet the precision condition, as the number of correct positive predictions, and interprets the second sum, a sum of all the values of the target probability, as the number of total detected images, such as eight dogs with five true positives and three false positives).
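As a quick arithmetic check of the example the examiner quotes from BARKAN's paragraph [0027] (an image with 10 cats and 12 dogs, eight detections of which five are actually dogs), the quoted precision and recall figures follow directly:

```python
# Counts as quoted from BARKAN [0027] above.
true_positives  = 5    # detections that really are dogs
false_positives = 3    # 8 detections - 5 correct
actual_dogs     = 12   # all positive examples in the image

precision = true_positives / (true_positives + false_positives)   # 5/8  = 0.625
recall    = true_positives / actual_dogs                          # 5/12 ≈ 0.417

print(f"precision = {precision:.3f}, recall = {recall:.3f}")
```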
Regarding dependent claim 17, Liao et al and BARKAN et al teach the non-transitory computer readable storage medium according to claim 15. BARKAN et al further teaches wherein a probability density function of the target probability follows a conjugate prior distribution (Paragraph [0058] discloses a Beta distribution; per the specification, Paragraph [0048], the conjugate prior distribution may, for example, use a Beta distribution. Also see [0060]).

Regarding dependent claim 18, Liao et al and BARKAN et al teach the non-transitory computer readable storage medium according to claim 14. BARKAN et al further teaches wherein the determining, for the each category, a probability threshold corresponding to the category to maximize a number of positives of the category, wherein the confidence corresponding to the category is not lower than a confidence threshold, comprises: determining, for the each category, the precision condition according to the confidence corresponding to the category and the confidence threshold (Paragraph [0046]: Responsive to determining that the probability has a predetermined relationship with confidence level 108, the possible classification threshold value being analyzed is added to a set of candidate classification threshold values 220. The foregoing process is performed for each possible classification threshold value 212, where a possible classification threshold value is added to the set if the probability that the target evaluation metric modeled (using the possible classification threshold value) meeting or exceeding the target evaluation metric value meets or exceeds the confidence level); and determining a probability threshold that allows that an observation precision is greater than the precision threshold (Paragraph [0044] discloses a precision value greater than the threshold) and the number of positives is maximum as a probability threshold corresponding to the category (Paragraph [0046]: Responsive to determining that the probability has a predetermined relationship with confidence level 108, the possible classification threshold value being analyzed is added to a set of candidate classification threshold values 220. The foregoing process is performed for each possible classification threshold value 212, where a possible classification threshold value is added to the set if the probability that the target evaluation metric modeled (using the possible classification threshold value) meeting or exceeding the target evaluation metric value meets or exceeds the confidence level).

Regarding dependent claim 19, Liao et al and BARKAN et al teach the non-transitory computer readable storage medium according to claim 14. BARKAN et al further teaches wherein the precision condition is that the actual precision is greater than a specified value (Paragraph [0044]: in a scenario in which the targeted metric is precision, threshold candidate determiner 208 may determine the probability that modeled precision value 218 meets or exceeds target metric value 106 (i.e., the target value set for the precision)).

Regarding independent claim 20, Liao; Chinghsien (US 11805139 B1) teaches an electronic device, comprising: a memory; and a processor coupled to the memory, the processor configured to, based on instructions stored in the memory, carry out a data processing method (Fig. 3, Col 2, Lines 59-66) comprising: processing each sample in a first sample set by using a first machine learning model to obtain a prediction probability that each sample is classified into each category of one or more categories (Col 2, Lines 1-16, (7): A class with the highest prediction probability value among the classes in a probability vector is selected as the predicted class. When the prediction probability value of the class meets a probability threshold, a confidence score is calculated based on the prediction probability value of the class); and determining, for the each category, a probability threshold corresponding to the category to maximize a number of positives of the category (Col 4, Lines 14-19, (15): In each prediction cycle, the arbitration module 325 is configured to receive a probability vector 324, which has a set of prediction probability values for the plurality of classes that the multiclass classifier 323 has been trained to detect. The arbitration module 325 selects, during the prediction cycle, a predicted class as the class with the highest prediction probability value relative to the other classes. The arbitration module 325 compares the prediction probability value of the predicted class to a probability threshold Y. The probability threshold Y helps minimize false positives).

Liao fails to explicitly teach wherein a confidence corresponding to the category is not lower than a confidence threshold, the probability threshold is for determining a category to which the each sample pertains based on a prediction probability of the each sample, and the confidence corresponding to the category is a confidence that an actual precision of classification based on the probability threshold meets a precision condition.

BARKAN; Oren (US 20230418909 A1) teaches wherein a confidence corresponding to the category is not lower than a confidence threshold, the probability threshold is for determining a category to which the each sample pertains based on a prediction probability of the each sample, and the confidence corresponding to the category is a confidence at which an actual precision of classification based on the probability threshold meets a precision condition (Paragraph [0046]: Responsive to determining that the probability has a predetermined relationship with confidence level 108, the possible classification threshold value being analyzed is added to a set of candidate classification threshold values 220. The foregoing process is performed for each possible classification threshold value 212, where a possible classification threshold value is added to the set if the probability that the target evaluation metric modeled (using the possible classification threshold value) meeting or exceeding the target evaluation metric value meets or exceeds the confidence level).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Liao et al by providing wherein a confidence corresponding to the category is not lower than a confidence threshold, the probability threshold is for determining a category to which the each sample pertains based on a prediction probability of the each sample, and the confidence corresponding to the category is a confidence at which an actual precision of classification based on the probability threshold meets a precision condition, as taught by BARKAN et al (Paragraph [0046]). One of ordinary skill in the art would have been motivated to make this modification because, by doing so, the metrics are utilized to better train (or tune hyperparameters of) the model (e.g., to prevent the overfitting of the training data set (i.e., the data set utilized to train machine learning model 102)). It is noted that a validation data set is distinguished from both a training data set (which is used to train (or fit) machine learning model 102) and a test data set (which is used to test the efficacy of machine learning model 102 after training and provide an unbiased evaluation of a final model fit on the training data set), as taught by BARKAN et al (Paragraph [0034]).

6. Claims 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Liao; Chinghsien (US 11805139 B1) in view of BARKAN; Oren (US 20230418909 A1) and further in view of HE; Huihui (US 20220383190 A1).

Regarding dependent claim 11, Liao et al and BARKAN et al teach the data processing method according to claim 10. Liao et al and BARKAN et al further teach wherein the classifying the sample to be processed based on the first machine learning model and the probability threshold corresponding to the each category (Col 4, Lines 14-19, (15): In each prediction cycle, the arbitration module 325 is configured to receive a probability vector 324, which has a set of prediction probability values for the plurality of classes that the multiclass classifier 323 has been trained to detect. The arbitration module 325 selects, during the prediction cycle, a predicted class as the class with the highest prediction probability value relative to the other classes). Liao et al and BARKAN et al fail to explicitly teach determining a classification result of a sample in a second sample set to be processed based on the first machine learning model and the probability threshold corresponding to the each category, wherein the second sample set is training data of the second machine learning model; and labeling the sample in the second sample set according to the classification result.

HE; Huihui (US 20220383190 A1) teaches determining a classification result of a sample in a second sample set to be processed based on the first machine learning model and the probability threshold corresponding to the each category, wherein the second sample set is training data of the second machine learning model; and labeling the sample in the second sample set according to the classification result (Paragraphs [0040]-[0047] disclose determining a classification result of a second sample to be processed based on the first model and labeling the sample according to the classification result. Also see [0032], [0033]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Liao et al and BARKAN et al by determining a classification result of a sample in a second sample set to be processed based on the first machine learning model and the probability threshold corresponding to the each category, wherein the second sample set is training data of the second machine learning model, and labeling the sample in the second sample set according to the classification result, as taught by HE et al (Paragraphs [0044]-[0047]). One of ordinary skill in the art would have been motivated to make this modification because doing so would improve the model performance, as taught by HE et al (Paragraph [0035]).

Regarding dependent claim 12, Liao et al, BARKAN et al and HE et al teach the data processing method according to claim 10. HE et al further teaches wherein the sample to be processed is an online sample to be audited, and the data processing method further comprises: outputting the online sample online in response to that a labeling result of the online sample indicates that audit is passed (Paragraph [0031]: the classification sub-model may classify the instant message based on the semantics of the instant message, and output a result indicating that the instant message is classified into a class of instant message passing the checking or a class of instant message failing to pass the checking, so as to achieve the instant message checking (Examiner interprets audit as checking)).

Regarding dependent claim 13, Liao et al, BARKAN et al and HE et al teach the data processing method according to claim 12. HE et al further teaches further comprising: updating a sample pool by using the online sample in response to that the labeling result of the online sample indicates that the audit is failed, wherein the sample pool is for training or testing the first machine learning model (Paragraph [0031]: The classification sub-model may classify the instant message based on the semantics of the instant message, and output a result indicating that the instant message is classified into a class of instant message passing the checking or a class of instant message failing to pass the checking, so as to achieve the instant message checking. During the iteration of the language sub-model and the classification sub-model, adjustment of the model may be performed based on a sum of loss functions of the two sub-models. The finally trained model may have an excellent performance in a task of checking an instant message, so that the automatic labeling system 130 may output an instant message text with a class label).

Closest Prior Art

7. The prior art made of record and not relied upon is considered pertinent to the applicant's disclosure. Elisha; Oren (US 20210256420 A1) teaches: Methods, systems and computer program products are described to improve machine learning (ML) model-based classification of data items by identifying and removing inaccurate training data. Inaccurate training samples may be identified, for example, based on excessive variance in vector space between a training sample and a mean of category training samples, and based on a variance between an assigned category and a predicted category for a training sample. Suspect or erroneous samples may be selectively removed based on, for example, vector space variance and/or prediction confidence level.
As a result, ML model accuracy may be improved by training on a more accurate revised training set. ML model accuracy may (e.g., also) be improved, for example, by identifying and removing suspect categories with excessive (e.g., weighted) vector space variance. Suspect categories may be retained or revised. Users may (e.g., also) specify a prediction confidence level and/or coverage (e.g., to control accuracy) (Abstract).

8. Examiner has pointed out particular references contained in the prior art of record in the body of this action for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that the applicant, in preparing the response, consider fully the entire references as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the examiner. It is noted that any citation to specific pages, columns, figures, or lines in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SUMAN RAJAPUTRA, whose telephone number is (571) 272-4669. The examiner can normally be reached between 8:00 AM and 5:00 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tony Mahmoudi, can be reached at (571) 272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/S. R./
Examiner, Art Unit 2163

/ALEX GOFMAN/
Primary Examiner, Art Unit 2163
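The threshold/confidence interplay that the claims and the examiner's citations to BARKAN describe can be made concrete with a small sketch. This is an illustration only: the Beta conjugate prior follows the specification's Paragraph [0048] as quoted in the rejection of claims 4 and 17, but the function names, the uniform Beta(1, 1) prior, and the sample candidate statistics are assumptions, not the applicant's or BARKAN's actual algorithm.

```python
from scipy.stats import beta

def precision_confidence(tp, fp, target_precision, a_prior=1.0, b_prior=1.0):
    # Beta(a, b) conjugate prior on the unknown actual precision; after observing
    # tp true positives and fp false positives at a candidate threshold, the
    # posterior is Beta(a + tp, b + fp).  The "confidence" is the posterior
    # probability that the actual precision meets the precision condition.
    return beta(a_prior + tp, b_prior + fp).sf(target_precision)

def choose_threshold(stats, target_precision, confidence_threshold):
    # stats: list of (threshold, tp, fp) tuples measured on a sample set
    # (hypothetical numbers below).  Keep candidates whose confidence is not
    # lower than the confidence threshold, then pick the candidate that
    # maximizes the number of positives (tp + fp).
    eligible = [(t, tp + fp) for (t, tp, fp) in stats
                if precision_confidence(tp, fp, target_precision) >= confidence_threshold]
    return max(eligible, key=lambda pair: pair[1])[0] if eligible else None

print(choose_threshold([(0.5, 80, 20), (0.7, 60, 5), (0.9, 30, 1)],
                       target_precision=0.85, confidence_threshold=0.95))
```

Under this reading, a candidate threshold is retained only when the posterior probability that the actual precision meets the precision condition is not lower than the confidence threshold, and among retained candidates the one yielding the most positives is selected, which mirrors the claim language the examiner maps onto BARKAN's paragraph [0046].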

Prosecution Timeline

Apr 22, 2025
Application Filed
Mar 07, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12455878
SYSTEM AND METHOD FOR SQL SERVER RESOURCES AND PERMISSIONS ANALYSIS IN IDENTITY MANAGEMENT SYSTEMS
2y 5m to grant; granted Oct 28, 2025
Patent 12436988
KEYPHRASE GENERATION
2y 5m to grant; granted Oct 07, 2025
Patent 12423367
SEARCH ENGINE INTERFACE USING TAG/OPERATOR SEARCH CHIP OBJECTS
2y 5m to grant; granted Sep 23, 2025
Patent 12424304
Systems and Methods for Analyzing Longitudinal Health Information and Generating a Dynamically Structured Electronic File
2y 5m to grant; granted Sep 23, 2025
Patent 12412664
ADDICTION PREDICTOR AND RELAPSE DETECTION SUPPORT TOOL
2y 5m to grant; granted Sep 09, 2025
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview: 99% (+37.6%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 164 resolved cases by this examiner. Grant probability derived from career allow rate.
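The headline projection reduces to simple arithmetic over the examiner record quoted above; a quick check follows (the interview-adjusted 99% figure is reported by the tool rather than re-derived here, and reading "+14.5% vs TC avg" as percentage points is an assumption):

```python
granted, resolved = 114, 164              # career record cited in Examiner Intelligence
allow_rate = granted / resolved           # 0.695..., displayed as ~70% grant probability
implied_tc_avg = allow_rate - 0.145       # assumes the +14.5% delta is in percentage points
print(f"career allow rate: {allow_rate:.1%}; implied TC average: {implied_tc_avg:.1%}")
```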
