DETAILED ACTION
The present application, filed on 7/12/2023, is being examined under the first inventor to file provisions of the AIA.
The following is a non-final First Office Action on the Merits. Claims 1-20 are pending and have been considered below.
Priority
The application claims priority to Republic of Korea Application No. 10-2022-0145743, filed 11/04/2022. The priority claim is not acknowledged because a certified English translation of the priority document has not been provided.
Information Disclosure Statement (IDS)
The information disclosure statements (IDS) submitted on 7/12/2023, 8/3/2023, and 5/15/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the IDSs have been considered by the Examiner.
Claim Rejections - 35 USC § 101
35 USC 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 USC 101 because the claimed invention is not directed to patent eligible subject matter. The claimed matter is directed to a judicial exception, i.e., an abstract idea, that is not integrated into a practical application and does not amount to significantly more.
Per Step 1 of the multi-step eligibility analysis, claims 1-10 are directed to a computer-implemented method, claims 11-13 are directed to a computer-implemented method, claim 14 is directed to computer-executable instructions stored on a non-transitory storage medium, and claims 15-20 are directed to a system.
Thus, on its face, each independent claim and the associated dependent claims are directed to a statutory category of invention.
[INDEPENDENT CLAIMS]
Per Step 2A.1. Independent claim 1 (which is representative of independent claim 14) is rejected under 35 USC 101 because the independent claim is directed to an abstract idea, a judicial exception, without reciting additional elements that integrate the judicial exception into a practical application.
The limitations of independent claim 1 (which is representative of independent claim 14) recite an abstract idea, shown in bold below:
[A] A processor-implemented method
[B] training a first model to predict confidences of labels for data samples in a training dataset, including using a corrected data sample obtained by correcting an incorrect label based on a corresponding confidence detected by the first model and an estimated corrected label generated by a second model;
[C] training the second model to estimate correct labels for the data samples, including
[D] estimating a correct other label corresponding to another incorrect label detected based on a corresponding confidence generated by the first model with respect to the other incorrect label; and
[E] automatically correcting the other incorrect label with an estimated correct other label.
Independent claim 1 (which is representative of independent claim 14) recites: training a first and a second model to estimate labels ([B], [C]); and estimating an alternative correct label and correcting the incorrect label ([D], [E]), which, based on the claim language and in view of the application disclosure, represents a process aimed at: utilizing a deep learning model to automate labeling.
This is a combination that, under its broadest reasonable interpretation, covers performance of limitations expressing mathematical concepts, such as mathematical relationships and mathematical calculations. These fall under the "Mathematical Concepts" grouping of abstract ideas, i.e., mathematical relationships, mathematical formulas or equations, or mathematical calculations (see MPEP 2106.04(a)(2)(I)).
Accordingly, it is concluded that independent claim 1 (which is representative of independent claim 14) recites an abstract idea that corresponds to a judicial exception.
[INDEPENDENT CLAIMS – Additional Elements]
Per Step 2A.2. The identified abstract idea is not integrated into a practical application because the additional elements in the independent claims only amount to instructions to apply the judicial exception to a computer, or are a general link to a technological environment (see MPEP 2106.05(f); MPEP 2106.05(h)).
For example, the added elements “processor,” “computer-readable storage medium” recite computing elements at a high level of generality, generally linking the use of a judicial exception to a particular technological environment (see MPEP 2106.05(h)), or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).
These additional elements of the independent claims merely facilitate carrying out the identified abstract idea (utilizing a deep learning model to automate labeling) and do not serve to integrate the identified abstract idea into a practical application.
Therefore, the additional claim elements of independent claim 1 (which is representative of independent claim 14) do not integrate the identified abstract idea into a practical application, and the claims are directed to the recited judicial exception.
Per Step 2B. Independent claim 1 (which is representative of independent claim 14) does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when the independent claim is reevaluated as a whole, as an ordered combination under the considerations of Step 2B, the outcome is the same as under Step 2A.2.
Overall, it is concluded that independent claims 1 and 14 are deemed ineligible.
Per Step 2A.1. Independent claim 11 is rejected under 35 USC 101 because the independent claim is directed to an abstract idea, a judicial exception, without reciting additional elements that integrate the judicial exception into a practical application.
The limitations of the independent claim 11 recite an abstract idea, shown in bold below:
[A] An automatic labeling method
[B] detecting whether a label for a data sample is an incorrect label by applying the data sample to a first model,
wherein the first model comprises a first neural network that is trained to
[C] detect the incorrect label comprised in the data sample based on confidence of the label.
Independent claim 11 recites: detecting an incorrect label ([B], [C]), which, based on the claim language and in view of the application disclosure, represents a process aimed at: utilizing a deep learning model to automate labeling.
This is a combination that, under its broadest reasonable interpretation, covers performance of limitations expressing mathematical concepts, such as mathematical relationships and mathematical calculations. These fall under the "Mathematical Concepts" grouping of abstract ideas, i.e., mathematical relationships, mathematical formulas or equations, or mathematical calculations (see MPEP 2106.04(a)(2)(I)).
Accordingly, it is concluded that independent claim 11 recites an abstract idea that corresponds to a judicial exception.
[INDEPENDENT CLAIMS – Additional Elements]
Per Step 2A.2. The identified abstract idea is not integrated into a practical application because the additional elements in the independent claims only amount to instructions to apply the judicial exception to a computer, or are a general link to a technological environment (see MPEP 2106.05(f); MPEP 2106.05(h)).
For example, the additional elements “wherein the first model comprises a first neural network” as applied to the detection model, are nothing more than (a) descriptive limitations of claim elements, such as describing the nature, structure and/or content of other claim elements, or (b) general links to the computing environment, which amount to instructions to “apply it,” or equivalent (MPEP 2106.05(f)).
These additional elements of the independent claims merely facilitate carrying out the identified abstract idea (utilizing a deep learning model to automate labeling) and do not serve to integrate the identified abstract idea into a practical application.
Therefore, the additional claim elements of independent claim 11 do not integrate the identified abstract idea into a practical application and the claims are directed to the recited judicial exception.
Per Step 2B. Independent claim 11 does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when the independent claim is reevaluated as a whole, as an ordered combination under the considerations of Step 2B, the outcome is the same as under Step 2A.2.
Overall, it is concluded that independent claim 11 is deemed ineligible.
Per Step 2A.1. Independent claim 15 is rejected under 35 USC 101 because the independent claim is directed to an abstract idea, a judicial exception, without reciting additional elements that integrate the judicial exception into a practical application.
The limitations of the independent claim 15 recite an abstract idea, shown in bold below:
[A] An electronic device, comprising: a communication system; and a processor configured to, based on confidences of labels for data samples in a training dataset received by the communication system,
[B] iteratively train a first model to detect incorrect labels in the training dataset and
a second model to
[C] estimate correct labels corresponding to the incorrect labels, and
[D] generate a data sample in which an incorrect label is corrected using at least one of the first model or the second model,
wherein the processor is further configured to:
[E] train the first model to predict the confidences, including using the corrected data sample generated by correcting the incorrect label based on a corresponding confidence detected by the first model and an estimated corrected label generated by the second model,
[F] train the second model to estimate correct labels for the data samples, including estimating a corrected other label corresponding to another incorrect label detected based on a corresponding confidence generated by the first model with respect to the other incorrect label, and
[G] automatically correct the other incorrect label with the estimated correct other label.
Independent claim 15 recites: iteratively training a first model to detect incorrect labels and a second model to estimate correct labels ([B], [C]); generating data samples in which incorrect labels are corrected based on corresponding confidences ([D], [E]); and training the second model to estimate correct labels and automatically correct incorrect labels ([F], [G]), which, based on the claim language and in view of the application disclosure, represents a process aimed at: utilizing a deep learning model to automate labeling.
This is a combination that, under its broadest reasonable interpretation, covers performance of limitations expressing mathematical concepts, such as mathematical relationships and mathematical calculations. These fall under the "Mathematical Concepts" grouping of abstract ideas, i.e., mathematical relationships, mathematical formulas or equations, or mathematical calculations (see MPEP 2106.04(a)(2)(I)).
Accordingly, it is concluded that independent claim 15 recites an abstract idea that corresponds to a judicial exception.
[INDEPENDENT CLAIMS – Additional Elements]
Per Step 2A.2. The identified abstract idea is not integrated into a practical application because the additional elements in the independent claims only amount to instructions to apply the judicial exception to a computer, or are a general link to a technological environment (see MPEP 2106.05(f); MPEP 2106.05(h)).
For example, the added elements “communication system,” “processor” recite computing elements at a high level of generality, generally linking the use of a judicial exception to a particular technological environment (see MPEP 2106.05(h)), or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).
These additional elements of the independent claims merely facilitate carrying out the identified abstract idea (utilizing a deep learning model to automate labeling) and do not serve to integrate the identified abstract idea into a practical application.
Therefore, the additional claim elements of independent claim 15 do not integrate the identified abstract idea into a practical application and the claims are directed to the recited judicial exception.
Per Step 2B. Independent claim 15 does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when the independent claim is reevaluated as a whole, as an ordered combination under the considerations of Step 2B, the outcome is the same as under Step 2A.2.
Overall, it is concluded that independent claim 15 is deemed ineligible.
[DEPENDENT CLAIMS]
Dependent claim 2, which is representative of dependent claim 16, recites:
iteratively trains the first model to detect incorrect labels and the second model to estimate the correct labels; the iterative training further comprises:
determining the confidence comprising a first probability of each of the labels being correct and a second probability of each of the labels being incorrect;
training the first model by updating first parameters of the first model, to predict confidence,
using the corrected data samples obtained by correcting the incorrect labels in the first data sample.
When considered individually, these added claim elements further elaborate on the abstract idea identified in the independent claims, because the dependent claim continues to recite the identified abstract idea: utilizing a deep learning model to automate labeling. The elements in this dependent claim are comparable to performance of limitations expressing mathematical concepts, such as mathematical relationships and mathematical calculations. These fall under the "Mathematical Concepts" grouping of abstract ideas, i.e., mathematical relationships, mathematical formulas or equations, or mathematical calculations (see MPEP 2106.04(a)(2)(I)).
The dependent claim elements have the same relationship to the underlying abstract idea (utilizing a deep learning model to automate labeling) as outlined in the independent claims analysis above. Thus, the dependent claim elements are not directed to any specific improvements of the independent claims and do not practically or significantly alter how the identified abstract idea would be performed. When considered as a whole, as an ordered combination, the dependent claim further elaborates on the previously identified abstract idea (utilizing a deep learning model to automate labeling).
Therefore, dependent claim 2 (which is representative of dependent claim 16) is deemed ineligible.
Dependent claim 3 recites:
sampling the second data sample comprising the correct labels based on a Bernoulli distribution.
When considered individually, these added claim elements further elaborate on the abstract idea identified in the independent claims, because the dependent claim continues to recite the identified abstract idea: utilizing a deep learning model to automate labeling. The elements in this dependent claim (sampling based on a Bernoulli distribution) are comparable to performance of limitations expressing mathematical concepts, such as mathematical relationships and mathematical calculations. These fall under the "Mathematical Concepts" grouping of abstract ideas, i.e., mathematical relationships, mathematical formulas or equations, or mathematical calculations (see MPEP 2106.04(a)(2)(I)). Thus, it is concluded that these claim elements do not integrate the identified abstract idea (utilizing a deep learning model to automate labeling) into a practical application.
The dependent claim elements have the same relationship to the underlying abstract idea (utilizing a deep learning model to automate labeling) as outlined in the independent claims analysis above. Thus, the dependent claim elements are not directed to any specific improvements of the independent claims and do not practically or significantly alter how the identified abstract idea would be performed. When considered as a whole, as an ordered combination, the dependent claim further elaborates on the previously identified abstract idea (utilizing a deep learning model to automate labeling).
Therefore, dependent claim 3 is deemed ineligible.
Dependent claim 4, which is representative of dependent claim 17, recites:
updating the first parameters of the first model based on a maximum likelihood corresponding to the corrected data samples.
When considered individually, these added claim elements further elaborate on the abstract idea identified in the independent claims, because the dependent claim continues to recite the identified abstract idea: utilizing a deep learning model to automate labeling. The elements in this dependent claim are comparable to receiving or transmitting data, processing data, or storing results, which serve merely to implement the abstract idea using computing components for performing computer functions (corresponding to the words “apply it” or an equivalent), or merely use a computer as a tool to perform the identified abstract idea. Thus, it is concluded that these claim elements do not integrate the identified abstract idea (utilizing a deep learning model to automate labeling) into a practical application (see MPEP 2106.05(f)(2)).
The dependent claim elements have the same relationship to the underlying abstract idea (utilizing a deep learning model to automate labeling) as outlined in the independent claims analysis above. Thus, the dependent claim elements are not directed to any specific improvements of the independent claims and do not practically or significantly alter how the identified abstract idea would be performed. When considered as a whole, as an ordered combination, the dependent claim further elaborates on the previously identified abstract idea (utilizing a deep learning model to automate labeling).
Therefore, dependent claim 4 (which is representative of dependent claim 17) is deemed ineligible.
Dependent claim 5, which is representative of dependent claim 18, recites:
training the first model by applying respective regularization penalties for the confidences to the updated first parameters.
When considered individually, these added claim elements further elaborate on the abstract idea identified in the independent claims, because the dependent claim continues to recite the identified abstract idea: utilizing a deep learning model to automate labeling. The elements in this dependent claim are comparable to performance of limitations expressing mathematical concepts, such as mathematical relationships and mathematical calculations. These fall under the "Mathematical Concepts" grouping of abstract ideas, i.e., mathematical relationships, mathematical formulas or equations, or mathematical calculations (see MPEP 2106.04(a)(2)(I)).
The dependent claim elements have the same relationship to the underlying abstract idea (utilizing a deep learning model to automate labeling) as outlined in the independent claims analysis above. Thus, the dependent claim elements are not directed to any specific improvements of the independent claims and do not practically or significantly alter how the identified abstract idea would be performed. When considered as a whole, as an ordered combination, the dependent claim further elaborates on the previously identified abstract idea (utilizing a deep learning model to automate labeling).
Therefore, dependent claim 5 (which is representative of dependent claim 18) is deemed ineligible.
Dependent claim 6, which is representative of dependent claim 19, recites:
determining initial parameter values of the first model based on a calculated cross-entropy loss.
When considered individually, these added claim elements further elaborate on the abstract idea identified in the independent claims, because the dependent claim continues to recite the identified abstract idea: utilizing a deep learning model to automate labeling. The elements in this dependent claim are comparable to performance of limitations expressing mathematical concepts, such as mathematical relationships and mathematical calculations. These fall under the "Mathematical Concepts" grouping of abstract ideas, i.e., mathematical relationships, mathematical formulas or equations, or mathematical calculations (see MPEP 2106.04(a)(2)(I)).
The dependent claim elements have the same relationship to the underlying abstract idea (utilizing a deep learning model to automate labeling) as outlined in the independent claims analysis above. Thus, the dependent claim elements are not directed to any specific improvements of the independent claims and do not practically or significantly alter how the identified abstract idea would be performed. When considered as a whole, as an ordered combination, the dependent claim further elaborates on the previously identified abstract idea (utilizing a deep learning model to automate labeling).
Therefore, dependent claim 6 (which is representative of dependent claim 19) is deemed ineligible.
Dependent claim 7, which is representative of dependent claim 20, recites:
estimating a probability of the correct other label corresponding to the other incorrect label of the first data sample;
training the second model by updating second parameters of the second model, to estimate the other correct label, using a first data sample comprising the estimated probability of the other correct label.
When considered individually, these added claim elements further elaborate on the abstract idea identified in the independent claims, because the dependent claim continues to recite the identified abstract idea: utilizing a deep learning model to automate labeling. The elements in this dependent claim are comparable to performance of limitations expressing mathematical concepts, such as mathematical relationships and mathematical calculations. These fall under the "Mathematical Concepts" grouping of abstract ideas, i.e., mathematical relationships, mathematical formulas or equations, or mathematical calculations (see MPEP 2106.04(a)(2)(I)).
The dependent claim elements have the same relationship to the underlying abstract idea (utilizing a deep learning model to automate labeling) as outlined in the independent claims analysis above. Thus, the dependent claim elements are not directed to any specific improvements of the independent claims and do not practically or significantly alter how the identified abstract idea would be performed. When considered as a whole, as an ordered combination, the dependent claim further elaborates on the previously identified abstract idea (utilizing a deep learning model to automate labeling).
Therefore, dependent claim 7 (which is representative of dependent claim 20) is deemed ineligible.
Dependent claim 8 recites:
classifying the data samples in the training dataset into a first data sample comprising the incorrect labels and a second data sample comprising the correct labels based on a distribution of the confidences;
wherein the data samples are mixed such that the first data sample comprises
training data and the incorrect label corresponding to the training data and the second data sample comprises the training data and the correct label corresponding to the training data.
When considered individually, these added claim elements further elaborate on the abstract idea identified in the independent claims, because the dependent claim continues to recite the identified abstract idea: utilizing a deep learning model to automate labeling. The elements in this dependent claim are comparable to performance of limitations expressing mathematical concepts, such as mathematical relationships and mathematical calculations. These fall under the "Mathematical Concepts" grouping of abstract ideas, i.e., mathematical relationships, mathematical formulas or equations, or mathematical calculations (see MPEP 2106.04(a)(2)(I)).
The dependent claim elements have the same relationship to the underlying abstract idea (utilizing a deep learning model to automate labeling) as outlined in the independent claims analysis above. Thus, the dependent claim elements are not directed to any specific improvements of the independent claims and do not practically or significantly alter how the identified abstract idea would be performed. When considered as a whole, as an ordered combination, the dependent claim further elaborates on the previously identified abstract idea (utilizing a deep learning model to automate labeling).
Therefore, dependent claim 8 is deemed ineligible.
Dependent claim 12 recites:
wherein the data sample comprises input data and the label corresponding to the input data, and the method further comprises:
generating a correct label corresponding to the incorrect label by applying the input data to a second model, as the label is determined as the incorrect label,
wherein the second model comprises a second neural network that is trained to estimate the correct label corresponding to the data sample comprising the incorrect label.
When considered individually, these added claim elements further elaborate on the abstract idea identified in the independent claims, because the dependent claim continues to recite the identified abstract idea: utilizing a deep learning model to automate labeling. The elements in this dependent claim are comparable to performance of limitations expressing mathematical concepts, such as mathematical relationships and mathematical calculations. These fall under the "Mathematical Concepts" grouping of abstract ideas, i.e., mathematical relationships, mathematical formulas or equations, or mathematical calculations (see MPEP 2106.04(a)(2)(I)).
The dependent claim elements have the same relationship to the underlying abstract idea (utilizing a deep learning model to automate labeling) as outlined in the independent claims analysis above. Thus, the dependent claim elements are not directed to any specific improvements of the independent claims and do not practically or significantly alter how the identified abstract idea would be performed. When considered as a whole, as an ordered combination, the dependent claim further elaborates on the previously identified abstract idea (utilizing a deep learning model to automate labeling).
Therefore, dependent claim 12 is deemed ineligible.
Dependent claims 9, 10, and 13 recite, respectively:
wherein the training data comprises image data of a semiconductor obtained by an image sensor.
wherein the first model and the second model are each trained based on an expectation-maximization (EM) algorithm.
wherein the input data comprises image data of a semiconductor obtained by an image sensor.
These further elements in the dependent claims do not perform any claimed method steps. They describe the nature, structure, and/or content of other claim elements (the training data; the first model; the second model; the input data) and, as such, cannot change the identified abstract idea (utilizing a deep learning model to automate labeling) from a judicial exception into eligible subject matter, because they do not represent significantly more (see MPEP 2106.07). The nature, form, or structure of the other claim elements does not practically or significantly alter how the identified abstract idea would be performed and does not provide more than a general link to a technological environment.
Therefore, dependent claims 9, 10, 13 are deemed ineligible.
When the dependent claims are considered as a whole, as an ordered combination, the claim elements noted above appear to merely apply the abstract concept to a technical environment in a very general sense. The most significant elements, which form the abstract concept, are set forth in the independent claims. The fact that the computing devices recited in the dependent claims facilitate the abstract concept is not enough to confer statutory subject matter eligibility, since their individual and combined significance does not transform the identified abstract concept at the core of the claimed invention into eligible subject matter. Therefore, it is concluded that the dependent claims of the instant application, considered individually or as a whole, as an ordered combination, do not amount to significantly more (see MPEP 2106.07(a)(II)).
In sum, Claims 1-20 are rejected under 35 USC 101 as being directed to non-statutory subject matter.
Claims 11-13 are directed to non-statutory subject matter because the method disclosed is software per se. The applicant is advised to recite a particular hardware component, such as a computer, in the body of the claim language in order to overcome the rejection.
Claims 11-13 recite a method directed to purely mental steps. In order for a method to be considered a "process" under §101, a claimed process must either: (1) be tied to another statutory class (such as a particular apparatus) or (2) transform underlying subject matter (such as an article or materials). Diamond v. Diehr, 450 U.S. 175, 184 (1981); Parker v. Flook, 437 U.S. 584, 588 n.9 (1978); Gottschalk v. Benson, 409 U.S. 63, 70 (1972); Cochrane v. Deener, 94 U.S. 780, 787-788 (1876). If neither of these requirements is met by the claim, the method is not a patent eligible process under §101 and is non-statutory subject matter. Thus, to qualify as a statutory process, the claim should positively recite the other statutory class (the thing or product) to which it is tied, for example, by identifying the apparatus that accomplishes the method steps, or positively recite the subject matter that is being transformed, for example, by identifying the material that is being changed to a different state. The applicant is advised to implement a particular apparatus such as a computer in the body of the claim language to perform the claimed method in order to overcome the rejections.
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claim 3 is rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.
Regarding claim 3 – The claim recites: sampling the second data sample comprising the correct labels based on a Bernoulli distribution. There is insufficient antecedent basis for “the second data sample” in the claim.
Examiner Remarks
No art rejection has been applied to the instant set of claims.
The limitations not disclosed by the identified prior art are:
For independent claims 1, 14 – training a first model to predict confidences of labels for data samples in a training dataset, including using a corrected data sample obtained by correcting an incorrect label based on a corresponding confidence detected by the first model and an estimated corrected label generated by a second model; estimating a correct other label corresponding to another incorrect label detected based on a corresponding confidence generated by the first model with respect to the other incorrect label;
For independent claim 11 – detect the incorrect label comprised in the data sample based on confidence of the label.
For independent claim 15 – train the first model to predict the confidences, including using the corrected data sample generated by correcting the incorrect label based on a corresponding confidence detected by the first model and an estimated corrected label generated by the second model; train the second model to estimate correct labels for the data samples, including estimating a corrected other label corresponding to another incorrect label detected based on a corresponding confidence generated by the first model with respect to the other incorrect label; and automatically correct the other incorrect label with the estimated correct other label.
The following prior art, made of record and not relied upon, is considered pertinent to applicant's disclosure:
US 20200117991 A1 (Suzuki; Kanata et al.), "LEARNING APPARATUS, DETECTING APPARATUS, LEARNING METHOD, AND DETECTING METHOD": A feature model, which calculates a feature value of an input image, is trained on a plurality of first images. First feature values corresponding one-to-one with the first images are calculated using the feature model, and feature distribution information representing a relationship between a plurality of classes and the first feature values is generated. When a detection model which determines, in an input image, each region with an object and a class to which the object belongs is trained on a plurality of second images, second feature values corresponding to regions determined within the second images by the detection model are calculated using the feature model, an evaluation value, which indicates class determination accuracy of the detection model, is modified using the feature distribution information and the second feature values, and the detection model is updated based on the modified evaluation value.
US 20220327808 A1 (Bergen; Leon et al.), "Systems And Methods For Evaluating The Error Rate Of Human-Generated Data": Systems, apparatuses, and methods for more efficiently and effectively determining the accuracy with which a human evaluates a set of data, as this may reduce the error in the assessment of a model's performance. This can be helpful in situations where human inputs are used to confirm the output of a machine learning generated classification and in situations where it is desired to evaluate the accuracy of data that may have been labeled or annotated by a human curator. This may assist in reducing the need for new validation/test data when evaluating a new model. The system and methods described can be used to evaluate the accuracy of a trained Machine Learning (ML) model, and as a result, allow a comparison between models based on different ML algorithms.
US 20210406644 A1 (SALMAN; Nader et al.), "ACTIVE LEARNING FRAMEWORK FOR MACHINE-ASSISTED TASKS": An active learning framework is provided that employs a plurality of machine learning components that operate over iterations of a training phase followed by an active learning phase. In each iteration of the training phase, the machine learning components are trained from a pool of labeled observations. In the active learning phase, the machine learning components are configured to generate metrics used to control sampling of unlabeled observations for labeling such that newly labeled observations are added to a pool of labeled observations for the next iteration of the training phase. The machine learning components can include an inspection (or primary) learning component that generates a predicted label and uncertainty score for an unlabeled observation, and at least one additional component that generates a quality metric related to the unlabeled observation or the predicted label. The uncertainty score and quality metric(s) can be combined for efficient sampling of observations for labeling.
US 20210245659 A1 (Golov; Gil), "ARTIFICIAL INTELLIGENCE-BASED PERSISTENCE OF VEHICLE BLACK BOX DATA": The disclosed embodiments are directed to improving the persistence of pre-accident data in vehicles. In one embodiment a method is disclosed comprising receiving events broadcast over a vehicle bus; classifying the events using a machine learning model, the classifying comprising indicating that a collision is imminent; and copying data from a cyclic buffer of a black box device into a long-term storage device in response to the classifying.
US 20220224659 A1 (El Ghazzal; Sammy), "AUTOMATED MESSAGING REPLY-TO": An automated messaging reply-to system can automatically select which message a potential reply message is replying to. The automated messaging reply-to system can obtain a message thread, a potential reply message, and a context. The automated messaging reply-to system can filter the message thread and generate model inputs based on the remaining messages, the potential reply message, and the context. The automated messaging reply-to system can apply the model input to a machine learning model, which can generate reply scores for the remaining messages. After generating reply scores, the automated messaging reply-to system can determine whether the remaining message with the highest reply score qualifies as an originating message being replied to. The automated messaging reply-to system can cause display of the potential reply message as a reply-to for the determined originating message.
US 20200311616 A1 (Rajkumar; Nareshkumar et al.), "EVALUATING ROBOT LEARNING": Methods, systems, and apparatus, including computer programs encoded on computer storage media for evaluating robot learning. In some implementations, one or more computers receive object classification examples from a plurality of robots. Each object classification example includes (i) an embedding that a robot generated using a machine learning model, and (ii) an object classification corresponding to the embedding. The object classification examples are evaluated based on a similarity of the received embeddings with respect to other embeddings. A subset of the object classification examples is selected based on the evaluation of the quality of the embeddings. The subset of the object classification examples is distributed to the robots in the plurality of robots.
US 20230315879 A1 (JAVIDI; Bahram et al.), "SYSTEMS AND METHODS FOR PROTECTING MACHINE LEARNING MODELS AGAINST ADVERSARIAL ATTACKS": Embodiments pertain to systems configured to and methods for analyzing a scene comprising one or more objects. The system may be configured to perform the following: obtaining a set of optically encrypted image data describing a scene, including applying an optical manipulation to light incoming to an image acquisition device, whereby the image acquisition device outputs the set of optically encrypted image data, and wherein the optical manipulation is based on an encryption key; providing the set of optically encrypted image data to a machine learning model trained in accordance with the encryption key; and receiving from the machine learning model a prediction related to the scene.
US 20220058347 A1 (Singaraju; Gautam et al.), "TECHNIQUES FOR PROVIDING EXPLANATIONS FOR TEXT CLASSIFICATION": A chatbot system is configured to execute code to perform determining, by the chatbot system, a classification result for an utterance and one or more anchors, each anchor of the one or more anchors corresponding to one or more anchor words of the utterance. For each anchor of the one or more anchors, one or more synthetic utterances are generated, and one or more classification results for the one or more synthetic utterances are determined. A report is generated by the chatbot system comprising a representation of a particular anchor of the one or more anchors, the particular anchor corresponding to a highest confidence value among the one or more anchors. The one or more synthetic utterances may be used to generate a new training dataset for training a machine-learning model. The training dataset may be refined according to a threshold confidence value to filter out datasets for training.
US 20210174231 A1 (SAWADA; Azusa et al.), "MODEL GENERATION DEVICE, MODEL GENERATION METHOD, AND NON-TRANSITORY RECODING MEDIUM": Disclosed is a model generation device capable of mitigating the risk of overlooking a phenomenon of interest in machine learning. The model generation device determines whether or not a label of a first data is similar to a label of a second data. The model generation device assigns the label of the second data to the first data when determining that the label of the first data is similar to the label of the second data based on a degree of similarity between observation information representing a state where the first data is observed and observation information representing a state where the second data is observed. The model generation device calculates model representing a relevance between data information containing the first data and the second data and label information containing the assigned label and the label of the second data.
US 10305766 B1 (Zhang; Ce et al.), "Coexistence-insensitive presence detection": A system and method include processing logic receiving, from a wireless transceiver of a first device, first data indicative of channel state information (CSI) of a first communication link between the wireless transceiver and a wireless transmitter of a second device, the first device and the second device being located in a building. The logic pre-processes the first data to generate input vectors composed of statistical parameter values derived from sets of discrete samples of the first data. The logic processes, through a long short-term memory (LSTM) layer of a neural network, the input vectors to generate multiple hidden state values of the neural network. The logic processes, through a set of additional layers of the neural network, respective hidden state values of the multiple hidden state values to determine that a human is present in the building.
US 12148417 B1 (Cardella; Aidan Thomas et al.), "Label confidence scoring": Devices and techniques are generally described for confidence score generation for label generation. In some examples, first data may be received from a first computing device. In various further examples, first label data classifying at least one aspect of the first data may be received. First metadata associated with how the first label data was generated may be received. In some cases, the first label data may be generated by a first user. In various examples, a first machine learning model may generate a first confidence score associated with the first label data based at least in part on the first data and second data related to label generation by the first person. In various examples, output data comprising the first confidence score may be sent to the first computing device.
US 11678011 B1 (Fu; Sai-Wai et al.), "Mobile distributed security response": The invention concerns a video feed monitoring app configured to be implemented with an interface and a processor. The interface may be configured to receive requests from an operator and present visual content to a display. The processor may be configured to process the requests and update the display. The video feed monitoring app may be configured to receive video streams from smart security devices, receive a priority signal, select a subset of the video streams in response to the priority signal and a user preference and arrange the subset of the video streams on the display. The priority signal may be generated externally based on events detected. The subset of the video streams may be updated based on the priority signal. Updating the subset of the video streams may comprise displaying the video streams that comprise events and removing the video streams that do not comprise events.
Inquiries
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Radu Andrei, whose telephone number is 313-446-4948. The examiner can normally be reached Monday – Friday, 8:30 am – 5:00 pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patrick McAtee, can be reached at 571-272-7575. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
As set forth in MPEP 502.03, communications via Internet e-mail are at the discretion of the applicant. Without a written authorization from the applicant in place, the USPTO will not respond via Internet e-mail to any Internet correspondence that contains information subject to the confidentiality requirement set forth in 35 U.S.C. 122. A paper copy of such correspondence will be placed in the appropriate patent application. The following is a sample authorization form which may be used by applicant:
“Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with me concerning any subject matter of this application by electronic mail. I understand that a copy of these communications will be made of record in the application file.”
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Status information for published applications may be obtained from the Patent Center information webpage. Status information for unpublished applications is available through the Patent Center information webpage to registered users only.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or CANADA) or 571-272-1000.
Any response to this action should be mailed to:
Commissioner of Patents and Trademarks
P.O. Box 1450
Alexandria, VA 22313-1450
or faxed to 571-273-8300
/Radu Andrei/
Primary Examiner, Art Unit 3698