DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is responsive to the communication filed on 12 March 2026. Claims 23-42 are pending in the case. Claims 23, 25, 26, 28, 29, 30, 32, 33, 35, 36, 37, 39, 40, and 42 were amended. Claims 23, 30, and 37 are the independent claims. This action is non-final.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12 March 2026 has been entered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 23-42 are rejected under 35 U.S.C. 103 as being unpatentable over Ramos et al. (US 2019/0244113 A1) in view of Williams et al. (US 2015/0254555 A1), further in view of Kwant et al. (US 10,452,956 B2).
Regarding claim 23, Ramos teaches a computer-implemented method, comprising:
obtaining, at a cloud computing environment, a request to initiate an interactive labeling session for at least a portion of machine learning data set comprising images of items, wherein the interactive labeling session presents the images of the items, and generates labels for the items, via one or more programmatic interfaces (see Ramos, Figure 1-3, Paragraphs [0033], [0035], [0036], [0038], “Upon selection of a data item 108 in the exploration pane 100, the user may, for instance, be allowed to view the contents of the data item and/or its associated prediction score and (if existent) label, and to assign a label to an unlabeled data item or correct a label of an already labeled data item. … The method 200 involves setting up a learning algorithm and defining a feature set to be used in the classification (operation 202). The data items within a dataset 204 provided as input to the method 200 can then be “featurized,” i.e., their respective feature values can be computed (operation 206). Further, to form a training dataset, a subset of the data items in dataset 204 are labeled, e.g., by manually classifying them (in the case of a binary classifier, as either positive or negative for the target concept) (operation 208).” [A user feedback session may be initiated to label the data items.]);
However, Ramos does not explicitly teach:
obtaining, at a cloud computing environment, a request to initiate an interactive labeling session for at least a portion of machine learning data set comprising images of items, wherein the interactive labeling session presents the images of the items, and generates labels for the items, via one or more programmatic interfaces;
Williams teaches:
obtaining, at a cloud computing environment, a request to initiate an interactive labeling session for at least a portion of machine learning data set comprising images of items, wherein the interactive labeling session presents the images of the items, and generates labels for the items, via one or more programmatic interfaces (see Williams, Paragraphs [0046], [0223], [0230], “classification server computer 116, and data sensor computer 118, and enterprise server computer 120 may be implemented using one or more cloud instances in one or more cloud networks. … The Training Corpus 508 and Testing Corpus 510 may be populated using digital photographs of the types and specific brands of goods that are likely to be sold by a group of sellers using the system. … The system may also record and present uncertain matches for a Domain Expert to administratively review through User Interface 532, independently of the review by sellers. Domain Expert decision and data adjustments may be recorded and incorporated into Training Corpus 508 or the item database, whichever is appropriate.” [A cloud network (i.e., cloud computing environment) is incorporated. The training corpus (i.e., machine learning data set) may include digital photographs (i.e., images of items). The system may present the uncertain matches in order to label them.]);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Ramos (teaching interactive semantic data exploration for error discovery) in view of Williams (teaching classifying data with deep learning neural records incrementally refined through expert input), and arrived at a method that incorporates a cloud computing environment. One of ordinary skill in the art would have been motivated to make such a combination for the purpose of improving the classification of data with a deep learning neural network (see Williams, Paragraph [0003]). In addition, both references (Ramos and Williams) are analogous art and are directed to the same field of endeavor, namely machine learning. The close relation between the references suggests a reasonable expectation of success.
The combination of Ramos and Williams further teaches:
generating, based at least in part on one or more machine learning algorithms, one or more labels for at least a first item of the portion of the machine learning data set (see Ramos, Figure 1-3, Paragraphs [0033], [0035], [0036], [0038], “The learning algorithm is then used to “train” the concept classifier model, that is, to select a classification function and/or set the values of its adjustable parameters, based on the labeled data items and their associated sets of feature values (operation 210). The trained concept classifier model can be used to make classification predictions for unlabeled as well as labeled data items (operation 212), and the results can be inspected by a human model developer to discover actual and potential prediction errors (operation 214);” [A machine learning algorithm may be used to generate labels for a data item.]);
presenting, from the cloud computing environment, via one or more programmatic interfaces subsequent to receiving the request, (a) images of at least first and second items of the portion of the machine learning data set (see Ramos, Figure 1-3, Paragraphs [0033], [0035], [0036], [0038], “the user interface of FIGS. 1A and 1B may serve visual data exploration for this purpose. To improve the performance of the classifier model and correct discovered errors, the human model developer can add labeled data items to the training set and/or features to the feature set. The method 200 may continue in a loop until the desired classifier performance is achieved.” [Images representing the data items from a dataset may be presented.] Also, see Williams, Paragraph [0230], “The system may also record and present uncertain matches for a Domain Expert to administratively review through User Interface 532, independently of the review by sellers. Domain Expert decision and data adjustments may be recorded and incorporated into Training Corpus 508 or the item database, whichever is appropriate.” [The system may present the uncertain matches in order to label them.]);
receiving, via the one or more programmatic interfaces, selection of at least the first and second items (see Ramos, Figure 1-3, Paragraphs [0033], [0035], [0036], [0038], “Selection of a cluster of data items may allow the user to view a list of items within the cluster and/or the general composition of the cluster (which may be updated, e.g., in cluster composition field 114), or to navigate into the selected cluster within the star coordinate space 102.” [A selection of a cluster of data items (i.e., selection of at least the first and second items) may be received]);
However, the combination of Ramos and Williams does not explicitly teach:
receiving, via the one or more programmatic interfaces, an indication that at least a first label is to be applied to the currently selected at least first and second items;
Kwant teaches:
receiving, via the one or more programmatic interfaces, an indication that at least a first label is to be applied to the currently selected at least first and second items (see Kwant, [Column 10, Lines 13-23], “Alternatively, the feature detector 103 can select a subset of the data items or images in the training data set for automatic labeling. By selecting a representative subset, the feature detector 103 can advantageously reduce the number of data items or images it needs to process, thereby reducing the computing, bandwidth, memory, etc. resources associated with evaluating training data sets according to the various embodiments.” [A plurality of data items may be labeled. The number of data items to be processed is reduced when evaluating training data sets (i.e., an indication that at least a first label is to be applied to the currently selected at least first and second items).]);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Ramos (teaching interactive semantic data exploration for error discovery) in view of Williams (teaching classifying data with deep learning neural records incrementally refined through expert input), further in view of Kwant (teaching a method, apparatus, and system for providing quality assurance for training a feature prediction model), and arrived at a method that incorporates labeling a plurality of data items. One of ordinary skill in the art would have been motivated to make such a combination for the purpose of reducing computing, bandwidth, and memory resources (see Kwant, [Column 10, Lines 13-23]). In addition, the references (Ramos, Williams, and Kwant) are analogous art and are directed to the same field of endeavor, namely machine learning. The close relation between the references suggests a reasonable expectation of success.
The combination of Ramos, Williams, and Kwant further teaches:
and storing, at the cloud computing environment and based at least in part on the received indication, an indication of a relationship between the at least a first label and the at least first and second items (see Ramos, Figure 1-3, Paragraphs [0033], [0035], [0036], [0038], “The method 200 may continue in a loop until the desired classifier performance is achieved.” [The method continuing in a loop indicates that the labels associated with the data items are stored in order to achieve the desired classifier performance.] Also, see Kwant, [Column 10, Lines 13-23], “Alternatively, the feature detector 103 can select a subset of the data items or images in the training data set for automatic labeling. By selecting a representative subset, the feature detector 103 can advantageously reduce the number of data items or images it needs to process, thereby reducing the computing, bandwidth, memory, etc. resources associated with evaluating training data sets according to the various embodiments.” [The number of data items to be processed is reduced when evaluating training data sets (i.e., storing an indication of a relationship between the at least a first label and the at least first and second items).]).
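Examiner's note: solely to illustrate the examiner's understanding of the batch-labeling arrangement recited in claim 23, a brief hypothetical Python sketch follows. The identifiers (LabelingSession, apply_label, and the sample item data) are the examiner's own illustrations and are not asserted to appear in Ramos, Williams, or Kwant.

    from dataclasses import dataclass, field
    from typing import Dict, List, Set

    @dataclass
    class LabelingSession:
        # Holds the images presented in the session and the stored
        # label-to-item relationships.
        images: Dict[str, bytes]
        labels: Dict[str, Set[str]] = field(default_factory=dict)
        selected: List[str] = field(default_factory=list)

        def present(self) -> Dict[str, bytes]:
            # "presenting ... images of at least first and second items"
            return self.images

        def select(self, item_ids: List[str]) -> None:
            # "receiving ... selection of at least the first and second items"
            self.selected = list(item_ids)

        def apply_label(self, label: str) -> None:
            # "an indication that at least a first label is to be applied to
            # the currently selected at least first and second items"
            self.labels.setdefault(label, set()).update(self.selected)

    # Storing an indication of a relationship between the label and the items.
    session = LabelingSession(images={"item-1": b"<image bytes>", "item-2": b"<image bytes>"})
    session.select(["item-1", "item-2"])
    session.apply_label("defective")
    print(session.labels)   # e.g. {'defective': {'item-1', 'item-2'}}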
Regarding claim 24, Ramos in view of Williams, further in view of Kwant teaches all the limitations of claim 23. Ramos further teaches:
assigning respective ranks to a plurality of items of the portion of the machine learning data set, including the first item, based at least in part on estimated learning contributions of individual ones of the plurality of items to a training iteration of the machine learning model; and determining, based at least in part on a rank assigned to the first item, to present an indication of the first item via the one or more programmatic interfaces, wherein the indication comprises the image (see Ramos, Figure 1, Paragraphs [0002], [0028], [0034], [0036], [0042], “In association with the thumbnail images 152 or other representations of the individual data items, further information about the respective data items may be displayed; such information may include, e.g., the prediction scores assigned by the classifier (as shown as element 156) and/or associated confidence scores, the human-assigned labels (indicating whether an item is positive or negative for the target concept), anchor concepts to which the data items are attracted, and the like. The thumbnail images 152 or other representations of the data items may be ordered by the prediction score (as shown in FIG. 1B), or by some other parameter. … The user interface depicted in FIGS. 1A and 1B can be beneficial for data exploration in the context of interactively building a concept classifier. Given a set of features (i.e., functions that map data items onto scalar real values) to be used for a given classification task, a concept classifier generates classification predictions (e.g., for a binary classifier, in the form of prediction scores in conjunction with a decision boundary that demarcates positive and negative predictions for the target concept) for data items based on their associated feature values. In other words, a classifier is a function that maps sets of feature values onto respective prediction scores. … The confidence associated with a prediction, which can be quantified in a confidence score and generally increases the farther away a prediction score is from the decision boundary, can be a useful criterion.” [The data items may be ordered based on a prediction score assigned by a classifier model associated with a confidence score (i.e., respective rank based at least in part on estimated learning contributions). The data items may be presented as images.]).
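Examiner's note: the ordering of data items by prediction score and confidence described in Ramos may be illustrated, purely hypothetically, by ranking unlabeled items by the distance of their prediction scores from the decision boundary, so that the least-confident items (those expected to contribute most to the next training iteration) are surfaced first. The following sketch and its identifiers are the examiner's own and are not taken from the cited references.

    def rank_by_estimated_contribution(prediction_scores, boundary=0.5):
        # Sort item ids so the least-confident predictions (scores closest to
        # the decision boundary) come first; these are treated as the items
        # whose labels contribute most to the next training iteration.
        return sorted(prediction_scores,
                      key=lambda item: abs(prediction_scores[item] - boundary))

    scores = {"item-1": 0.62, "item-2": 0.91, "item-3": 0.48}
    print(rank_by_estimated_contribution(scores))   # ['item-3', 'item-1', 'item-2']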
Regarding claim 25, Ramos in view of Williams, further in view of Kwant teaches all the limitations of claim 23. Ramos further teaches:
storing a second label, received via the one or more programmatic interfaces, for a third item of the portion of the machine learning data set, wherein the second label corresponds to a second class of the plurality of classes (see Ramos, Figure 1-3, Paragraphs [0033], [0035], [0036], [0038], “Further, to form a training dataset, a subset of the data items in dataset 204 are labeled, e.g., by manually classifying them (in the case of a binary classifier, as either positive or negative for the target concept) (operation 208). … To improve the performance of the classifier model and correct discovered errors, the human model developer can add labeled data items to the training set and/or features to the feature set. The method 200 may continue in a loop until the desired classifier performance is achieved.” [The data items may be manually labeled as positive or negative (i.e., the second label corresponds to a second class of the plurality of classes).]);
and presenting, from the cloud computing environment via the one or more programmatic interfaces, prior to receiving the second label, an indication of a correlation between an attribute of the third item and membership of the third item in a particular class of the plurality of classes (see Ramos, Paragraphs [0038], [0045], “The trained concept classifier model can be used to make classification predictions for unlabeled as well as labeled data items (operation 212), and the results can be inspected by a human model developer to discover actual and potential prediction errors (operation 214); the user interface of FIGS. 1A and 1B may serve visual data exploration for this purpose. To improve the performance of the classifier model and correct discovered errors, the human model developer can add labeled data items to the training set and/or features to the feature set.” [Figure 1A shows different data items that are associated with different labels (i.e., classes).]).
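Examiner's note: as a purely illustrative aid regarding the presentation of a correlation between an item attribute and class membership, the following hypothetical sketch computes, for a given attribute, the fraction of items predicted to fall in a particular class. The data and identifiers are the examiner's own and do not appear in the cited references.

    def attribute_class_correlation(items, attribute, target_class):
        # Fraction of items carrying `attribute` whose predicted class is
        # `target_class`; this fraction can be presented before a label is
        # solicited for a given item.
        with_attr = [it for it in items if attribute in it["attributes"]]
        if not with_attr:
            return 0.0
        hits = sum(1 for it in with_attr if it["predicted_class"] == target_class)
        return hits / len(with_attr)

    items = [
        {"attributes": {"red"}, "predicted_class": "positive"},
        {"attributes": {"red"}, "predicted_class": "positive"},
        {"attributes": {"blue"}, "predicted_class": "negative"},
    ]
    print(attribute_class_correlation(items, "red", "positive"))   # 1.0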
Regarding claim 26, Ramos in view of Williams, further in view of Kwant teaches all the limitations of claim 23. Ramos further teaches:
storing a second label, received via the one or more programmatic interfaces, for a third item of the portion of the machine learning data set (see Ramos, Figure 1-3, Paragraphs [0033], [0035], [0036], [0038], “The method 200 may continue in a loop until the desired classifier performance is achieved.” Also, see Kwant, [Column 10, Lines 13-23], “Alternatively, the feature detector 103 can select a subset of the data items or images in the training data set for automatic labeling. By selecting a representative subset, the feature detector 103 can advantageously reduce the number of data items or images it needs to process, thereby reducing the computing, bandwidth, memory, etc. resources associated with evaluating training data sets according to the various embodiments.” [The process of labeling and storing may be repeated.]);
and receiving, at the cloud computing environment via the one or more programmatic interfaces, an indication of a justification for the second label (see Ramos, Figures 1A, 1B, Paragraph [0043], “The concept classifier is then retrained based on the expanded training dataset and/or modified feature set, and the visual attributes of the visual representations of the data items are updated to reflect updated classification predications and/or labels (operation 318). The user may continue the process of manipulating anchors (operation 306), inspecting data in the aggregate and individually (operations 310, 312), and providing feedback to the concept classifier (operations 314, 316) in, generally, any order to explore the dataset and improve the concept classifier.” [The labeled feedback may be visualized (i.e., indication of a justification).]).
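Examiner's note: the recitation of receiving an indication of a justification for a label may be illustrated by a simple record that carries the label together with a free-text reason, as in the following hypothetical sketch; the field names and sample values are the examiner's own illustrations.

    from dataclasses import dataclass

    @dataclass
    class LabelSubmission:
        # A label together with the provider's stated reason for choosing it.
        item_id: str
        label: str
        justification: str

    submission = LabelSubmission(
        item_id="item-3",
        label="scratched",
        justification="visible scratch across the upper-left corner of the image",
    )
    print(submission.justification)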
Regarding claim 27, Ramos in view of Williams, further in view of Kwant teaches all the limitations of claim 26. Ramos further teaches:
presenting, from the cloud computing environment via the one or more programmatic interfaces, the indication of the justification (see Ramos, Figures 1A, 1B, Paragraph [0043], “The concept classifier is then retrained based on the expanded training dataset and/or modified feature set, and the visual attributes of the visual representations of the data items are updated to reflect updated classification predications and/or labels (operation 318). The user may continue the process of manipulating anchors (operation 306), inspecting data in the aggregate and individually (operations 310, 312), and providing feedback to the concept classifier (operations 314, 316) in, generally, any order to explore the dataset and improve the concept classifier.” [The labeled feedback may be visualized (i.e., indication of a justification).]).
Regarding claim 28, Ramos in view of Williams, further in view of Kwant teaches all the limitations of claim 23. Ramos further teaches:
obtaining, at the cloud computing environment from a first label provider via the one or more programmatic interfaces, a filter criterion to be used to select one or more additional items of the portion of the machine learning data set for which labels are to be obtained; and presenting, by the cloud computing environment to the first label provider via the one or more programmatic interfaces, a representation of a second item, wherein the second item is selected from the portion of the machine learning data set using the filter criterion (see Ramos, Figure 3, Paragraphs [0004], [0006], [0027], [0032]-[0033], [0043], [0044], “The dataset displayed in the user interface may generally include labeled data items (i.e., data items that have been manually classified as positive or negative for the target concept, which may include the training data used to train the classifier model) as well as unlabeled data items. … the user interface may include, alongside an “exploration pane” showing the dataset in the star coordinate space, an “items detail pane” that lists the data items visually represented in the star coordinate space, or a subset thereof, e.g., in the form of thumbnail images with associated prediction scores, and optionally ordered by prediction score. The user may select a data item from the list, or in the exploration pane, to view its contents and label the data item (if previously unlabeled, or to correct a previously assigned incorrect label). Upon assignment of a label, the respective data item may be added to the training dataset (or, in the case of a label correction, the new label may be substituted for the old label), and the concept classifier may be retrained based on the updated training set. The visual representations of the data items can then be updated to reflect updated predictions by the concept classifier.” [The data items may be selected, labeled as positive or negative (i.e., filter criterion), and added to the training set through the user interface.]).
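Examiner's note: the use of a label-provider-supplied filter criterion to select additional items for labeling may be illustrated, hypothetically, as a predicate applied to the unlabeled portion of the data set, as in the sketch below; all identifiers and sample values are the examiner's own illustrations.

    def select_for_labeling(items, criterion):
        # Return the unlabeled items that satisfy the provider-supplied
        # filter criterion.
        return [item for item in items if item.get("label") is None and criterion(item)]

    items = [
        {"id": "item-7", "label": None, "prediction_score": 0.97},
        {"id": "item-8", "label": None, "prediction_score": 0.41},
        {"id": "item-9", "label": "positive", "prediction_score": 0.88},
    ]

    # e.g. "only present unlabeled items the classifier scores above 0.9"
    def high_score_only(item):
        return item["prediction_score"] > 0.9

    print(select_for_labeling(items, high_score_only))   # [{'id': 'item-7', ...}]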
Regarding claim 29, Ramos in view of Williams, further in view of Kwant teaches all the limitations of claim 23. Ramos further teaches:
storing a second label, received via the one or more programmatic interfaces, for the first item of the portion of the machine learning data set (see Ramos, Figure 1-3, Paragraphs [0033], [0035], [0036], [0038], “The method 200 may continue in a loop until the desired classifier performance is achieved.” Also, see Kwant, [Column 10, Lines 13-23], “Alternatively, the feature detector 103 can select a subset of the data items or images in the training data set for automatic labeling. By selecting a representative subset, the feature detector 103 can advantageously reduce the number of data items or images it needs to process, thereby reducing the computing, bandwidth, memory, etc. resources associated with evaluating training data sets according to the various embodiments.” [The process of labeling and storing may be repeated.]);
and training, at the cloud computing environment, in one or more training iterations, the machine learning model using labeled versions of items of the portion of the machine learning data set, wherein a labeled version of the first item which includes the second label is used during a particular training iteration of the one or more training iterations, and wherein the second label is received at the cloud computing environment asynchronously with respect to the particular training iteration (see Ramos, Figure 3, Paragraphs [0004], [0006], [0027], [0032]-[0033], [0043], [0044], “The user may select a data item from the list, or in the exploration pane, to view its contents and label the data item (if previously unlabeled, or to correct a previously assigned incorrect label).” [The data items may be asynchronously selected, labeled, and added to the training set through the user interface.]).
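Examiner's note: the recitation that a label is received asynchronously with respect to a training iteration may be illustrated, hypothetically, by queuing labels as they arrive and folding whatever has accumulated into the next iteration, as in the sketch below; the identifiers are the examiner's own illustrations.

    import queue

    label_queue = queue.Queue()   # labels may arrive here at any time

    def submit_label(item_id, label):
        # Called whenever a label provider sends a label, independent of training.
        label_queue.put((item_id, label))

    def run_training_iteration(labeled_items):
        # Drain whatever labels have arrived since the previous iteration,
        # then (hypothetically) retrain on the updated labeled set.
        while not label_queue.empty():
            item_id, label = label_queue.get_nowait()
            labeled_items[item_id] = label
        # train_model(labeled_items)   # hypothetical training call

    labeled = {}
    submit_label("item-1", "positive")   # arrives between iterations
    run_training_iteration(labeled)      # the next iteration picks up the new label
    print(labeled)                       # {'item-1': 'positive'}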
Regarding claims 30-42, Ramos in view of Williams, further in view of Kwant teaches all of the limitations of claims 23-29 in method form rather than in system and non-transitory computer-accessible storage media form. Ramos also discloses a system (Paragraph [0060]) and a non-transitory computer-accessible storage medium (Paragraph [0060]). Therefore, the supporting rationale of the rejection of claims 23-29 applies equally to the corresponding elements of claims 30-42.
Response to Arguments
Applicant’s arguments, filed 12 March 2026, have been fully considered but are moot in light of the new grounds of rejection.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUSAM TURKI SAMARA whose telephone number is (571)272-6803. The examiner can normally be reached Monday - Thursday and on alternate Fridays.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Apu Mofiz, can be reached at (571) 272-4080. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HUSAM TURKI SAMARA/
Examiner, Art Unit 2161
/APU M MOFIZ/Supervisory Patent Examiner, Art Unit 2161