DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the
first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 05/27/2025 has been entered.
Response to Amendment
The amendments filed 05/27/2025 have been entered. The status of the claims is as follows:
Claims 1-21 remain pending in the application.
Claim 21 is new.
Claims 1 and 9 are amended.
Response to Arguments
In reference to the rejections under 35 U.S.C. 101:
Applicant’s arguments, see pages 6-10, filed 05/27/2025, with respect to the claim rejections under 35 U.S.C. 101 have been fully considered and are persuasive. The rejections under 35 U.S.C. 101 have been withdrawn.
In reference to the rejections under 35 U.S.C. 103:
Applicant’s arguments, see Remarks, pages 10-12, filed 05/27/2025, with respect to the rejection(s) of claim(s) under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Absher et al. (US 10,585,121 B2) and Peterson et al. (US 2017/0357627 A1).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-8 and 10-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bakis et al. (“Performance of natural language classifiers in a question-answering system”) (hereafter referred to as “Bakis”), in view of Hossin & Sulaiman (“A Review on Evaluation Metrics for Data Classification Evaluations”) (hereafter referred to as “Hossin”) and Ishikawa et al. (US 7,083,091 B2) (hereafter referred to as “Ishikawa”), and further in view of Absher et al. (US 10,585,121 B2) and Peterson et al. (US 2017/0357627 A1).
Regarding Claim 1, Bakis discloses:
A classifier evaluation device for evaluating a classification device performing classification of input data, the classifier evaluation device comprising: obtaining a data count of the input data to be made a classification target (Bakis, Page 2, Col. 2, ¶[3]: “In the Evaluation phase, the classifier classifies incoming questions into answer classes, and associates each such classification with a confidence value.”, and Page 5, Col. 2, ¶[3]: “The baseline accuracy is the mean accuracy of a classifier trained with a random sample of 300 questions.”) [The “random sample of 300 questions” represents the data count of input data to be made a classification target (i.e., the answer classes)]
predicting, by the classification device, using the input data, a classification result, (Bakis, Page 2, Col. 2, ¶[3]: “In Figure 1, we present the overall flow for the classification application, which consists of two phases: i) a training phase where question-answer mappings or answer classes are created, to train the classifier, and ii) an evaluation phase where the classifier is used to classify incoming questions into those answer classes.”) [Examiner’s note: “classification device” i.e., “classification application”, “predicting a classification result” i.e., “classifying incoming questions into answer classes”]
determining, based on the correction information, a correction frequency of the classification result of the classification device; and (Bakis, Page 7, Col. 1, ¶[4]: “Since our task is multi-class classification, a natural tool to quantify the classifier performance is the confusion matrix, which captures counts of how many questions (which belong to class i) have been answered correctly, and how many have been assigned incorrectly to each class j.”) [The number of correctly classified questions is the “correction frequency of the classifiers”; the correct answers given to those questions represent the “correction information on classification results for the classifiers”.]
Bakis fails to disclose:
a processor that executes a program, wherein the processor causes a computer to execute a method comprising:
wherein the classification device comprises a first classification model for determining a class of the input data;
obtaining correction information when the classification result of the classification device is corrected by a user via a correction interface;
replacing the first classification model with a second classification model in which a correction rate calculated from the correction frequency and the data count of input data exceeds a preset threshold, wherein the second classification model is distinct from the first classification model,
wherein the correction interface is an interactive interface in which the user further modifies the classification result.
However, Ishikawa explicitly discloses:
a processor that executes a program, wherein the processor causes a computer to execute a program generation method comprising: (Ishikawa, Col. 5, Lines 29-30: “The commodity information management apparatus 10 is provided with a CPU (Central Processing Unit) 10a”, Col. 5, Lines 36-38: “The CPU 10a is a central processing device that controls the entire system of the commodity information management apparatus”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Bakis and Ishikawa. Bakis teaches different metrics used to characterize the performance of classifiers, along with a tool to optimize user experience with classifier-based Q/A systems. Ishikawa teaches a user interface designed to manage and refine the outcomes of a classification process, giving the manager the ability to add new classification results, remove incorrect ones, and enter corrected data. One of ordinary skill would have been motivated to combine Bakis and Ishikawa because MPEP 2143 sets forth the Supreme Court rationales for obviousness, including: (D) applying a known technique to a known device (method, or product) ready for improvement to yield predictable results; (E) “obvious to try”: choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success; and (F) known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art.
However, Hossin explicitly discloses:
wherein the classification device comprises a first classification model for determining a class of the input data; (Hossin, Page 2, ¶[2]: “In this case, the evaluation metric task is to determine the best classifier among different types of trained classifiers which focus on the best future performance (optimal model) when tested with unseen data.”, Page 2, ¶[1]: “Through accuracy, the trained classifier is measured based on total correctness which refers to the total of instances that are correctly predicted by the trained classifier when tested with the unseen data.”) [Examiner’s note: Hossin discloses different types of trained classifiers which includes “a first classification model”; “determining a class of the input data” is being interpreted as the trained classifier correctly predicts the total of instances]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Bakis and Hossin. Bakis teaches different metrics used to characterize the performance of classifiers, along with a tool to optimize user experience with classifier-based Q/A systems. Hossin teaches evaluation metrics for data classification evaluations. One of ordinary skill would have been motivated to combine Bakis and Hossin to apply the advantages of accuracy or error-rate metrics to the classifier evaluation process, as this metric is easy to compute with low complexity, applicable to multi-class and multi-label problems, provides easy-to-use scoring, and is easy for humans to understand. (Hossin, Page 3, Section 2.1, ¶[3])
However, Absher explicitly discloses:
obtaining correction information when the classification result of the classification device is corrected by a user via a correction interface; (Absher, Col. 4, Lines 3-11: “The oscilloscope 110 also includes controls 113 for receiving user input, for example alternating current (AC) or DC coupling controls, trigger level controls, trigger-level hysteresis controls to alter trigger-level hysteresis thresholds and margins, user selections, user over-rides, etc. By employing the controls 113, a user can train or employ the oscilloscope to classify waveforms or buses and accept or reject suggested measurements or actions presented on the display 111 as a result of such classifications.”)
wherein the correction interface is an interactive interface in which the user further modifies the classification result. (Absher, Col. 4, Lines 8-11: “By employing the controls 113, a user can train or employ the oscilloscope to classify waveforms or buses and accept or reject suggested measurements or actions presented on the display 111 as a result of such classifications.”, Col. 5, Lines 8-12: “User controls 213 are coupled at least to the processor 212 and signal analysis circuits 214. User controls 213 may include strobe inputs, gain controls, triggers, display adjustments, power controls, or any other controls employable by a user to display an input signal on display 211.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Bakis and Absher. Bakis teaches different metrics used to characterize the performance of classifiers, along with a tool to optimize user experience with classifier-based Q/A systems. Absher teaches systems and methods for detecting input waveform types and suggesting corresponding oscilloscope measurement settings. One of ordinary skill would have been motivated to combine Bakis and Absher because MPEP 2143 sets forth the Supreme Court rationales for obviousness, including: (D) applying a known technique to a known device (method, or product) ready for improvement to yield predictable results; (E) “obvious to try”: choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success; and (F) known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art.
However, Peterson explicitly discloses:
replacing the first classification model with a second classification model in which a correction rate calculated from the correction frequency and the data count of input data exceeds a preset threshold, wherein the second classification model is distinct from the first classification model, (Peterson, ¶[0289]: “In some embodiments, the first classification is obtained from a local corrections database of classifications based on previous user-specific amendments to fields populated by the autofill process ( e.g., the local field classifications database 620 in FIGS. 6-7).”, ¶[0290]: “In some embodiments, the second classification is obtained from an aggregate corrections database of classifications based on amendments to fields populated by the autofill process that were made by at least a predefined number of other users (e.g., crowd-sourced or aggregated amendments from the aggregate field classifications database 630 in FIGS. 6-7).”, ¶[0292]: “the frequency threshold is satisfied when a same amendment to a field is reported by X users, where X is scaled based on the popularity of the associated domain. As such, for example, the frequency threshold serves as a filtering mechanism that disregards isolated or mistaken user amendments. In some embodiments, the aggregate corrections database pushes indications of crowd-sourced or aggregate amendments that satisfy the frequency threshold to the device according to a predefined schedule (e.g., once per hour, once per day, or upon each update to the device OS, device firmware, or web browser).”, ¶[0295]: “a new classification of the text input field replaces the initial classification of the text input field such switching from the classification for a field from an email address field to a telephone number field. As such, for example, fewer future user amendments to the text input field will be made by the user of the device and also by other users during subsequent visits to the electronic form”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Bakis and Peterson. Bakis teaches different metrics used to characterize the performance of classifiers, along with a tool to optimize user experience with classifier-based Q/A systems. Peterson teaches electronic devices with displays and input devices, including but not limited to electronic devices with displays and input devices that classify and populate fields of electronic forms. One of ordinary skill would have been motivated to combine Bakis and Peterson to replace an initial classification model with an updated classification model when user corrections occur at a sufficiently high rate, because frequent user corrections are an objective indicator that the current model’s outputs are unreliable for the current input population, and replacing the model with an updated model would predictably improve classification accuracy and reduce future correction burden and user effort.
Regarding Claim 2, the combination of Bakis, Ishikawa, Absher, Peterson and Hossin discloses all the limitations of Claim 1 (as shown in the rejection above).
Bakis in view of Ishikawa, Absher, Peterson and Hossin further discloses:
wherein the computer counts the correction frequency made after an update date of the classifiers. (Bakis, Page 8, Col. 1, ¶[2]: “This methodology can be repeated periodically as additional mappings are added to the ground truth.”, and Page 7, Col. 1, ¶[4]: “Since our task is multi-class classification, a natural tool to quantify the classifier performance is the confusion matrix, which captures counts of how many questions (which belong to class i) have been answered correctly, and how many have been assigned incorrectly to each class j.”) [The examiner interprets that the process of training classifiers is iterative, with improvements seen with each repeated update.]
Regarding Claim 3, the combination of Bakis, Ishikawa, Absher, Peterson and Hossin discloses all the limitations of Claim 1 (as shown in the rejection above).
Bakis in view of Hossin, Absher, Peterson and Ishikawa further discloses:
wherein the computer deletes the correction information each time the classifiers are updated. (Bakis, Page 5, Col. 1, ¶[2]: “When the system is in production, it is important to continuously update the ground truth so that it is representative of real-world data. This requires adding questions to and/or deleting them from ground truth, as needed, such that the distribution of answer classes in the ground truth is similar to the distribution of target answer classes of production utterances.”, Page 6, Col. 1, ¶[1]: “In this example, we selected 100 questions at a time according to the four strategies, and retrained the classifier to suggest the next batch of a hundred questions”, and Page 6, Col. 1, ¶[2]: “It is necessary for ground truth to continuously evolve and be representative of the real-world data for a supervised classifier to perform well”) [The examiner interprets that the updated ground truth data is used during the retraining or refinement of the classifiers to improve their accuracy and performance over time. The “correction information” here is interpreted as the data used to adjust the classifiers during updates, which the examiner interprets as the old questions used to train the classifier.]
Regarding Claim 4, the combination of Bakis, Ishikawa, Absher, Peterson and Hossin discloses all the limitations of Claim 1 (as shown in the rejection above).
Bakis in view of Hossin, Absher, Peterson and Ishikawa further discloses:
wherein the computer counts a frequency of modifications and deletions, or the frequency of additions as the correction frequency. (Bakis, Page 7, Col. 2, ¶[2]: “In particular, we first identified answer classes with low accuracy and low ground truth count. We then harvested questions from production logs for these answer classes using a lexical matching tool and added these production questions to the corresponding question groups ground truth and tested the performance of the new ground truth. We observe that accuracy of all classes, to which new production questions were added, improved or remained same.”) [The examiner interprets the “harvested questions from production logs”, and “added these production questions to the corresponding question groups ground truth” as the “frequency of additions”. The fact that the accuracy of the classifiers was improved when new questions were added discloses the concept of counting the frequency of additions as the correction frequency.]
Regarding Claim 5, the combination of Bakis, Ishikawa, Absher, Peterson and Hossin discloses all the limitations of Claim 1 (as shown in the rejection above).
Bakis in view of Hossin, Absher, Peterson and Ishikawa further discloses:
wherein the computer issues a notification in a case in which the correction rate exceeds a preset threshold. (Bakis, Page 3, Col. 1, ¶[2]: “a threshold can be set where the answer is returned to the end-user when the confidence exceeds the threshold, and no answer or “I am not trained on that” is returned when the confidence falls below the threshold.”, and Page 2, Col. 2, ¶[4]: “In the Evaluation phase, the classifier classifies incoming questions into answer classes, and associates each such classification with a confidence value. Since confidence is correlated with accuracy, we can apply a confidence threshold filter and only present to the user those answers whose confidence values are above the threshold.”) [The examiner interprets the “correction rate” as the “confidence values” in this context, and the “notification” here is the “I am not trained on that” when the confidence value falls below the threshold (i.e., the correction rate exceeds a preset threshold).]
Regarding Claim 6, the combination of Bakis, Ishikawa, Absher, Peterson and Hossin discloses all the limitations of Claim 5 (as shown in the rejection above).
Hossin further discloses:
wherein the classifiers are based on models, and the computer replaces the model with a model trained with the correction information. (Hossin, Page 2, Section 1, ¶[4]: “In general, PS classifier is a generative type of classification algorithms that aim to generate a classifier model by applying sampling technique and simultaneously used the generated model to achieve the highest possible classification accuracy when dealing with the unseen data.”, Page 2, Section 1, ¶[2]: “Second, the evaluation metrics were employed as an evaluator for model selection. In this case, the evaluation metric task is to determine the best classifier among different types of trained classifiers which focus on the best future performance (optimal model) when tested with unseen data. Third, the evaluation metrics were employed as a discriminator to discriminate and select the optimal solution (best solution) among all generated solutions during the classification training. For example, the accuracy metric is employed to discriminate every single solution and select the best solution that produced by a particular classification algorithm. Only the best solution which is believed the optimal model will be tested with the unseen data.”) [The fact that the PS classifier creates a model and uses it for classification tasks means the classifiers are based on models. The examiner interprets the “model trained with the correction information” as the “optimal model”, which has the best solution produced by a particular classification algorithm, and the “accuracy metric” as the “correction information”. Multiple classifier models are generated and evaluated by the evaluation metrics, and only the optimal model is selected, which means models are replaced by the optimal one after evaluation.]
Regarding Claim 7, the combination of Bakis, Ishikawa, Absher, Peterson and Hossin discloses all the limitations of Claim 1 (as shown in the rejection above).
Bakis in view of Hossin, Absher, Peterson and Ishikawa further discloses:
wherein the computer generates the correction information in a case in which the classification result is corrected via a correction interface. (Ishikawa, Col. 11, Lines 49-57: “At step S304, the CPU 10a executes the process to display a correction check screen on the monitor 10c. Specifically, at step S304, the CPU 10a reads a predetermined screen data included in the software package in the HDD 10g. Then, the CPU 10a acquires the commodity classification code and the commodity classification name that have been entered into the text boxes 41a and 41b on the correction input screen 41 at the timing when the check button 41c was clicked at step S302”, and Col. 12, Lines 3-5: “Then, the CPU 10a displays a correction check screen on the monitor 10c based on the screen data. FIG. 13 shows one example of the correction check screen 42”) [The examiner interprets the “correction interface” as the “correction check screen” in this context, and the “correction information” here as the corrected commodity classification code and the commodity classification name.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Bakis, Hossin, Absher, Peterson and Ishikawa. Bakis teaches different metrics used to characterize the performance of classifiers, along with a tool to optimize user experience with classifier-based Q/A systems. Hossin teaches evaluation metrics for data classification evaluations. Peterson teaches electronic devices with displays and input devices, including but not limited to electronic devices with displays and input devices that classify and populate fields of electronic forms. Absher teaches systems and methods for detecting input waveform types and suggesting corresponding oscilloscope measurement settings. Ishikawa teaches a user interface designed to manage and refine the outcomes of a classification process, giving the manager the ability to add new classification results, remove incorrect ones, and enter corrected data. One of ordinary skill would have been motivated to combine Bakis, Hossin, Absher, Peterson and Ishikawa to improve the classification management program so that users can retrieve the classification information effectively via the improved user interface. (Ishikawa, Col. 1, Lines 19-20; Col. 2, Lines 19-24)
Regarding Claim 8, the combination of Bakis, Hossin, Absher, Peterson and Ishikawa discloses all the limitations of Claim 7 (as shown in the rejection above).
Ishikawa further discloses:
wherein the correction interface includes a button for adding a classification result, a button for deleting a classification result, and a region for inputting a post-correction classification result. (Ishikawa, Col. 7, Lines 50-52: “The button 21a is a new registration button that will be clicked by an operator who intends to add a commodity classification in the top level. The buttons 21b are correction buttons that will be clicked by an operator who intends to correct the information about the commodity classifications. The buttons 21c are addition/insertion buttons that will be clicked by an operator who intends to add or insert a commodity classification to a level other than the top level. The buttons 21d are deletion buttons that will be clicked by an operator who intends to delete the commodity classification.”, and Col. 11, Lines 1-5: “The correction input screen 41 shown in FIG. 12 indicates the parent commodity classification name, the level number and the commodity classification code chain, which consists of the codes except the commodity classification code in the same level of the commodity classification to be corrected.”) [The “correction input screen” here is the “region for inputting a post-correction classification result”]
Regarding Claim 10, the combination of Bakis, Ishikawa, Absher, Peterson and Hossin discloses all the limitations of Claim 1 (as shown in the rejection above).
Bakis in view of Hossin, Absher, Peterson and Ishikawa further discloses:
A non-transitory computer readable recording medium recording a program for causing a computer to function as a classifier evaluation device according to claim 1. (Ishikawa, Col. 1, Lines 11-16: “The present invention relates to a commodity information management program for managing commodity information with using commodity classification codes, a computer readable medium that stores the program and a data structure of a commodity classification master database handled by the program.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Bakis, Hossin, Absher, Peterson and Ishikawa. Bakis teaches different metrics used to characterize the performance of classifiers, along with a tool to optimize user experience with classifier-based Q/A systems. Hossin teaches evaluation metrics for data classification evaluations. Absher teaches systems and methods for detecting input waveform types and suggesting corresponding oscilloscope measurement settings. Ishikawa teaches a user interface designed to manage and refine the outcomes of a classification process, giving the manager the ability to add new classification results, remove incorrect ones, and enter corrected data. Peterson teaches electronic devices with displays and input devices, including but not limited to electronic devices with displays and input devices that classify and populate fields of electronic forms. One of ordinary skill would have been motivated to combine Bakis, Hossin, Absher, Peterson and Ishikawa to improve the classification management program so that users can retrieve the classification information effectively via the improved user interface. (Ishikawa, Col. 1, Lines 19-20; Col. 2, Lines 19-24)
Regarding Claim 11, the combination of Bakis, Ishikawa, Absher, Peterson and Hossin discloses all the limitations of Claim 2 (as shown in the rejection above).
Bakis in view of Hossin, Absher, Peterson and Ishikawa further discloses:
wherein the computer counts a frequency of modifications and deletions, or the frequency of additions as the correction frequency. (Bakis, Page 7, Col. 2, ¶[2]: “In particular, we first identified answer classes with low accuracy and low ground truth count. We then harvested questions from production logs for these answer classes using a lexical matching tool and added these production questions to the corresponding question groups ground truth and tested the performance of the new ground truth. We observe that accuracy of all classes, to which new production questions were added, improved or remained same.”) [The examiner interprets the “harvested questions from production logs”, and “added these production questions to the corresponding question groups ground truth” as the “frequency of additions”. The fact that the accuracy of the classifiers was improved when new questions were added discloses the concept of counting the frequency of additions as the correction frequency.]
Regarding Claim 12, the combination of Bakis, Ishikawa, Absher, Peterson and Hossin discloses all the limitations of Claim 3 (as shown in the rejection above).
Bakis in view of Hossin, Absher, Peterson and Ishikawa further discloses:
wherein the computer counts a frequency of modifications and deletions, or the frequency of additions as the correction frequency. (Bakis, Page 7, Col. 2, ¶[2]: “In particular, we first identified answer classes with low accuracy and low ground truth count. We then harvested questions from production logs for these answer classes using a lexical matching tool and added these production questions to the corresponding question groups ground truth and tested the performance of the new ground truth. We observe that accuracy of all classes, to which new production questions were added, improved or remained same.”) [The examiner interprets the “harvested questions from production logs”, and “added these production questions to the corresponding question groups ground truth” as the “frequency of additions”. The fact that the accuracy of the classifiers was improved when new questions were added discloses the concept of counting the frequency of additions as the correction frequency.]
Regarding Claim 13, the combination of Bakis, Ishikawa, Absher, Peterson and Hossin discloses all the limitations of Claim 2 (as shown in the rejection above).
Bakis in view of Hossin, Absher, Peterson and Ishikawa further discloses:
wherein the computer issues a notification in a case in which the correction rate exceeds a preset threshold. (Bakis, Page 3, Col. 1, ¶[2]: “a threshold can be set where the answer is returned to the end-user when the confidence exceeds the threshold, and no answer or “I am not trained on that” is returned when the confidence falls below the threshold.”, and Page 2, Col. 2, ¶[4]: “In the Evaluation phase, the classifier classifies incoming questions into answer classes, and associates each such classification with a confidence value. Since confidence is correlated with accuracy, we can apply a confidence threshold filter and only present to the user those answers whose confidence values are above the threshold.”) [The examiner interprets the “correction rate” as the “confidence values” in this context, and the “notification” here is the “I am not trained on that” when the confidence value falls below the threshold (i.e., the correction rate exceeds a preset threshold).]
Regarding Claim 14, the combination of Bakis, Ishikawa, Absher, Peterson and Hossin discloses all the limitations of Claim 3 (as shown in the rejection above).
Bakis in view of Hossin, Absher, Peterson and Ishikawa further discloses:
wherein the computer issues a notification in a case in which the correction rate exceeds a preset threshold. (Bakis, Page 3, Col. 1, ¶[2]: “a threshold can be set where the answer is returned to the end-user when the confidence exceeds the threshold, and no answer or “I am not trained on that” is returned when the confidence falls below the threshold.”, and Page 2, Col. 2, ¶[4]: “In the Evaluation phase, the classifier classifies incoming questions into answer classes, and associates each such classification with a confidence value. Since confidence is correlated with accuracy, we can apply a confidence threshold filter and only present to the user those answers whose confidence values are above the threshold.”) [The examiner interprets the “correction rate” as the “confidence values” in this context, and the “notification” here is the “I am not trained on that” message returned when the confidence value falls below the threshold (i.e., when the correction rate exceeds a preset threshold).]
Regarding Claim 15, the combination of Bakis, Ishikawa, Absher, Peterson and Hossin discloses all the limitations of Claim 4 (as shown in the rejection above).
Bakis in view of Hossin, Absher, Peterson and Ishikawa further discloses:
wherein the computer issues a notification in a case in which the correction rate exceeds a preset threshold. (Bakis, Page 3, Col. 1, ¶[2]: “a threshold can be set where the answer is returned to the end-user when the confidence exceeds the threshold, and no answer or “I am not trained on that” is returned when the confidence falls below the threshold.”, and Page 2, Col. 2, ¶[4]: “In the Evaluation phase, the classifier classifies incoming questions into answer classes, and associates each such classification with a confidence value. Since confidence is correlated with accuracy, we can apply a confidence threshold filter and only present to the user those answers whose confidence values are above the threshold.”) [The examiner interprets the “correction rate” as the “confidence values” in this context, and the “notification” here is the “I am not trained on that” message returned when the confidence value falls below the threshold (i.e., when the correction rate exceeds a preset threshold).]
Regarding Claim 16, the combination of Bakis, Ishikawa, Absher, Peterson and Hossin discloses all the limitations of Claim 2 (as shown in the rejection above).
Bakis in view of Hossin, Absher, Peterson and Ishikawa further discloses:
wherein the computer generates the correction information in a case in which the classification result is corrected via a correction interface. (Ishikawa, Col. 11, Lines 49-57: “At step S304, the CPU 10a executes the process to display a correction check screen on the monitor 10c. Specifically, at step S304, the CPU 10a reads a predetermined screen data included in the software package in the HDD 10g. Then, the CPU 10a acquires the commodity classification code and the commodity classification name that have been entered into the text boxes 41a and 41b on the correction input screen 41 at the timing when the check button 41c was clicked at step S302”, and Col. 12, Lines 3-5: “Then, the CPU 10a displays a correction check screen on the monitor 10c based on the screen data. FIG. 13 shows one example of the correction check screen 42”) [The examiner interprets the “correction interface” as the “correction check screen” in this context, and the “correction information” here is the corrected commodity classification code and commodity classification name.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Bakis, Hossin, Absher, Peterson and Ishikawa. Bakis teaches different metrics used to characterize the performance of classifiers, along with a tool to optimize user experience with classifier-based Q/A systems. Hossin teaches evaluation metrics for data classification evaluations. Peterson teaches electronic devices with displays and input devices, including but not limited to electronic devices with displays and input devices that classify and populate fields of electronic forms. Absher teaches systems and methods for detecting input waveform types and suggesting corresponding measurement settings for an oscilloscope. Ishikawa teaches a user interface designed to manage and refine the outcomes of a classification process, giving the manager the ability to add new classification results, remove incorrect ones, and enter corrected data. One of ordinary skill would have had motivation to combine Bakis, Hossin, Absher, Peterson and Ishikawa to improve the classification management program so that users can retrieve the classification information effectively via the improved user interface. (Ishikawa, Col. 1, Lines 19-20; Col. 2, Lines 19-24)
Regarding Claim 17, the combination of Bakis, Ishikawa, Absher, Peterson and Hossin discloses all the limitations of Claim 3 (as shown in the rejection above).
Bakis in view of Hossin, Absher, Peterson and Ishikawa further discloses:
wherein the computer generates the correction information in a case in which the classification result is corrected via a correction interface. (Ishikawa, Col. 11, Lines 49-57: “At step S304, the CPU 10a executes the process to display a correction check screen on the monitor 10c. Specifically, at step S304, the CPU 10a reads a predetermined screen data included in the software package in the HDD 10g. Then, the CPU 10a acquires the commodity classification code and the commodity classification name that have been entered into the text boxes 41a and 41b on the correction input screen 41 at the timing when the check button 41c was clicked at step S302”, and Col. 12, Lines 3-5: “Then, the CPU 10a displays a correction check screen on the monitor 10c based on the screen data. FIG. 13 shows one example of the correction check screen 42”) [The examiner interprets the “correction interface” as the “correction check screen” in this context, and the “correction information” here is the corrected commodity classification code and commodity classification name.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Bakis, Hossin, Absher, Peterson and Ishikawa. Bakis teaches different metrics used to characterize the performance of classifiers, along with a tool to optimize user experience with classifier-based Q/A systems. Absher teaches systems and methods for detecting input waveform types and suggesting corresponding measurement settings for an oscilloscope. Peterson teaches electronic devices with displays and input devices, including but not limited to electronic devices with displays and input devices that classify and populate fields of electronic forms. Hossin teaches evaluation metrics for data classification evaluations. Ishikawa teaches a user interface designed to manage and refine the outcomes of a classification process, giving the manager the ability to add new classification results, remove incorrect ones, and enter corrected data. One of ordinary skill would have had motivation to combine Bakis, Hossin, Absher, Peterson and Ishikawa to improve the classification management program so that users can retrieve the classification information effectively via the improved user interface. (Ishikawa, Col. 1, Lines 19-20; Col. 2, Lines 19-24)
Regarding Claim 18, the combination of Bakis, Ishikawa, Absher, Peterson and Hossin discloses all the limitations of Claim 4 (as shown in the rejection above).
Bakis in view of Hossin, Absher, Peterson and Ishikawa further discloses:
wherein the computer generates the correction information in a case in which the classification result is corrected via a correction interface. (Ishikawa, Col. 11, Lines 49-57: “At step S304, the CPU 10a executes the process to display a correction check screen on the monitor 10c. Specifically, at step S304, the CPU 10a reads a predetermined screen data included in the software package in the HDD 10g. Then, the CPU 10a acquires the commodity classification code and the commodity classification name that have been entered into the text boxes 41a and 41b on the correction input screen 41 at the timing when the check button 41c was clicked at step S302”, and Col. 12, Lines 3-5: “Then, the CPU 10a displays a correction check screen on the monitor 10c based on the screen data. FIG. 13 shows one example of the correction check screen 42”) [The examiner interprets the “correction interface” as the “correction check screen” in this context, and the “correction information” here is the corrected commodity classification code and commodity classification name.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Bakis, Hossin, Absher, Peterson and Ishikawa. Bakis teaches different metrics used to characterize the performance of classifiers, along with a tool to optimize user experience with classifier-based Q/A systems. Hossin teaches evaluation metrics for data classification evaluations. Peterson teaches electronic devices with displays and input devices, including but not limited to electronic devices with displays and input devices that classify and populate fields of electronic forms. Absher teaches systems and methods for detecting input waveform types and suggesting corresponding measurement settings for an oscilloscope. Ishikawa teaches a user interface designed to manage and refine the outcomes of a classification process, giving the manager the ability to add new classification results, remove incorrect ones, and enter corrected data. One of ordinary skill would have had motivation to combine Bakis, Hossin, Absher, Peterson and Ishikawa to improve the classification management program so that users can retrieve the classification information effectively via the improved user interface. (Ishikawa, Col. 1, Lines 19-20; Col. 2, Lines 19-24)
Regarding Claim 19, the combination of Bakis, Ishikawa, Absher, Peterson and Hossin discloses all the limitations of Claim 5 (as shown in the rejection above).
Bakis in view of Hossin, Absher, Peterson and Ishikawa further discloses:
wherein the computer generates the correction information in a case in which the classification result is corrected via a correction interface. (Ishikawa, Col. 11, Lines 49-57: “At step S304, the CPU 10a executes the process to display a correction check screen on the monitor 10c. Specifically, at step S304, the CPU 10a reads a predetermined screen data included in the software package in the HDD 10g. Then, the CPU 10a acquires the commodity classification code and the commodity classification name that have been entered into the text boxes 41a and 41b on the correction input screen 41 at the timing when the check button 41c was clicked at step S302”, and Col. 12, Lines 3-5: “Then, the CPU 10a displays a correction check screen on the monitor 10c based on the screen data. FIG. 13 shows one example of the correction check screen 42”) [The examiner interprets the “correction interface” as the “correction check screen” in this context, and the “correction information” here is the corrected commodity classification code and commodity classification name.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Bakis, Hossin, Absher, Peterson and Ishikawa. Bakis teaches different metrics used to characterize the performance of classifiers, along with a tool to optimize user experience with classifier-based Q/A systems. Absher teaches systems and methods for detecting input waveform types and suggesting corresponding measurement settings for an oscilloscope. Hossin teaches evaluation metrics for data classification evaluations. Ishikawa teaches a user interface designed to manage and refine the outcomes of a classification process, giving the manager the ability to add new classification results, remove incorrect ones, and enter corrected data. Peterson teaches electronic devices with displays and input devices, including but not limited to electronic devices with displays and input devices that classify and populate fields of electronic forms. One of ordinary skill would have had motivation to combine Bakis, Hossin, Absher, Peterson and Ishikawa to improve the classification management program so that users can retrieve the classification information effectively via the improved user interface. (Ishikawa, Col. 1, Lines 19-20; Col. 2, Lines 19-24)
Regarding Claim 20, the combination of Bakis, Ishikawa, Absher, Peterson and Hossin discloses all the limitations of Claim 6 (as shown in the rejection above).
Bakis in view of Hossin, Absher, Peterson and Ishikawa further discloses:
wherein the computer generates the correction information in a case in which the classification result is corrected via a correction interface. (Ishikawa, Col. 11, Lines 49-57: “At step S304, the CPU 10a executes the process to display a correction check screen on the monitor 10c. Specifically, at step S304, the CPU 10a reads a predetermined screen data included in the software package in the HDD 10g. Then, the CPU 10a acquires the commodity classification code and the commodity classification name that have been entered into the text boxes 41a and 41b on the correction input screen 41 at the timing when the check button 41c was clicked at step S302”, and Col. 12, Lines 3-5: “Then, the CPU 10a displays a correction check screen on the monitor 10c based on the screen data. FIG. 13 shows one example of the correction check screen 42”) [The examiner interprets the “correction interface” as the “correction check screen” in this context, and the “correction information” here is the corrected commodity classification code and commodity classification name.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Bakis, Hossin, Absher, Peterson and Ishikawa. Bakis teaches different metrics used to characterize the performance of classifiers, along with a tool to optimize user experience with classifier-based Q/A systems. Hossin teaches evaluation metrics for data classification evaluations. Ishikawa teaches a user interface designed to manage and refine the outcomes of a classification process, giving the manager the ability to add new classification results, remove incorrect ones, and enter corrected data. Absher teaches systems and methods for detecting input waveform types and suggesting corresponding measurement settings for an oscilloscope. Peterson teaches electronic devices with displays and input devices, including but not limited to electronic devices with displays and input devices that classify and populate fields of electronic forms. One of ordinary skill would have had motivation to combine Bakis, Hossin, Absher, Peterson and Ishikawa to improve the classification management program so that users can retrieve the classification information effectively via the improved user interface. (Ishikawa, Col. 1, Lines 19-20; Col. 2, Lines 19-24)
Regarding Claim 21, the combination of Bakis, Ishikawa, Absher, Peterson and Hossin discloses all the limitations of Claim 6 (as shown in the rejection above).
Bakis in view of Hossin, Absher, Peterson and Ishikawa further discloses:
wherein the preset threshold is determined based on the correction rate of a previously used classification model. (Peterson, ¶[0209]: “As one example, the remote server only pushes aggregate/crowd-sourced indications to the device when the frequency threshold is satisfied. For example, the frequency threshold is satisfied when a same amendment to a field is reported by X users, where X is scaled based on the popularity of the associated domain.”)
Claim(s) 9 is rejected under 35 U.S.C. 103 as being unpatentable over Bakis et al. (“Performance of natural language classifiers in a question-answering system”) (hereafter referred to as “Bakis”), in view of Hossin & Sulaiman (“A Review on Evaluation Metrics for Data Classification Evaluations”) (hereafter referred to as “Hossin”), and further in view of Absher et al. (US 10,585,121 B2) and Peterson et al. (US 2017/0357627 A1).
Regarding Claim 9, Bakis discloses:
A classifier evaluation device for evaluating a classification device performing classification of input data, the classifier evaluation device comprising: obtaining a data count of the input data to be made a classification target (Bakis, Page 2, Col. 2, ¶[3]: “In the Evaluation phase, the classifier classifies incoming questions into answer classes, and associates each such classification with a confidence value.”, and Page 5, Col. 2, ¶[3]: “The baseline accuracy is the mean accuracy of a classifier trained with a random sample of 300 questions.”) [The “random sample of 300 questions” represents the data count of input data to be made a classification target (i.e., the answer classes)]
predicting, by the classification device, using the input data, a classification result, (Bakis, Page 2, Col. 2, ¶[3]: “In Figure 1, we present the overall flow for the classification application, which consists of two phases: i) a training phase where question-answer mappings or answer classes are created, to train the classifier, and ii) an evaluation phase where the classifier is used to classify incoming questions into those answer classes.”) [Examiner’s note: “classification device” i.e., “classification application”, “predicting a classification result” i.e., “classifying incoming questions into answer classes”]
determining, based on the correction information, a correction frequency of the classification result of the classification device; and (Bakis, Page 7, Col. 1, ¶[4]: “Since our task is multi-class classification, a natural tool to quantify the classifier performance is the confusion matrix, which captures counts of how many questions (which belong to class i) have been answered correctly, and how many have been assigned incorrectly to each class j.”) [The number of correctly classified questions is the “correction frequency” of the classification result, and the correct answers given to those questions represent the “correction information” on classification results for the classifiers.]
Bakis fails to disclose:
wherein the classification device comprises a first classification model for determining a class of the input data;
obtaining correction information when the classification result of the classification device is corrected by a user via a correction interface;
replacing the first classification model with a second classification model in which a correction rate calculated from the correction frequency and the data count of input data exceeds a preset threshold, wherein the second classification model is distinct from the first classification model,
wherein the correction interface is an interactive interface in which the user further modifies the classification result.
However, Hossin explicitly discloses:
wherein the classification device comprises a first classification model for determining a class of the input data; (Hossin, Page 2, ¶[2]: “In this case, the evaluation metric task is to determine the best classifier among different types of trained classifiers which focus on the best future performance (optimal model) when tested with unseen data.”, Page 2, ¶[1]: “Through accuracy, the trained classifier is measured based on total correctness which refers to the total of instances that are correctly predicted by the trained classifier when tested with the unseen data.”) [Examiner’s note: Hossin discloses different types of trained classifiers, which include “a first classification model”; “determining a class of the input data” is interpreted as the trained classifier correctly predicting the instances when tested with unseen data]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Bakis and Hossin. Bakis teaches different metrics used to characterize the performance of classifiers, along with a tool to optimize user experience with classifier-based Q/A systems. Hossin teaches evaluation metrics for data classification evaluations. One of ordinary skill would have had motivation to combine Bakis and Hossin to apply the advantages of accuracy or error rate metrics to the process of classifier evaluation, as this metric is easy to compute with low complexity, applicable to multi-class and multi-label problems, provides easy-to-use scoring, and is easy for humans to understand. (Hossin, Page 3, Section 2.1, ¶[3])
However, Absher explicitly discloses:
obtaining correction information when the classification result of the classification device is corrected by a user via a correction interface; (Absher, Col. 4, Lines 3-11: “The oscilloscope 110 also includes controls 113 for receiving user input, for example alternating current (AC) or DC coupling controls, trigger level controls, trigger-level hysteresis controls to alter trigger-level hysteresis thresholds and margins, user selections, user over-rides, etc. By employing the controls 113, a user can train or employ the oscilloscope to classify waveforms or buses and accept or reject suggested measurements or actions presented on the display 111 as a result of such classifications.”)
wherein the correction interface is an interactive interface in which the user further modifies the classification result. (Absher, Col. 4, Lines 8-11: “By employing the controls 113, a user can train or employ the oscilloscope to classify waveforms or buses and accept or reject suggested measurements or actions presented on the display 111 as a result of such classifications.”, Col. 5, Lines 8-12: “User controls 213 are coupled at least to the processor 212 and signal analysis circuits 214. User controls 213 may include strobe inputs, gain controls, triggers, display adjustments, power controls, or any other controls employable by a user to display an input signal on display 211.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Bakis and Absher. Bakis teaches different metrics used to characterize the performance of classifiers, along with a tool to optimize user experience with classifier-based Q/A systems. Absher teaches systems and methods for detecting input waveform types and suggesting corresponding measurement settings for an oscilloscope. One of ordinary skill would have had motivation to combine Bakis and Absher because MPEP 2143 sets forth the Supreme Court rationales for obviousness, including: (D) applying a known technique to a known device (method, or product) ready for improvement to yield predictable results; (E) “obvious to try”: choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success; and (F) known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art.
However, Peterson explicitly discloses:
replacing the first classification model with a second classification model in which a correction rate calculated from the correction frequency and the data count of input data exceeds a preset threshold, wherein the second classification model is distinct from the first classification model, (Peterson, ¶[0289]: “In some embodiments, the first classification is obtained from a local corrections database of classifications based on previous user-specific amendments to fields populated by the autofill process (e.g., the local field classifications database 620 in FIGS. 6-7).”, ¶[0290]: “In some embodiments, the second classification is obtained from an aggregate corrections database of classifications based on amendments to fields populated by the autofill process that were made by at least a predefined number of other users (e.g., crowd-sourced or aggregated amendments from the aggregate field classifications database 630 in FIGS. 6-7).”, ¶[0292]: “the frequency threshold is satisfied when a same amendment to a field is reported by X users, where X is scaled based on the popularity of the associated domain. As such, for example, the frequency threshold serves as a filtering mechanism that disregards isolated or mistaken user amendments. In some embodiments, the aggregate corrections database pushes indications of crowd-sourced or aggregate amendments that satisfy the frequency threshold to the device according to a predefined schedule (e.g., once per hour, once per day, or upon each update to the device OS, device firmware, or web browser).”, ¶[0295]: “a new classification of the text input field replaces the initial classification of the text input field such switching from the classification for a field from an email address field to a telephone number field. As such, for example, fewer future user amendments to the text input field will be made by the user of the device and also by other users during subsequent visits to the electronic form”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Bakis and Peterson. Bakis teaches different metrics used to characterize the performance of classifiers, along with a tool to optimize user experience with classifier-based Q/A systems. Peterson teaches electronic devices with displays and input devices, including but not limited to electronic devices with displays and input devices that classify and populate fields of electronic forms. One of ordinary skill would have had motivation to combine Bakis and Peterson to replace an initial classification model with an updated classification model when user corrections occur at a sufficiently high rate, because frequent user corrections are an objective indicator that the current model’s outputs are unreliable for the current input population, and replacing the model with an updated model would predictably improve classification accuracy and reduce future correction burden and user effort.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMY TRAN whose telephone number is (571)270-0693. The examiner can normally be reached Monday - Friday 7:30 am - 5:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AMY TRAN/Examiner, Art Unit 2126
/DAVID YI/Supervisory Patent Examiner, Art Unit 2126