DETAILED ACTION
This action is in response to the application filed 23 November 2022.
Claims 1–7 are pending. Claims 1, 6, and 7 are independent.
Claims 1–7 are rejected.
Notice of Pre-AIA or AIA Status
The present application, filed on or after 16 March 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Claim Rejections—35 U.S.C. § 112
The following is a quotation of 35 U.S.C. § 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 3 and 4 are rejected under 35 U.S.C. § 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
Regarding dependent claim 3, this claim recites “the selecting processing preferentially selects, from among the plurality of pieces of training data included in the training data set, training data in which the determination result made by the determination model matches a label, the number of data items presented to the user as an evaluation target is small, and an absolute value of a predicted value based on the determination result is the largest”. It is unclear whether the selected training data is singular or plural. Furthermore, the phrase “the number of data items presented to the user as an evaluation target is small” is unclear; for example, it is unclear whether the recited “data items” refer to the training data. Based on the specification, the examiner interprets the claim to mean selecting a subset of training data where the number of times each item of training data has been presented to the user with an explanation to be evaluated is smaller relative to other pieces of training data. See, e.g., specification paras. 61, 65, or 66.
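For clarity of the record, the examiner's interpretation of the claim 3 selection criterion may be sketched as follows. This is an illustrative sketch only; all names, fields, and the tie-breaking order are hypothetical and are not drawn from the specification.

```python
# Illustrative sketch of the examiner's interpretation of claim 3:
# select training data whose determination matches its label,
# preferring items presented to the user the fewest times and,
# among those, the largest absolute predicted value.
# All names and fields here are hypothetical.

def select_training_data(training_set):
    # Keep only items the model determined correctly.
    correct = [d for d in training_set if d["determination"] == d["label"]]
    if not correct:
        return None
    # Smallest presentation count first, then largest |predicted value|.
    return min(
        correct,
        key=lambda d: (d["times_presented"], -abs(d["predicted_value"])),
    )

data = [
    {"determination": 1, "label": 1, "times_presented": 3, "predicted_value": 0.9},
    {"determination": 1, "label": 0, "times_presented": 0, "predicted_value": 0.8},
    {"determination": 0, "label": 0, "times_presented": 1, "predicted_value": -0.7},
    {"determination": 1, "label": 1, "times_presented": 1, "predicted_value": 0.4},
]
selected = select_training_data(data)
```

Under this reading, the mislabeled item is excluded despite its low presentation count, and the item presented once with the larger absolute predicted value is selected.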
Regarding dependent claims 3 and 4, the term “small” is a relative term which renders the claims indefinite. The term “small” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Although the specification describes selecting data having “a number of times of evaluation” that is small, the specification fails to provide a standard for what a “small” number of times is.
Claim Rejections—35 U.S.C. § 103
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 C.F.R. § 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention.
Claims 1, 2, 5, 6, and 7 are rejected under 35 U.S.C. § 103 as being unpatentable over Hind et al. (US 2019/0354805 A1) [hereinafter Hind] in view of Itou et al. (US 2021/0182712 A1) [hereinafter Itou].
Regarding independent claim 1, Hind teaches:
“[a] non-transitory computer-readable storage medium storing a model training program for causing a computer to execute processing comprising: selecting, from among a plurality of pieces of training data included in a training data set used to train a determination model, training data that have caused the determination model to output a correct determination result during the training of the determination model;” A recommendation component uses a classifier, trained using a set of training data indicating correct classes, to generate recommendations for a user (Hind, ¶¶ 4, 5, 35, 39, 41).
“presenting, to a user, the correct determination result and a data item that has contributed to the correct determination result among data items included in the selected training data;” The recommendation is presented with an explanation for why it was recommended [i.e., why it was classified as recommended] (Hind, ¶ 34).
“receiving, from the user, an evaluation of ease of interpretation for the presented data item; and” The user can score the explanation, e.g., based on how well it explains the decision or how easily understandable the explanation is (Hind, ¶¶ 68, 73).
Hind teaches improving an AI model based on feedback regarding the ease of understanding an explanation thereof (Hind, ¶ 73), but does not expressly teach a loss function. Itou, however, teaches “performing, based on a loss function adjusted in accordance with the received evaluation, training of the determination model by using the training data set.” A model learns to generate interpretations based on a loss function, wherein the loss function includes how well the interpretation fits a user (Itou, ¶¶ 71–72).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the teachings of Hind with those of Itou. Doing so would have been a matter of simple substitution of one known element [the improvement method of Hind] for another [the improvement via loss method of Itou] to obtain predictable results [a model wherein interpretations are improved based on direct user feedback factoring into a loss function].
Regarding dependent claim 2, the rejection of claim 1 is incorporated, and the combination of Hind and Itou further teaches:
“until the trained determination model satisfies a user requirement, repeatedly performing the presenting, to the user, of the data item that has contributed to the determination and the determination result, the receiving the evaluation of the ease of the interpretation for the data item, adjusting of the loss function, and the training of the determination model according to the evaluation result; and” The explanation can be modified, presented, scored, improved, and additional or modified explanations provided (Hind, ¶ 73).
“in a case where the trained determination model satisfies the user requirement, outputting the trained determination model.” The model can operate in a training mode or an application mode, wherein the application mode uses the trained model to produce recommendations and explanations based on real world data (Hind, ¶ 25).
Regarding dependent claim 5, the rejection of claim 1 is incorporated and Hind/Itou further teaches: wherein, regarding a classification error and a weight penalty included in the loss function, the training processing changes the weight penalty to a smaller value for the data item that is evaluated as easy to interpret, changes the weight penalty to a larger value for the data item that is evaluated as difficult to interpret, and optimizes the changed loss function so as to train the determination model. The user can score the explanation, e.g., from 1 to 10, wherein a lower score indicates a more difficult to understand explanation; the score is used to improve the AI system and provide modified explanations (Hind, ¶ 73).
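For clarity of the record, the loss adjustment recited in claim 5 may be sketched as follows. This is an illustrative sketch only; the function, parameter names, penalty form (L2), and scaling factors are hypothetical and are not drawn from the claim or the cited references.

```python
# Illustrative sketch of claim 5's loss: a per-item classification error
# plus a weight penalty whose coefficient is made smaller for data items
# evaluated as easy to interpret and larger for items evaluated as
# difficult to interpret. All names and factors are hypothetical.

def adjusted_loss(errors, weights, evaluations,
                  base_penalty=0.1, easy_factor=0.5, hard_factor=2.0):
    """errors: per-item classification errors; weights: model weights;
    evaluations: per-item user evaluations, "easy" or "hard"."""
    weight_norm = sum(w * w for w in weights)  # L2-style weight penalty
    loss = 0.0
    for err, ev in zip(errors, evaluations):
        # Shrink the penalty for easy-to-interpret items, grow it for
        # difficult ones, per the claim language.
        penalty = base_penalty * (easy_factor if ev == "easy" else hard_factor)
        loss += err + penalty * weight_norm
    return loss
```

Under this reading, two items with identical classification errors contribute different totals when one is evaluated as easy and the other as difficult, so optimizing the adjusted loss favors weights associated with easy-to-interpret items.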
Regarding independent claim 6, this claim recites limitations similar to those of claim 1, and is rejected for the same reasons.
Regarding independent claim 7, this claim recites limitations similar to those of claim 1, and is rejected for the same reasons.
Conclusion
The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tyler Schallhorn whose telephone number is 571-270-3178. The examiner can normally be reached Monday through Friday, 8:30 a.m. to 6 p.m. (ET).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tamara Kyle can be reached at 571-272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in the USA or Canada) or 571-272-1000.
/Tyler Schallhorn/Examiner, Art Unit 2144
/TAMARA T KYLE/Supervisory Patent Examiner, Art Unit 2144