DETAILED ACTION
This Office Action is sent in response to the Applicant’s Communication received on 11/10/2025 for application number 17/983,837. The Office hereby acknowledges receipt of the following items, which have been placed of record in the file: Specification, Drawings, Abstract, Oath/Declaration, IDS, and Claims.
Claims 1, 2, 4, 5, 7, and 8 are amended.
Claims 3, 6, and 9 are canceled.
Claims 1, 2, 4, 5, 7, and 8 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
35 USC 101
On page 7 of the remarks, the Applicant argues that the underlined operations of "specifying, in accordance with ... a natural number" and "displaying a plurality of ... a displaying device" require interaction with hardware and cannot be performed mentally, and that, accordingly, the claims do not fall within the "mental processes" grouping of abstract ideas.
The Examiner respectfully disagrees. The claimed limitation “specifying, in accordance with the plurality of classification results, a first plurality of attributes among a plurality of attributes included in a first plurality of data pieces classified into the accepted group and a second plurality of data pieces classified into the rejected group” was analyzed under Step 2A Prong One as being an abstract idea. The action of specifying attributes among data pieces classified into the accepted group and data pieces classified into the rejected group is a limitation recited at a high level of generality and, under the Broadest Reasonable Interpretation (BRI), can be interpreted as a procedure of human observation, evaluation, judgment, or opinion. The claimed limitation can be performed mentally with the aid of pen and paper, and is therefore a mental process.

Furthermore, the claimed limitation “the first plurality of attributes corresponding to top N attributes having first to N-th largest differences between the first plurality of data pieces and the second plurality of data pieces than other attributes, where N is a natural number” was analyzed under Step 2A Prong Two as an additional element that merely limits the judicial exception to a particular field of use or technological environment of machine learning classification. The cited additional element does not integrate the judicial exception into a practical application.

Additionally, the claimed limitation “displaying a plurality of indices representing a combination of the first attributes on a displaying device” was analyzed under Step 2A Prong Two as insignificant extra-solution activity, i.e., activity incidental to the primary process or product that is merely a nominal or tangential addition to the claim. The claimed limitation amounts to necessary data outputting and does not integrate the judicial exception into a practical application.
Applicant further argues that claims 1, 4, and 7 are applied in the technical field of a user interface, in which technical field a machine learning model that determines acceptance or rejection presents the fairness that the user 2 desires, and thereby contribute to a specific technical improvement. Accordingly, an improvement that the background technique has expected is reflected in claims 1, 4, and 7. For the above reasons, in relation to the recited additional elements (for example, at least the operations of "specifying, in accordance with ... a natural number" and "displaying a plurality of ... a displaying device"), these processes of claims 1, 4, and 7 do not merely implement the abstract idea on a computer, or use other machinery, to carry out the abstract method. Hence, claims 1, 4, and 7 are not directed to a judicial exception in view of Step 2A Prong Two. Regarding Step 2B, in addition to the argument above, at least the above-noted features of claims 1, 4, and 7 provide technical improvements to the functioning of the computer itself or to another technology or technical field. In other words, claims 1, 4, and 7 recite additional elements that amount to significantly more than the judicial exception. Additionally, Applicant cites paragraphs [0133]-[0138] as support for the alleged improvement.
The Examiner respectfully disagrees. As detailed above, “specifying, in accordance with the plurality of classification results, a first plurality of attributes among a plurality of attributes included in a first plurality of data pieces classified into the accepted group and a second plurality of data pieces classified into the rejected group” was analyzed under Step 2A Prong One as being an abstract idea in the mental process grouping; “the first plurality of attributes corresponding to top N attributes having first to N-th largest differences between the first plurality of data pieces and the second plurality of data pieces than other attributes, where N is a natural number” was analyzed under Step 2A Prong Two as an additional element that merely limits the judicial exception to a particular field of use or technological environment of machine learning classification; and “displaying a plurality of indices representing a combination of the first attributes on a displaying device” was analyzed under Step 2A Prong Two as insignificant extra-solution activity amounting to mere data outputting. Although paragraphs [0133]-[0138] of the instant application contain detailed descriptions of implementing steps that provide a technological improvement, these details are not explicitly present in the current claims. Specifically, “the user… operate an index value by means of a metrics searching process”, “allows user… to quickly reach the fair model that the user… believes in a single interaction”, “clarify an appropriate metric based on the subjectivity of the user 2 by the user 2 selecting one or more metrics from among a variety of metrics that the AI system 1 presents”, and “a variety of fairness evaluation standards” are not explicitly stated in the claims. Therefore, the claimed elements, alone or in combination, do not reflect a technical improvement as alleged. If the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement; that is, the claim must include the components or steps of the invention that provide the improvement described in the specification.
Therefore, the 35 USC 101 rejection is maintained.
35 USC 103
On pages 11 and 12 of the remarks, the Applicant argues that Kobayashi appears to merely describe "multiple attributes and polyvalent attributes", but does not appear to describe "specifying, in accordance with the plurality of classification results, a first plurality of attributes among a plurality of attributes included in a first plurality of data pieces classified into the accepted group and a second plurality of data pieces classified into the rejected group, the first plurality of attributes corresponding to top N attributes having first to N-th largest differences between the first plurality of data pieces and the second plurality of data pieces than other attributes, where N is a natural number". Hence, the noted feature of claims 1, 4, and 7 is a distinction over Kobayashi. The noted feature is also a distinction over McCourt as evidenced, e.g., by the Office Action; that is, the Office Action does not assert McCourt as disclosing the noted feature.
The Examiner respectfully disagrees. The claimed limitation "specifying, in accordance with the plurality of classification results, a first plurality of attributes among a plurality of attributes included in a first plurality of data pieces classified into the accepted group and a second plurality of data pieces classified into the rejected group” is indeed taught by Kobayashi in section 3, paragraph 1: “𝐶 and 𝑍 are binary classes {+, −} (accordance with the plurality of classification results), where + is a favorable class…, whereas − is an unfavorable class. Hereinafter, we consider a favorable class as a positive class (included in a first plurality of data pieces classified into the accepted group) and an unfavorable class as a negative class (a second plurality of data pieces classified into the rejected group). We assume that the sensitive attributes 𝑆 ∈ {S1 × . . . × S𝑙 }, and nonsensitive attributes 𝐴 ∈ {A1 × . . . × A𝑚} have multiple attributes (e.g., race, gender, age) (a first plurality of attributes among a plurality of attributes) and polyvalent attributes (e.g., race, those that have more than two values… We also define 𝑆 (specifying) as a set of subgroups”.
The Examiner has considered the Applicant’s arguments regarding the limitation “the first plurality of attributes corresponding to top N attributes having first to N-th largest differences between the first plurality of data pieces and the second plurality of data pieces than other attributes, where N is a natural number”, but the arguments are moot in view of the new ground of rejection of the amended limitation under newly cited prior art Sabes.
Therefore, the 35 USC 103 rejection is maintained.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 2, 4, 5, 7, and 8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1 and 2 are directed to a non-transitory computer-readable recording medium. Claims 4 and 5 are directed to a method. Claims 7 and 8 are directed to an apparatus. Therefore, all claims are directed to one of the four statutory categories of patent-eligible subject matter.
Claim 1
Step 2A Prong 1:
Claim 1 recites:
“[the machine learning model] classifying an inputted data piece into an accepted group or a rejected group;” Classifying an inputted data piece into an accepted group or a rejected group is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process.
“specifying, in accordance with the plurality of classification results, a first plurality of attributes among a plurality of attributes included in a first plurality of data pieces classified into the accepted group and a second plurality of data pieces classified into the rejected group;” Specifying, in accordance with the plurality of classification results, a first plurality of attributes among a plurality of attributes included in a first plurality of data pieces classified into the accepted group and a second plurality of data pieces classified into the rejected group is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process.
“determining labels of the plurality of data pieces based on first indices;” Determining labels of the plurality of data pieces based on first indices representing a combination of the first plurality of attributes is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process.
Step 2A Prong Two
This judicial exception is not integrated into a practical application because the additional elements are as follows:
“A non-transitory computer-readable recording medium having stored therein a machine learning program executable by one or more computers;” Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).
“an instruction for;” Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).
“the machine learning model;” Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).
“obtaining a plurality of classification results of classification of a plurality of data pieces;” Mere data gathering recited at a high level of generality, and thus is insignificant extra-solution activity (MPEP 2106.05(g)).
“the first plurality of attributes corresponding to top N attributes having first to N-th largest differences between the first plurality of data pieces and the second plurality of data pieces than other attributes, where N is a natural number;” The limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception. This does not amount to significantly more than the exception itself (MPEP 2106.05(h)).
“the plurality of classification results being outputted from a machine learning model into which the plurality of data pieces have been inputted;” Insignificant extra-solution as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)).
“displaying a plurality of indices representing a combination of the first attributes on a displaying device;” Insignificant extra-solution as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)).
“receiving information indicative of one or more first indices selected from among the plurality of indices displayed on the displaying device by a user;” Mere data gathering recited at a high level of generality, and thus is insignificant extra-solution activity (MPEP 2106.05(g)).
“training the machine learning model, using the determined labels and the plurality of data pieces;” Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).
Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows:
“A non-transitory computer-readable recording medium having stored therein a machine learning program executable by one or more computers;” Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)) which cannot provide an inventive concept.
“an instruction for;” Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)) which cannot provide an inventive concept.
“the machine learning model;” Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)) which cannot provide an inventive concept.
“obtaining a plurality of classification results of classification of a plurality of data pieces;” Mere data gathering recited at a high level of generality, and thus is insignificant extra-solution activity. See MPEP 2106.05(g). The additional element of “obtaining” does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the obtaining step amounts to no more than mere data gathering. This element amounts to receiving data over a network, which is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II(i). This cannot provide an inventive concept.
“the first plurality of attributes corresponding to top N attributes having first to N-th largest differences between the first plurality of data pieces and the second plurality of data pieces than other attributes, where N is a natural number;” The limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception. This does not amount to significantly more than the exception itself (MPEP 2106.05(h)) and cannot provide an inventive concept.
“the plurality of classification results being outputted from a machine learning model into which the plurality of data pieces have been inputted;” Insignificant extra-solution as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)). This falls under well-understood, routine, conventional activity; see MPEP 2106.05(d)(II)(vi). This cannot provide an inventive concept.
“displaying a plurality of indices representing a combination of the first attributes on a displaying device;” Insignificant extra-solution as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)). This falls under well-understood, routine, conventional activity; see MPEP 2106.05(d)(II)(vi). This cannot provide an inventive concept.
“receiving information indicative of one or more first indices selected from among the plurality of indices displayed on the displaying device by a user;” Mere data gathering recited at a high level of generality, and thus is insignificant extra-solution activity. See MPEP 2106.05(g). The additional element of “receiving” does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the receiving step amounts to no more than mere data gathering. This element amounts to receiving data over a network, which is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II(i). This cannot provide an inventive concept.
“training the machine learning model, using the determined labels and the plurality of data pieces;” Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)) which cannot provide an inventive concept.
Even when considered in combination, these additional elements represent mere instructions to apply an exception and therefore do not provide an inventive concept. The claim is ineligible.
Claim 2
Step 2A Prong 1:
Claim 2 recites:
“wherein the determining comprises determining the labels based on the first indices selected from among the plurality of indices represented by at least one of four fundamental arithmetic operations on the first plurality of attributes;” Determining the labels based on the first indices selected from among the plurality of indices represented by at least one of four fundamental arithmetic operations on the first plurality of attributes is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process.
Step 2A Prong Two and Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B. Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d)(I)), failing Step 2A Prong Two. The claim is ineligible.
Claim 4
Step 2A Prong 1:
Claim 4 recites:
“[the machine learning model] classifying an inputted data piece into an accepted group or a rejected group;” Classifying an inputted data piece into an accepted group or a rejected group is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process.
“specifying, in accordance with the plurality of classification results, a first plurality of attributes among a plurality of attributes included in a first plurality of data pieces classified into the accepted group and a second plurality of data pieces classified into the rejected group;” Specifying, in accordance with the plurality of classification results, a first plurality of attributes among a plurality of attributes included in a first plurality of data pieces classified into the accepted group and a second plurality of data pieces classified into the rejected group is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process.
“determining labels of the plurality of data pieces based on first indices;” Determining labels of the plurality of data pieces based on first indices representing a combination of the first plurality of attributes is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process.
Step 2A Prong Two
This judicial exception is not integrated into a practical application because the additional elements are as follows:
“A computer-implemented machine learning method;” Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).
“the machine learning model;” Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).
“obtaining a plurality of classification results of classification of a plurality of data pieces;” Mere data gathering recited at a high level of generality, and thus is insignificant extra-solution activity (MPEP 2106.05(g)).
“the first plurality of attributes corresponding to top N attributes having first to N-th largest differences between the first plurality of data pieces and the second plurality of data pieces than other attributes, where N is a natural number;” The limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception. This does not amount to significantly more than the exception itself (MPEP 2106.05(h)).
“the plurality of classification results being outputted from a machine learning model into which the plurality of data pieces have been inputted;” Insignificant extra-solution as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)).
“displaying a plurality of indices representing a combination of the first attributes on a displaying device;” Insignificant extra-solution as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)).
“receiving information indicative of one or more first indices selected from among the plurality of indices displayed on the displaying device by a user;” Mere data gathering recited at a high level of generality, and thus is insignificant extra-solution activity (MPEP 2106.05(g)).
“training the machine learning model, using the determined labels and the plurality of data pieces;” Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).
Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows:
“A computer-implemented machine learning method;” Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)) which cannot provide an inventive concept.
“the machine learning model;” Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)) which cannot provide an inventive concept.
“obtaining a plurality of classification results of classification of a plurality of data pieces;” Mere data gathering recited at a high level of generality, and thus is insignificant extra-solution activity. See MPEP 2106.05(g). The additional element of “obtaining” does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the obtaining step amounts to no more than mere data gathering. This element amounts to receiving data over a network, which is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II(i). This cannot provide an inventive concept.
“the first plurality of attributes corresponding to top N attributes having first to N-th largest differences between the first plurality of data pieces and the second plurality of data pieces than other attributes, where N is a natural number;” The limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception. This does not amount to significantly more than the exception itself (MPEP 2106.05(h)) and cannot provide an inventive concept.
“the plurality of classification results being outputted from a machine learning model into which the plurality of data pieces have been inputted;” Insignificant extra-solution as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)). This falls under well-understood, routine, conventional activity; see MPEP 2106.05(d)(II)(vi). This cannot provide an inventive concept.
“displaying a plurality of indices representing a combination of the first attributes on a displaying device;” Insignificant extra-solution as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)). This falls under well-understood, routine, conventional activity; see MPEP 2106.05(d)(II)(vi). This cannot provide an inventive concept.
“receiving information indicative of one or more first indices selected from among the plurality of indices displayed on the displaying device by a user;” Mere data gathering recited at a high level of generality, and thus is insignificant extra-solution activity. See MPEP 2106.05(g). The additional element of “receiving” does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the receiving step amounts to no more than mere data gathering. This element amounts to receiving data over a network, which is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II(i). This cannot provide an inventive concept.
“training the machine learning model, using the determined labels and the plurality of data pieces;” Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)) which cannot provide an inventive concept.
Even when considered in combination, these additional elements represent mere instructions to apply an exception and therefore do not provide an inventive concept. The claim is ineligible.
Claim 5 is a computer-implemented machine learning method claim that recites limitations identical to those of claim 2. Therefore, claim 5 is rejected under the same rationale as claim 2.
Claim 7
Step 2A Prong 1:
Claim 7 recites:
“[the machine learning model] classifying an inputted data piece into an accepted group or a rejected group;” Classifying an inputted data piece into an accepted group or a rejected group is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process.
“perform specification of, in accordance with the plurality of classification results, a first plurality of attributes among a plurality of attributes included in a first plurality of data pieces classified into the accepted group and a second plurality of data pieces classified into the rejected group;” Specifying, in accordance with the plurality of classification results, a first plurality of attributes among a plurality of attributes included in a first plurality of data pieces classified into the accepted group and a second plurality of data pieces classified into the rejected group is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process.
“determining labels of the plurality of data pieces based on first indices;” Determining labels of the plurality of data pieces based on first indices representing a combination of the first plurality of attributes is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process.
Step 2A Prong Two
This judicial exception is not integrated into a practical application because the additional elements are as follows:
“An information processing apparatus comprising: a memory; and a processor coupled to the memory, the processor being configured to perform a process comprising;” Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).
“obtaining a plurality of classification results of classification of a plurality of data pieces;” Mere data gathering recited at a high level of generality, and thus is insignificant extra-solution activity (MPEP 2106.05(g)).
“the machine learning model;” Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).
“the first plurality of attributes corresponding to top N attributes having first to N-th largest differences between the first plurality of data pieces and the second plurality of data pieces than other attributes, where N is a natural number;” The limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception. This does not amount to significantly more than the exception itself (MPEP 2106.05(h)).
“the plurality of classification results being outputted from a machine learning model into which the plurality of data pieces have been inputted;” Insignificant extra-solution as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)).
“displaying a plurality of indices representing a combination of the first attributes on a displaying device;” Insignificant extra-solution as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)).
“receiving information indicative of one or more first indices selected from among the plurality of indices displayed on the displaying device by a user;” Mere data gathering recited at a high level of generality, and thus is insignificant extra-solution activity (MPEP 2106.05(g)).
“training the machine learning model, using the determined labels and the plurality of data pieces;” Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).
Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows:
“An information processing apparatus comprising: a memory; and a processor coupled to the memory, the processor being configured to perform a process comprising:” This limitation amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea (MPEP 2106.05(f)), which cannot provide an inventive concept.
“the machine learning model;” This limitation likewise amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea (MPEP 2106.05(f)), which cannot provide an inventive concept.
“obtaining a plurality of classification results of classification of a plurality of data pieces;” Mere data gathering recited at a high level of generality, and thus insignificant extra-solution activity. See MPEP 2106.05(g). The additional element of “obtaining” does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, this data-gathering step amounts to no more than mere data gathering. Receiving data over a network is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II(i). This cannot provide an inventive concept.
“the first plurality of attributes corresponding to top N attributes having first to N-th largest differences between the first plurality of data pieces and the second plurality of data pieces than other attributes, where N is a natural number;” The limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception. This does not amount to significantly more than the exception itself (MPEP 2106.05(h)) and cannot provide an inventive concept.
“the plurality of classification results being outputted from a machine learning model into which the plurality of data pieces have been inputted;” Insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)). This falls under well-understood, routine, conventional activity; see MPEP 2106.05(d)(II)(vi).
“displaying a plurality of indices representing a combination of the first attributes on a displaying device;” Insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)). This falls under well-understood, routine, conventional activity; see MPEP 2106.05(d)(II)(vi).
“receiving information indicative of one or more first indices selected from among the plurality of indices displayed on the displaying device by a user;” Mere data gathering recited at a high level of generality, and thus insignificant extra-solution activity. See MPEP 2106.05(g). The additional element of “receiving” does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, this receiving step amounts to no more than mere data gathering. Receiving data over a network is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II(i). This cannot provide an inventive concept.
“training the machine learning model, using the determined labels and the plurality of data pieces;” This limitation amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea (MPEP 2106.05(f)), which cannot provide an inventive concept.
Even when considered in combination, these additional elements represent mere instructions to apply an exception and therefore do not provide an inventive concept. The claim is ineligible.
Claim 8 is a computer-implemented machine learning method claim that recites identical limitations to claim 2. Therefore, claim 8 is rejected using the same rationale as claim 2.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 2, 4, 5, 7, and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Kobayashi et al. (One-vs.-One Mitigation of Intersectional Bias: A General Method to Extend Fairness-Aware Binary Classification, published 2020), hereinafter Kobayashi, in view of McCourt (US 20230055948 A1), hereinafter McCourt, Sabes et al. (US 11817214 B1), hereinafter Sabes, Belyaev et al. (US 20190102703 A1), hereinafter Belyaev, and Mallikarjun et al. (US 20230237409 A1), hereinafter Mallikarjun.
Regarding claim 1, Kobayashi teaches,
an instruction [Sect 4, Algorithm 1] for obtaining (Sect 3, para 1, take) a plurality of classification results (Sect 3, para 1, binary classification tasks) of classification of a plurality of data pieces (Sect 3, para 1, dataset), the plurality of classification results being outputted (Sect 3, para 1, result) from a machine learning model (Sect 3, para 1, specific processing (e.g., prediction, mitigation)) into which the plurality of data pieces have been inputted [Sect 3, para 1, We consider binary classification tasks. We take a dataset whose instance 𝑋𝑖 includes a true class 𝐶, a determined class 𝑍, sensitive attributes 𝑆, and non-sensitive attributes 𝐴. 𝑍 is the class obtained as a result of a specific processing (e.g., prediction, mitigation). 𝐶 and 𝑍 are binary classes {+, −}, where + is a favorable class (e.g., accepted in loan applications, passing recruitment examinations), whereas − is an unfavorable class. Hereinafter, we consider a favorable class as a positive class and an unfavorable class as a negative class.],
the machine learning model classifying an inputted data piece into an accepted group or a rejected group [Sect 3, para 1, We consider binary classification tasks. We take a dataset whose instance 𝑋𝑖 includes a true class 𝐶, a determined class 𝑍… 𝐶 and 𝑍 are binary classes {+, −}, where + is a favorable class…, whereas − is an unfavorable class. Hereinafter, we consider a favorable class as a positive class and an unfavorable class as a negative class.];
an instruction [Sect 4, Algorithm 1] for specifying (Sect 3, para 1, define), in accordance with the plurality of classification results, a first plurality of attributes (Sect 3, para 1, (e.g., race, gender, age)) among a plurality of attributes (Sect 3, para 1, (e.g., race, gender, age)) included in a first plurality of data pieces (Sect 3, para 1, classified in a positive class) classified into the accepted group (Sect 3, para 1, favorable class) and a second plurality of data pieces (Sect 3, para 1, classified in a negative class) classified into the rejected group (Sect 3, para 1, unfavorable class) [Sect 3, para 1, 𝐶 and 𝑍 are binary classes {+, −}, where + is a favorable class…, whereas − is an unfavorable class. Hereinafter, we consider a favorable class as a positive class and an unfavorable class as a negative class. We assume that the sensitive attributes 𝑆 ∈ {S1 × . . . × S𝑙 }, and nonsensitive attributes 𝐴 ∈ {A1 × . . . × A𝑚} have multiple attributes (e.g., race, gender, age) and polyvalent attributes (e.g., race, those that have more than two values… We also define 𝑆 as a set of subgroups.],
an instruction [Sect 4.1, Algorithm 1] for determining (Sect 4.1, para 4, categorized) labels (Sect 4.1, para 4, 𝑍𝑖) of the plurality of data pieces (Sect 4.1, para 4, 𝑋𝑖) based on first indices (Algorithm 1, line 5, foreach 𝐷ᵖᵢ,ⱼ ∈ 𝐷ᵖᵢ do) [Sect 4.1, para 2, For example, when there are four subgroups (𝐼 : (female, non-white), 𝐼𝐼 : (female, white), 𝐼𝐼𝐼 : (male, non-white), 𝐼𝑉 : (male, white))… Then, for each instance 𝑋𝑖, our method identifies 𝐷ᵖᵢ; Sect 4.1, para 4, 𝑋𝑖 is categorized into the favorable class, the determined class 𝑍𝑖 = +. Otherwise, 𝑍𝑖 = −];
Kobayashi teaches the above limitations of claim 1 including the determined labels (Kobayashi, Sect 4.1, para 4).
Kobayashi does not teach A non-transitory computer-readable recording medium having stored therein a machine learning program executable by one or more computers, the machine learning program comprising: the first plurality of attributes corresponding to top N attributes having first to N-th largest differences between the first plurality of data pieces and the second plurality of data pieces than other attributes, where N is a natural number; an instruction for training the machine learning model, using the labels and the plurality of data pieces; an instruction for displaying a plurality of indices representing a combination of the first attributes on a displaying device; an instruction for receiving information indicative of one or more first indices selected from among the plurality of indices displayed on the displaying device by a user.
McCourt teaches,
A non-transitory computer-readable recording medium (Para 0059, computers) having stored (Para 0061, stores) therein a machine learning program (Para 0061, classification model) executable by one or more computers (Para 0061, when executed by one or more processors, cause the server computer 110 to compute) [Para 0059, The server computer 110 may be implemented using a server-class computer or other computer having one or more processor cores, co-processors, or other computers. The server computer 110 may be a physical server computer and/or virtual server instance stored in a data center, such as through cloud computing; Para 0061, server computer 110 stores a classification model. The classification model comprises computer readable instructions which, when executed by one or more processors, cause the server computer 110 to compute one or more output outcomes or labels based on input call transcripts], the machine learning program comprising:
an instruction (Para 0061, computer readable instructions) for training (Para 0066, trained) the machine learning model (Para 0066, The model), using labels (Para 0066, labeled data) and the plurality of data pieces (Para 0066, labeled and unlabeled data) [Para 0066, The model may be trained using labeled data or a mixture of labeled and unlabeled data in a semi-supervised mode.].
McCourt is analogous to the claimed invention as they both relate to ML classification. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kobayashi’s teachings to incorporate the teachings of McCourt and provide a computer performing training with labeled data [McCourt, Para 0012] in order to classify data in datasets to improve machine learning model predictions.
Kobayashi-McCourt teach the above limitations of claim 1 including the first plurality of attributes, the first plurality of data pieces, and the second plurality of data pieces (Kobayashi, Sect 3, para 1).
Kobayashi-McCourt do not teach attributes corresponding to top N attributes having first to N-th largest differences between data pieces and second data pieces than other attributes, where N is a natural number; displaying a plurality of indices representing a combination of attributes on a displaying device; receiving information indicative of one or more first indices selected from among the plurality of indices displayed on the displaying device by a user.
Sabes teaches,
attributes corresponding to top N attributes having first to N-th largest differences between data pieces (Col 20, lines 11-20, top q features) and second data pieces (Col 20, lines 11-20, features with a zero coefficient) than other attributes, where N is a natural number [Col 20, lines 11-20, Operation 410 may, in some examples, select a top q features 412 from a ranked list of features 414 (where q is a positive integer), ranked according to scores determined based at least in part on a feature selection and/or feature reduction algorithm. The feature reduction and/or feature selection algorithm may comprise, for example, Spearman’s rank correlation between each individual probe and the target label, retaining only the features with a non-zero coefficient from an ElasticNet model].
Sabes is analogous to the claimed invention as they both relate to ML classification. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kobayashi’s teachings to incorporate the teachings of Sabes and provide top N attributes in order to optimize an algorithm by only selecting the best training samples.
Kobayashi-McCourt-Sabes teach the above limitations of claim 1 including an instruction (McCourt, para 0061) and the first attributes (Kobayashi, Sect 3, para 1).
Kobayashi-McCourt-Sabes do not teach displaying a plurality of indices representing a combination of attributes on a displaying device; receiving information indicative of one or more first indices selected from among the plurality of indices displayed on the displaying device by a user.
Belyaev teaches,
displaying a plurality of indices representing a combination of attributes on a displaying device [Para 0076, a Gini decrease index may be ascertained for one or more features during training. A Gini decrease index indicates importance of a feature in that it accounts for a feature's effect on a classifier's functioning relative to how much influence other features have in driving a generated prediction... for a given report reported in the form of a GUI, the GUI may display features of only such features whose importance meets or exceeds a predetermined minimum importance threshold. For example, a report might include identifiers of only such features whose importance, represented numerically as Gini decrease index, squared exceeds 0.1].
Belyaev is analogous to the claimed invention as they both relate to ML classification. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kobayashi’s teachings to incorporate the teachings of Belyaev and provide displaying a plurality of indices representing a combination of attributes on a displaying device in order to improve user experience when analyzing data attributes.
Kobayashi-McCourt-Sabes-Belyaev do not teach receiving information indicative of one or more first indices selected from among the plurality of indices displayed on the displaying device by a user.
Mallikarjun teaches,
receiving information indicative of one or more first indices (Para 0057, final risk index score) selected from among the plurality of indices displayed on the displaying device (Para 0057, computer display devices) by a user (Para 0057, users can request different kinds of visualizations) [Para 0057, the process of FIG. 3 is programmed to asynchronously generate and visually present one or more graphical visualizations of the classification output and/or final risk index score… the process of FIG. 3 could be integrated into an interactive program with a user interface by which the analyst computer 120 (FIG. 1) or other computers of end users can request different kinds of visualizations. In an embodiment, block 316 can be programmed to generate code capable of rendering in a browser, using a display driver, using a graphics library, or using other programs to cause writing, outputting, or otherwise presenting graphical images, charts, tables, plots, or other representations of the individual classification outputs, and/or the final risk index score, in computer display devices].
Mallikarjun is analogous to the claimed invention as they both relate to ML classification. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kobayashi’s teachings to incorporate the teachings of Mallikarjun and provide receiving information indicative of one or more first indices selected from among the plurality of indices displayed on the displaying device by a user in order to improve user experience when analyzing data attributes.
Regarding claim 2, Kobayashi-McCourt-Sabes-Belyaev-Mallikarjun teach the limitations of claim 1, including the determining.
Kobayashi further teaches,
determining the labels (Sect 4.1, para 4, 𝑍𝑖) based on the indices selected from among the plurality of indices (Sect 4.1, Algorithm 1, line 5, foreach 𝐷ᵖᵢ,ⱼ ∈ 𝐷ᵖᵢ do) represented by at least one of four fundamental arithmetic operations (Sect 4.1, Algorithm 1, line 5, foreach loop) on the first plurality of attributes (Sect 4.1, Algorithm 1, 𝐷ᵖᵢ) [Sect 4.1, para 4, 𝑋𝑖 is categorized into the favorable class, the determined class 𝑍𝑖 = +. Otherwise, 𝑍𝑖 = −].
Regarding claim 4, Kobayashi teaches,
A computer-implemented machine learning method [Abstract, we propose a method called One-vs.-One Mitigation by applying a process of comparison between each pair of subgroups related to sensitive attributes to the fairness-aware machine learning for binary classification] comprising: obtaining (Sect 3, para 1, take) a plurality of classification results (Sect 3, para 1, binary classification tasks) of classification of a plurality of data pieces (Sect 3, para 1, dataset), the plurality of classification results being outputted (Sect 3, para 1, result) from a machine learning model (Sect 3, para 1, specific processing (e.g., prediction, mitigation)) into which the plurality of data pieces have been inputted [Sect 3, para 1, We consider binary classification tasks. We take a dataset whose instance 𝑋𝑖 includes a true class 𝐶, a determined class 𝑍, sensitive attributes 𝑆, and non-sensitive attributes 𝐴. 𝑍 is the class obtained as a result of a specific processing (e.g., prediction, mitigation). 𝐶 and 𝑍 are binary classes {+, −}, where + is a favorable class (e.g., accepted in loan applications, passing recruitment examinations), whereas − is an unfavorable class. Hereinafter, we consider a favorable class as a positive class and an unfavorable class as a negative class.];
the machine learning model classifying an inputted data piece into an accepted group or a rejected group [Sect 3, para 1, We consider binary classification tasks. We take a dataset whose instance 𝑋𝑖 includes a true class 𝐶, a determined class 𝑍… 𝐶 and 𝑍 are binary classes {+, −}, where + is a favorable class…, whereas − is an unfavorable class. Hereinafter, we consider a favorable class as a positive class and an unfavorable class as a negative class.];
specifying (Sect 3, para 1, define), in accordance with the plurality of classification results, a first plurality of attributes (Sect 3, para 1, (e.g., race, gender, age)) among a plurality of attributes (Sect 3, para 1, (e.g., race, gender, age)) included in a first plurality of data pieces (Sect 3, para 1, classified in a positive class) classified into the accepted group (Sect 3, para 1, favorable class) and a second plurality of data pieces (Sect 3, para 1, classified in a negative class) classified into the rejected group (Sect 3, para 1, unfavorable class) [Sect 3, para 1, 𝐶 and 𝑍 are binary classes {+, −}, where + is a favorable class…, whereas − is an unfavorable class. Hereinafter, we consider a favorable class as a positive class and an unfavorable class as a negative class. We assume that the sensitive attributes 𝑆 ∈ {S1 × . . . × S𝑙 }, and nonsensitive attributes 𝐴 ∈ {A1 × . . . × A𝑚} have multiple attributes (e.g., race, gender, age) and polyvalent attributes (e.g., race, those that have more than two values… We also define 𝑆 as a set of subgroups.];
determining (Sect 4.1, para 4, categorized) labels (Sect 4.1, para 4, 𝑍𝑖) of the plurality of data pieces (Sect 4.1, para 4, 𝑋𝑖) based on first indices (Algorithm 1, line 5, foreach 𝐷ᵖᵢ,ⱼ ∈ 𝐷ᵖᵢ do) representing a combination of the first plurality of attributes (Sect 4.1, para 2, (𝐼 : (female, non-white), 𝐼𝐼 : (female, white), 𝐼𝐼𝐼 : (male, non-white), 𝐼𝑉 : (male, white))) [Sect 4.1, para 2, For example, when there are four subgroups (𝐼 : (female, non-white), 𝐼𝐼 : (female, white), 𝐼𝐼𝐼 : (male, non-white), 𝐼𝑉 : (male, white))… Then, for each instance 𝑋𝑖, our method identifies 𝐷ᵖᵢ; Sect 4.1, para 4, 𝑋𝑖 is categorized into the favorable class, the determined class 𝑍𝑖 = +. Otherwise, 𝑍𝑖 = −];
Kobayashi teaches the above limitations of claim 4 including the determined labels (Kobayashi, Sect 4.1, para 4).
Kobayashi does not teach training the machine learning model, using the labels, the plurality of data pieces, the first plurality of attributes corresponding to top N attributes having first to N-th largest differences between the first plurality of data pieces and the second plurality of data pieces than other attributes, where N is a natural number; displaying a plurality of indices representing a combination of the first attributes on a displaying device; receiving information indicative of one or more first indices selected from among the plurality of indices displayed on the displaying device by a user.
McCourt teaches,
training (Para 0066, trained) the machine learning model (Para 0066, The model), using labels (Para 0066, labeled data) and the plurality of data pieces (Para 0066, labeled and unlabeled data) [Para 0066, The model may be trained using labeled data or a mixture of labeled and unlabeled data in a semi-supervised mode.].
McCourt is analogous to the claimed invention as they both relate to ML classification. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kobayashi’s teachings to incorporate the teachings of McCourt and provide training with labeled data [McCourt, Para 0012] in order to classify data in datasets to improve machine learning model predictions.
Kobayashi-McCourt teach the above limitations of claim 4 including the first plurality of attributes, the first plurality of data pieces, and the second plurality of data pieces (Kobayashi, Sect 3, para 1).
Kobayashi-McCourt do not teach attributes corresponding to top N attributes having first to N-th largest differences between data pieces and second data pieces than other attributes, where N is a natural number; displaying a plurality of indices representing a combination of attributes on a displaying device; receiving information indicative of one or more first indices selected from among the plurality of indices displayed on the displaying device by a user.
Sabes teaches,
attributes corresponding to top N attributes having first to N-th largest differences between data pieces (Col 20, lines 11-20, top q features) and second data pieces (Col 20, lines 11-20, features with a zero coefficient) than other attributes, where N is a natural number [Col 20, lines 11-20, Operation 410 may, in some examples, select a top q features 412 from a ranked list of features 414 (where q is a positive integer), ranked according to scores determined based at least in part on a feature selection and/or feature reduction algorithm. The feature reduction and/or feature selection algorithm may comprise, for example, Spearman’s rank correlation between each individual probe and the target label, retaining only the features with a non-zero coefficient from an ElasticNet model].
Sabes is analogous to the claimed invention as they both relate to ML classification. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kobayashi’s teachings to incorporate the teachings of Sabes and provide top N attributes in order to optimize an algorithm by only selecting the best training samples.
Kobayashi-McCourt-Sabes teach the above limitations of claim 4 including the first attributes (Kobayashi, Sect 3, para 1).
Kobayashi-McCourt-Sabes do not teach displaying a plurality of indices representing a combination of attributes on a displaying device; receiving information indicative of one or more first indices selected from among the plurality of indices displayed on the displaying device by a user.
Belyaev teaches,
displaying a plurality of indices representing a combination of attributes on a displaying device [Para 0076, a Gini decrease index may be ascertained for one or more features during training. A Gini decrease index indicates importance of a feature in that it accounts for a feature's effect on a classifier's functioning relative to how much influence other features have in driving a generated prediction... for a given report reported in the form of a GUI, the GUI may display features of only such features whose importance meets or exceeds a predetermined minimum importance threshold. For example, a report might include identifiers of only such features whose importance, represented numerically as Gini decrease index, squared exceeds 0.1].
Belyaev is analogous to the claimed invention as they both relate to ML classification. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kobayashi’s teachings to incorporate the teachings of Belyaev and provide displaying a plurality of indices representing a combination of attributes on a displaying device in order to improve user experience when analyzing data attributes.
Kobayashi-McCourt-Sabes-Belyaev do not teach receiving information indicative of one or more first indices selected from among the plurality of indices displayed on the displaying device by a user.
Mallikarjun teaches,
receiving information indicative of one or more first indices (Para 0057, final risk index score) selected from among the plurality of indices displayed on the displaying device (Para 0057, computer display devices) by a user (Para 0057, users can request different kinds of visualizations) [Para 0057, the process of FIG. 3 is programmed to asynchronously generate and visually present one or more graphical visualizations of the classification output and/or final risk index score… the process of FIG. 3 could be integrated into an interactive program with a user interface by which the analyst computer 120 (FIG. 1) or other computers of end users can request different kinds of visualizations. In an embodiment, block 316 can be programmed to generate code capable of rendering in a browser, using a display driver, using a graphics library, or using other programs to cause writing, outputting, or otherwise presenting graphical images, charts, tables, plots, or other representations of the individual classification outputs, and/or the final risk index score, in computer display devices].
Mallikarjun is analogous to the claimed invention as they both relate to ML classification. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kobayashi’s teachings to incorporate the teachings of Mallikarjun and provide receiving information indicative of one or more first indices selected from among the plurality of indices displayed on the displaying device by a user in order to improve user experience when analyzing data attributes.
Claim 5 is a computer-implemented machine learning method claim that recites identical limitations to claim 2. Therefore, claim 5 is rejected using the same rationale as claim 2.
Regarding claim 7, Kobayashi teaches,
obtaining (Sect 3, para 1, take) a plurality of classification results (Sect 3, para 1, binary classification tasks) of classification of a plurality of data pieces (Sect 3, para 1, dataset), the plurality of classification results being outputted (Sect 3, para 1, result) from a machine learning model (Sect 3, para 1, specific processing (e.g., prediction, mitigation)) into which the plurality of data pieces have been inputted [Sect 3, para 1, We consider binary classification tasks. We take a dataset whose instance 𝑋𝑖 includes a true class 𝐶, a determined class 𝑍, sensitive attributes 𝑆, and non-sensitive attributes 𝐴. 𝑍 is the class obtained as a result of a specific processing (e.g., prediction, mitigation). 𝐶 and 𝑍 are binary classes {+, −}, where + is a favorable class (e.g., accepted in loan applications, passing recruitment examinations), whereas − is an unfavorable class. Hereinafter, we consider a favorable class as a positive class and an unfavorable class as a negative class.],
the machine learning model classifying an inputted data piece into an accepted group or a rejected group [Sect 3, para 1, We consider binary classification tasks. We take a dataset whose instance 𝑋𝑖 includes a true class 𝐶, a determined class 𝑍… 𝐶 and 𝑍 are binary classes {+, −}, where + is a favorable class…, whereas − is an unfavorable class. Hereinafter, we consider a favorable class as a positive class and an unfavorable class as a negative class.];
specifying (Sect 3, para 1, define), in accordance with the plurality of classification results, a first plurality of attributes (Sect 3, para 1, (e.g., race, gender, age)) among a plurality of attributes (Sect 3, para 1, (e.g., race, gender, age)) included in a first plurality of data pieces (Sect 3, para 1, classified in a positive class) classified into the accepted group (Sect 3, para 1, favorable class) and a second plurality of data pieces (Sect 3, para 1, classified in a negative class) classified into the rejected group (Sect 3, para 1, unfavorable class) [Sect 3, para 1, 𝐶 and 𝑍 are binary classes {+, −}, where + is a favorable class…, whereas − is an unfavorable class. Hereinafter, we consider a favorable class as a positive class and an unfavorable class as a negative class. We assume that the sensitive attributes 𝑆 ∈ {S1 × . . . × S𝑙 }, and nonsensitive attributes 𝐴 ∈ {A1 × . . . × A𝑚} have multiple attributes (e.g., race, gender, age) and polyvalent attributes (e.g., race, those that have more than two values… We also define 𝑆 as a set of subgroups.];
determining (Sect 4.1, para 4, categorized) labels (Sect 4.1, para 4, 𝑍𝑖) of the plurality of data pieces (Sect 4.1, para 4, 𝑋𝑖) based on first indices (Algorithm 1, line 5, foreach 𝐷ᵖᵢ,ⱼ ∈ 𝐷ᵖᵢ do) representing a combination of the first plurality of attributes (Sect 4.1, para 2, (𝐼 : (female, non-white), 𝐼𝐼 : (female, white), 𝐼𝐼𝐼 : (male, non-white), 𝐼𝑉 : (male, white))) [Sect 4.1, para 2, For example, when there are four subgroups (𝐼 : (female, non-white), 𝐼𝐼 : (female, white), 𝐼𝐼𝐼 : (male, non-white), 𝐼𝑉 : (male, white))… Then, for each instance 𝑋𝑖, our method identifies 𝐷ᵖᵢ; Sect 4.1, para 4, 𝑋𝑖 is categorized into the favorable class, the determined class 𝑍𝑖 = +. Otherwise, 𝑍𝑖 = −];
Kobayashi teaches the above limitations of claim 7 including the determined labels (Kobayashi, Sect 4.1, para 4).
Kobayashi does not teach an information processing apparatus comprising: a memory; and a processor coupled to the memory, the processor being configured to perform a process comprising: the first plurality of attributes corresponding to top N attributes having first to N-th largest differences between the first plurality of data pieces and the second plurality of data pieces than other attributes, where N is a natural number; training the machine learning model, using the labels and the plurality of data pieces; displaying a plurality of indices representing a combination of the first attributes on a displaying device; receiving information indicative of one or more first indices selected from among the plurality of indices displayed on the displaying device by a user.
McCourt teaches,
An information processing apparatus (Para 0059, The server computer 110 may be implemented using… computers) comprising: a memory (Para 0061, server computer 110 stores a classification model); and a processor coupled to the memory (Para 0061, The classification model comprises computer readable instructions which, when executed by one or more processors, cause the server computer 110 to compute) [Para 0059, The server computer 110 may be implemented using a server-class computer or other computer having one or more processor cores, co-processors, or other computers. The server computer 110 may be a physical server computer and/or virtual server instance stored in a data center, such as through cloud computing; Para 0061, server computer 110 stores a classification model. The classification model comprises computer readable instructions which, when executed by one or more processors, cause the server computer 110 to compute one or more output outcomes or labels based on input call transcripts],
training (Para 0066, trained) the machine learning model (Para 0066, The model), using the labels (Para 0066, labeled data) and the plurality of data pieces (Para 0066, labeled and unlabeled data) [Para 0066, The model may be trained using labeled data or a mixture of labeled and unlabeled data in a semi-supervised mode.].
McCourt is analogous to the claimed invention as they both relate to ML classification. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kobayashi’s teachings to incorporate the teachings of McCourt and provide performing training with labeled data [McCourt, Para 0012] in order to classify data in datasets to improve machine learning model predictions.
Kobayashi-McCourt teach the above limitations of claim 1 including the first plurality of attributes, the first plurality of data pieces, and the second plurality of data pieces (Kobayashi, Sect 3, para 1).
Kobayashi-McCourt do not teach the first plurality of attributes corresponding to top N attributes having first to N-th largest differences between the first plurality of data pieces and the second plurality of data pieces than other attributes, where N is a natural number; displaying a plurality of indices representing a combination of the first attributes on a displaying device; or receiving information indicative of one or more first indices selected from among the plurality of indices displayed on the displaying device by a user.
Sabes teaches,
attributes corresponding to top N attributes having first to N-th largest differences between data pieces (Col 20, lines 11-20, top q features) and second data pieces (Col 20, lines 11-20, features with a zero coefficient) than other attributes, where N is a natural number [Col 20, lines 11-20, Operation 410 may, in some examples, select a top q features 412 from a ranked list of features 414 (where q is a positive integer), ranked according to scores determined based at least in part on a feature selection and/or feature reduction algorithm. The feature reduction and/or feature selection algorithm may comprise, for example, Spearman’s rank correlation between each individual probe and the target label, retaining only the features with a non-zero coefficient from an ElasticNet model].
Sabes is analogous to the claimed invention as they both relate to ML classification. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kobayashi’s teachings to incorporate the teachings of Sabes and provide top N attributes in order to optimize an algorithm by only selecting the best training samples.
Kobayashi-McCourt-Sabes teach the above limitations of claim 1 including the first attributes (Kobayashi, Sect 3, para 1).
Kobayashi-McCourt-Sabes do not teach displaying a plurality of indices representing a combination of the first attributes on a displaying device, or receiving information indicative of one or more first indices selected from among the plurality of indices displayed on the displaying device by a user.
Belyaev teaches,
displaying a plurality of indices representing a combination of attributes on a displaying device [Para 0076, a Gini decrease index may be ascertained for one or more features during training. A Gini decrease index indicates importance of a feature in that it accounts for a feature's effect on a classifier's functioning relative to how much influence other features have in driving a generated prediction... for a given report reported in the form of a GUI, the GUI may display features of only such features whose importance meets or exceeds a predetermined minimum importance threshold. For example, a report might include identifiers of only such features whose importance, represented numerically as Gini decrease index, squared exceeds 0.1].
Belyaev is analogous to the claimed invention as they both relate to ML classification. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kobayashi’s teachings to incorporate the teachings of Belyaev and provide displaying a plurality of indices representing a combination of attributes on a displaying device in order to improve user experience when analyzing data attributes.
Kobayashi-McCourt-Sabes-Belyaev do not teach receiving information indicative of one or more first indices selected from among the plurality of indices displayed on the displaying device by a user.
Mallikarjun teaches,
receiving information indicative of one or more first indices (Para 0057, final risk index score) selected from among the plurality of indices displayed on the displaying device (Para 0057, computer display devices) by a user (Para 0057, users can request different kinds of visualizations) [Para 0057, the process of FIG. 3 is programmed to asynchronously generate and visually present one or more graphical visualizations of the classification output and/or final risk index score… the process of FIG. 3 could be integrated into an interactive program with a user interface by which the analyst computer 120 (FIG. 1) or other computers of end users can request different kinds of visualizations. In an embodiment, block 316 can be programmed to generate code capable of rendering in a browser, using a display driver, using a graphics library, or using other programs to cause writing, outputting, or otherwise presenting graphical images, charts, tables, plots, or other representations of the individual classification outputs, and/or the final risk index score, in computer display devices].
Mallikarjun is analogous to the claimed invention as they both relate to ML classification. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kobayashi’s teachings to incorporate the teachings of Mallikarjun and provide receiving information indicative of one or more first indices selected from among the plurality of indices displayed on the displaying device by a user in order to improve user experience when analyzing data attributes.
Regarding claim 8, Kobayashi-McCourt teach the limitations of claim 7 including the determination.
Claim 8 is an information processing apparatus claim that recites limitations identical to those of claim 2. Therefore, claim 8 is rejected using the same rationale applied to claim 2.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SYED RAYHAN AHMED whose telephone number is (571)270-0286. The examiner can normally be reached Mon-Fri ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SYED RAYHAN AHMED/Examiner, Art Unit 2126
/DAVID YI/Supervisory Patent Examiner, Art Unit 2126