DETAILED ACTION
This action is in response to the filing on 08/29/2022. Claims 1-20 are pending and have been considered below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The disclosure is objected to because of the following informalities:
Para. 72, line 1 recites "the identified features group", should recite -- the identified feature group --.
Para. 73, line 1 recites "the identified features group", should recite -- the identified feature group --.
Para. 101, line 6 recites "the features in Sare set", should recite -- the features in S are set --.
Para. 108, lines 1-2 recite "the identified features group", should recite -- the identified feature group --.
Para. 109, lines 1-2 recite "the identified features group", should recite -- the identified feature group --.
Appropriate corrections are required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2-12 and 16-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 2 recites the limitation “the identified features group” in line 1. There is insufficient antecedent basis for this limitation in the claim. For the purpose of examination, it will be interpreted as "the identified feature group".
Claim 3 recites the limitation “the identified features group” in line 1. There is insufficient antecedent basis for this limitation in the claim. For the purpose of examination, it will be interpreted as "the identified feature group".
Claim 8 recites the limitation "the mutual information" in line 1. There is insufficient antecedent basis for this limitation in the claim.
Claim 9 recites the limitation "the mutual information" in lines 1-2. There is insufficient antecedent basis for this limitation in the claim.
Claim 12 recites the limitation “an information theoretic dependence metric” in line 2. It is unclear what an information theoretic dependence metric encompasses, rendering the claim indefinite. For the purpose of examination, it will be interpreted as “a dependence metric”.
Claim 16 recites the limitation “the identified features group” in line 1. There is insufficient antecedent basis for this limitation in the claim. For the purpose of examination, it will be interpreted as "the identified feature group".
Claim 17 recites the limitation “the identified features group” in line 1. There is insufficient antecedent basis for this limitation in the claim. For the purpose of examination, it will be interpreted as "the identified feature group".
Claims 4-7, 10-11, and 18-19 are also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for depending upon an indefinite parent claim.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claim 20 recites a “tangible machine-readable storage medium”, which the specification, in para. 62, defines as including any medium that is capable of storing, encoding, or carrying instructions for execution by the computing machine and causing the computing machine to perform any one or more of the techniques of the present disclosure. Para. 62 further provides examples of non-transitory mediums; however, it does not explicitly exclude transitory forms of signal transmission, also known as signals per se, and thus the claim is not eligible under 35 U.S.C. 101 Step 1 (see MPEP § 2106.03(I)). Claim 20 could be amended to recite, for example, “A non-transitory machine-readable storage medium” to fall within one of the four statutory categories; thus, the analysis under 35 U.S.C. 101 will continue.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Independent Claims 1, 15, and 20
Step 1:
Claims 1, 15, and 20 recite a method, a system, and a manufacture, respectively; therefore, they are directed to one of the four categories of statutory subject matter (process/method, machine/product/apparatus, manufacture, or composition of matter).
Step 2A Prong 1:
Claims 1, 15, and 20 recite a method, system, and manufacture comprising:
identifying a feature group used by the artificial intelligence model — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mental process, or a concept that can be performed in the human mind with the use of a physical aid (e.g., pen and paper), including observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)(III)), or a mathematical concept (see MPEP § 2106.04(a)(2)(I)), specifically organizing information and manipulating information through mathematical correlations.
the feature group comprising at least two features having a similarity with one another exceeding a similarity threshold — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mathematical concept (see MPEP § 2106.04(a)(2)(I)), specifically organizing information and manipulating information through mathematical correlations.
determining an overall influence value for the feature group on an output of the artificial intelligence model applied to the dataset — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mental process, or a concept that can be performed in the human mind with the use of a physical aid (e.g., pen and paper), including observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)(III)), or a mathematical concept (see MPEP § 2106.04(a)(2)(I)), specifically organizing information and manipulating information through mathematical correlations.
Step 2A Prong 2:
This judicial exception is not integrated into a practical application.
Claim 1 recites the additional elements of:
accessing, at a computing machine, an artificial intelligence model and a dataset for the artificial intelligence model, the dataset comprising at least one datapoint — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to a generic computing machine with access to an AI model and a dataset.
wherein the feature group comprises a subset of the features used by the artificial intelligence model — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), storing and retrieving information in memory).
providing an output representing the overall influence value — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), performing repetitive calculations).
Claim 15 recites the additional elements of:
A system comprising: a memory comprising instructions; and one or more computer processors, wherein the instructions, when executed by the one or more computer processors, cause the system to perform operations comprising — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to a generic system with generic computer components.
accessing, at a computing machine, an artificial intelligence model and a dataset for the artificial intelligence model, the dataset comprising at least one datapoint — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to a generic computing machine with access to an AI model and a dataset.
wherein the feature group comprises a subset of the features used by the artificial intelligence model — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), storing and retrieving information in memory).
providing an output representing the overall influence value — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), performing repetitive calculations).
Claim 20 recites the additional elements of:
A tangible machine-readable storage medium (analyzed as “A non-transitory machine-readable storage medium” as indicated above in the 35 U.S.C. 101 non-statutory subject matter rejection) including instructions that, when executed by a machine, cause the machine to perform operations comprising — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to a generic computer component.
accessing, at a computing machine, an artificial intelligence model and a dataset for the artificial intelligence model, the dataset comprising at least one datapoint — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to a generic computing machine with access to an AI model and a dataset.
wherein the feature group comprises a subset of the features used by the artificial intelligence model — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), storing and retrieving information in memory).
providing an output representing the overall influence value — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), performing repetitive calculations).
Step 2B:
The claims do not contain significantly more than the judicial exception.
Claim 1 recites the additional elements of:
accessing, at a computing machine, an artificial intelligence model and a dataset for the artificial intelligence model, the dataset comprising at least one datapoint — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to a generic computing machine with access to an AI model and a dataset.
wherein the feature group comprises a subset of the features used by the artificial intelligence model — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), storing and retrieving information in memory).
providing an output representing the overall influence value — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), performing repetitive calculations).
Claim 15 recites the additional elements of:
A system comprising: a memory comprising instructions; and one or more computer processors, wherein the instructions, when executed by the one or more computer processors, cause the system to perform operations comprising — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to a generic system with generic computer components.
accessing, at a computing machine, an artificial intelligence model and a dataset for the artificial intelligence model, the dataset comprising at least one datapoint — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to a generic computing machine with access to an AI model and a dataset.
wherein the feature group comprises a subset of the features used by the artificial intelligence model — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), storing and retrieving information in memory).
providing an output representing the overall influence value — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), performing repetitive calculations).
Claim 20 recites the additional elements of:
A tangible machine-readable storage medium (analyzed as “A non-transitory machine-readable storage medium” as indicated above in the 35 U.S.C. 101 non-statutory subject matter rejection) including instructions that, when executed by a machine, cause the machine to perform operations comprising — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to a generic computer component.
accessing, at a computing machine, an artificial intelligence model and a dataset for the artificial intelligence model, the dataset comprising at least one datapoint — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to a generic computing machine with access to an AI model and a dataset.
wherein the feature group comprises a subset of the features used by the artificial intelligence model — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), storing and retrieving information in memory).
providing an output representing the overall influence value — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), performing repetitive calculations).
As such, claims 1, 15, and 20 are not patent eligible.
Dependent Claims 2-14 and 16-19
Step 1:
Claims 2-14 and 16-19 recite a method and a system, respectively; therefore, they are directed to one of the four categories of statutory subject matter (process/method, machine/product/apparatus, manufacture, or composition of matter).
Step 2A Prong 1:
Claims 2-14 and 16-19 merely narrow the previously cited abstract-idea limitations. For the reasons described above with respect to independent claims 1 and 15, this judicial exception is not meaningfully integrated into a practical application, nor do the claims amount to significantly more than the abstract idea. The claims recite limitations similar to those described for the independent claims above and do not provide anything more than the abstract idea.
Claims 2 and 16 recite a method and system comprising:
wherein the identified features group (the identified features group interpreted as the identified feature group per the 35 U.S.C. 112(b) rejection above) is manually identified by a user of the computing machine — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mental process, or a concept that can be performed in the human mind with the use of a physical aid (e.g., pen and paper), including observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)(III)).
Claims 3 and 17 recite a method and system comprising:
wherein the identified features group (the identified features group interpreted as the identified feature group per the 35 U.S.C. 112(b) rejection above) is identified semi-automatically or fully-automatically — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mental process, or a concept that can be performed in the human mind with the use of a physical aid (e.g., pen and paper), including observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)(III)), or a mathematical concept (see MPEP § 2106.04(a)(2)(I)), specifically organizing information and manipulating information through mathematical correlations.
Claims 4 and 18 recite a method and system comprising:
wherein the feature group is identified, at the computing machine, using an undirected weighted graph data structure — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mathematical concept (see MPEP § 2106.04(a)(2)(I)), specifically organizing information and manipulating information through mathematical correlations.
Claims 5 and 19 recite a method and system comprising:
wherein the graph data structure comprises vertices representing features and edge weights representing similarities between features — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mathematical concept (see MPEP § 2106.04(a)(2)(I)), specifically organizing information and manipulating information through mathematical correlations.
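For illustration only (this sketch is the editor's, not the application's or LB's implementation), such a graph data structure can be modeled with vertices for features and weighted edges for pairwise similarities; feature groups then fall out as the connected components that remain after discarding edges at or below a similarity threshold. The feature names and similarity values below are hypothetical:

```python
# Illustrative sketch: undirected weighted graph over features; groups are
# connected components of the graph after thresholding edge weights.

def feature_groups(features, similarity, threshold):
    """Return feature groups as connected components of the thresholded graph.

    `similarity` maps frozenset({f1, f2}) to an edge weight in [0, 1].
    """
    # Build adjacency lists, keeping only edges above the threshold.
    adj = {f: set() for f in features}
    for i, a in enumerate(features):
        for b in features[i + 1:]:
            if similarity.get(frozenset((a, b)), 0.0) > threshold:
                adj[a].add(b)
                adj[b].add(a)

    # Depth-first search to collect connected components.
    seen, groups = set(), []
    for f in features:
        if f in seen:
            continue
        stack, component = [f], set()
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            component.add(v)
            stack.extend(adj[v] - seen)
        groups.append(component)
    return groups


# Hypothetical features and pairwise similarities.
sims = {
    frozenset(("age", "birth_year")): 0.97,  # near-duplicate features
    frozenset(("income", "spend")): 0.82,
    frozenset(("age", "income")): 0.10,
}
print(feature_groups(["age", "birth_year", "income", "spend"], sims, 0.8))
```

With a 0.8 threshold, the weak age/income edge is dropped and two groups remain: {age, birth_year} and {income, spend}.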
Claim 6 recites a method comprising:
wherein the similarities between the features correspond to an absolute value of a correlation between the features — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mathematical concept (see MPEP § 2106.04(a)(2)(I)), specifically organizing information and manipulating information through mathematical correlations.
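For illustration only, the absolute-value-of-correlation similarity recited in claim 6 can be sketched as the magnitude of the Pearson correlation between two feature columns; the sample data below is hypothetical:

```python
# Illustrative sketch: similarity as |Pearson correlation| between features.
import math

def abs_correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return abs(cov / (sx * sy))

# Perfectly anti-correlated features are still maximally "similar" here,
# because the absolute value discards the sign.
print(abs_correlation([1, 2, 3, 4], [8, 6, 4, 2]))  # 1.0
```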
Claim 7 recites a method comprising:
wherein the similarities between the features correspond to a measure of mutual information between the features or an information gain from a first feature from among the features to a second feature from among the features — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mental process, or a concept that can be performed in the human mind with the use of a physical aid (e.g., pen and paper), including observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)(III)), or a mathematical concept (see MPEP § 2106.04(a)(2)(I)), specifically organizing information and manipulating information through mathematical correlations.
Claim 8 recites a method comprising:
wherein the similarities between the features correspond to a symmetrized version of the mutual information — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mathematical concept (see MPEP § 2106.04(a)(2)(I)), specifically organizing information and manipulating information through mathematical correlations.
Claim 9 recites a method comprising:
wherein the symmetrized version of the mutual information comprises a symmetric uncertainty — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mathematical concept (see MPEP § 2106.04(a)(2)(I)), specifically organizing information and manipulating information through mathematical correlations.
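For illustration only, one common symmetrized form of mutual information is the symmetric uncertainty, SU(X, Y) = 2·I(X; Y) / (H(X) + H(Y)), which rescales mutual information into [0, 1]. A minimal sketch over discrete samples (the data is hypothetical, not drawn from the application):

```python
# Illustrative sketch: symmetric uncertainty from entropies of discrete samples.
import math
from collections import Counter

def entropy(xs):
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    # I(X;Y) = H(X) + H(Y) - H(X,Y)
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def symmetric_uncertainty(xs, ys):
    hx, hy = entropy(xs), entropy(ys)
    if hx + hy == 0:
        return 0.0
    return 2 * mutual_information(xs, ys) / (hx + hy)

x = [0, 0, 1, 1]
print(symmetric_uncertainty(x, x))             # identical features -> 1.0
print(symmetric_uncertainty(x, [0, 1, 0, 1]))  # independent here -> 0.0
```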
Claim 10 recites a method comprising:
wherein the similarities between the features correspond to a feature similarity metric — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mathematical concept (see MPEP § 2106.04(a)(2)(I)), specifically organizing information and manipulating information through mathematical correlations.
Claim 11 recites a method comprising:
wherein the similarity between the features corresponds to a statistical correlation metric — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mathematical concept (see MPEP § 2106.04(a)(2)(I)), specifically organizing information and manipulating information through mathematical correlations.
Claim 12 recites a method comprising:
wherein the similarity between the features corresponds to an information theoretic dependence metric (an information theoretic dependence metric interpreted as a dependence metric per the 35 U.S.C. 112(b) rejection above) — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mental process, or a concept that can be performed in the human mind with the use of a physical aid (e.g., pen and paper), including observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)(III)), or a mathematical concept (see MPEP § 2106.04(a)(2)(I)), specifically organizing information and manipulating information through mathematical correlations.
Claim 13 recites a method comprising:
wherein determining the overall influence value for the feature group on the output of the artificial intelligence model applied to the dataset comprises: determining a Shapley value of each and every feature in the feature group; and summing the determined Shapley values to compute the overall influence value — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mental process, or a concept that can be performed in the human mind with the use of a physical aid (e.g., pen and paper), including observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)(III)), or a mathematical concept (see MPEP § 2106.04(a)(2)(I)), specifically organizing information and manipulating information through mathematical correlations.
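For illustration only, the aggregation recited in claim 13 (per-feature Shapley values summed into a group influence) can be sketched with an exact Shapley computation. The toy value function `v` stands in for "model output when only the features in S are used"; it and the feature names are hypothetical:

```python
# Illustrative sketch: exact Shapley values per feature, summed over a group.
import itertools
import math

def shapley_value(feature, all_features, v):
    """Exact Shapley value of `feature` for coalition value function v(S)."""
    others = [f for f in all_features if f != feature]
    n = len(all_features)
    total = 0.0
    for r in range(len(others) + 1):
        for subset in itertools.combinations(others, r):
            s = frozenset(subset)
            # Weight |S|! (n - |S| - 1)! / n! for each coalition S.
            weight = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
            total += weight * (v(s | {feature}) - v(s))
    return total

features = ["f1", "f2", "f3"]
# Hypothetical additive value function: each feature contributes a fixed amount.
v = lambda s: 3.0 * ("f1" in s) + 2.0 * ("f2" in s) + 1.0 * ("f3" in s)

group = ["f1", "f2"]
group_influence = sum(shapley_value(f, features, v) for f in group)
print(group_influence)  # ≈ 5.0 for this additive toy model
```

Because the toy value function is additive, the summed Shapley values simply recover each feature's fixed contribution; for a real model the per-feature values would capture interaction effects as well.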
Claim 14 recites a method comprising:
wherein determining the overall influence value for the feature group on the output of the artificial intelligence model applied to the dataset comprises: computing a Shapley value of the feature group as the overall influence value — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mental process, or a concept that can be performed in the human mind with the use of a physical aid (e.g., pen and paper), including observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)(III)), or a mathematical concept (see MPEP § 2106.04(a)(2)(I)), specifically organizing information and manipulating information through mathematical correlations.
Step 2A Prong 2:
This judicial exception is not integrated into a practical application. Claims 2-14 and 16-19 do not recite any additional elements.
Step 2B:
The claims do not contain significantly more than the judicial exception. Claims 2-14 and 16-19 do not recite any additional elements.
As such, claims 2-14 and 16-19 are not patent eligible.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 13-17, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Le Biannic (US 2021/0182698 A1), hereinafter LB.
Regarding claim 1, LB teaches a method comprising (Techniques and solutions are described for analyzing results of a machine learning model. [see LB, Abstract]):
accessing, at a computing machine, an artificial intelligence model and a dataset for the artificial intelligence model, the dataset comprising at least one datapoint (LB discloses a computing environment capable of implementing the described disclosure [see LB, para. 165, FIG. 14], including a dataset used for a machine learning model [see LB, para. 141; FIG. 12]);
identifying a feature group used by the artificial intelligence model, the feature group comprising at least two features having a similarity with one another exceeding a similarity threshold, wherein the feature group comprises a subset of the features used by the artificial intelligence model (LB discloses a first plurality of features used by the machine learning model, and performing a variety of methods, alone or in combination, used to identify feature groups including a proper subset of the first plurality of features [see LB, para. 9]. LB further discloses determining feature groups based on relationships between features, using a variety of techniques, one technique involving determining mutual information between pairs of features [see LB, para. 112], and that this mutual information reflects the dependence between features [see LB, para. 113], and that the dependency information can be used to define feature groups, with features having a dependency within a given threshold considered as part of a common feature group [see LB, para. 115]);
determining an overall influence value for the feature group on an output of the artificial intelligence model applied to the dataset (LB discloses identifying the contribution of the feature groups on an output of the machine learning model result [see LB, para. 9]);
providing an output representing the overall influence value (LB discloses a UI with a panel for listing the plurality of feature groups and their respective contribution percentage to the result of the machine learning model [see LB, para. 137; FIG. 11]).
Regarding claim 2, LB as applied in claim 1 above teaches all the limitations of claim 1 and further teaches:
wherein the identified features group (the identified features group interpreted as the identified feature group per 35 U.S.C. 112(b) rejection above) is manually identified by a user of the computing machine (A user can manually assign features to feature groups, including changing features in a suggested feature group, breaking a feature group into sub groups, or combining two or more groups into a larger feature group. For example, feature groups may be initially selected by data relationships (e.g., being in a common table or related table) or data access consideration (e.g., joins). A user may then determine that features should be added to or removed from these groups, that two groups should be combined, a single group split into two or more groups, etc. [see LB, para. 41]).
Regarding claim 3, LB as applied in claim 1 above teaches all the limitations of claim 1 and further teaches:
wherein the identified features group (the identified features group interpreted as the identified feature group per 35 U.S.C. 112(b) rejection above) is identified semi-automatically (LB discloses that the feature groups can be identified semi-automatically with the user manually aiding the identification before or after the automatic identification [see LB, para. 117]) or fully-automatically (LB discloses determining feature groups based on relationships between features, using a variety of techniques, one technique involving determining mutual information between pairs of features [see LB, para. 112], and that this mutual information reflects the dependence between features [see LB, para. 113], and that the dependency information can be used to define feature groups, with features having a dependency within a given threshold considered as part of a common feature group [see LB, para. 115]).
Regarding claim 13, LB as applied in claim 1 above teaches all the limitations of claim 1 and further teaches:
wherein determining the overall influence value for the feature group on the output of the artificial intelligence model applied to the dataset comprises: determining a Shapley value of each and every feature in the feature group; and summing the determined Shapley values to compute the overall influence value (LB discloses aggregating the influence of the feature group by summing SHAP (Shapley additive explanation, [see LB, para. 52]) values of each feature in the feature group [see LB, para. 133-134]).
Regarding claim 14, LB as applied in claim 1 above teaches all the limitations of claim 1 and further teaches:
wherein determining the overall influence value for the feature group on the output of the artificial intelligence model applied to the dataset comprises: computing a Shapley value of the feature group as the overall influence value (LB discloses that a single variable or overall SHAP (Shapley additive explanation, [see LB, para. 52]) contribution can be calculated [see LB, para. 54]. Thus, it is possible to calculate the SHAP contribution of the overall feature group instead of a single variable in the feature group).
Regarding claim 15, claim 15 contains substantially similar limitations to those found in claim 1; therefore, it is rejected for the same reasons as claim 1 above. Additionally, LB further teaches:
A system comprising: a memory comprising instructions; and one or more computer processors, wherein the instructions, when executed by the one or more computer processors, cause the system to perform operations comprising (With reference to FIG. 14, the computing system 1400 includes one or more processing units 1410, 1415 and memory 1420, 1425. In FIG. 14, this basic configuration 1430 is included within a dashed line. The processing units 1410, 1415 execute computer-executable instructions, such as for implementing the technologies described in Examples 1-15. A processing unit can be a general-purpose central processing unit (CPU), processor in an application-specific integrated circuit (ASIC), or any other type of processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power. For example, FIG. 14 shows a central processing unit 1410 as well as a graphics processing unit or co-processing unit 1415. The tangible memory 1420, 1425 may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two, accessible by the processing unit(s) 1410, 1415. The memory 1420, 1425 stores software 1480 implementing one or more innovations described herein, in the form of computer-executable instructions suitable for execution by the processing unit(s) 1410, 1415. [see LB, para. 166 and FIG. 14]).
Regarding claim 16, claim 16 contains substantially similar limitations to those found in claim 2 above. Consequently, claim 16 is rejected for the same reasons.
Regarding claim 17, claim 17 contains substantially similar limitations to those found in claim 3 above. Consequently, claim 17 is rejected for the same reasons.
Regarding claim 20, claim 20 contains substantially similar limitations to those found in claim 1. Therefore, it is rejected for the same reasons as claim 1 above. Additionally, LB further teaches:
A tangible machine-readable storage medium (interpreted as A non-transitory machine-readable storage medium per 35 U.S.C. 112(b) rejection above) including instructions that, when executed by a machine, cause the machine to perform operations comprising (The present disclosure also includes computing systems and tangible, non-transitory computer readable storage media configured to carry out, or including instructions for carrying out, an above-described method. As described herein, a variety of other features and advantages can be incorporated into the technologies as desired. [see LB, para. 10]).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4-8 and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Le Biannic (US 2021/0182698 A1), hereinafter LB, as applied in claim 3 above, in view of Moradi et al. (A graph theoretic approach for unsupervised feature selection), hereinafter Moradi.
Regarding claim 4, LB as applied in claim 3 teaches all the limitations of claim 3 and further teaches:
wherein the feature group is identified, at the computing machine (LB discloses a computing environment capable of implementing the described disclosure [see LB, para. 165, FIG. 14], including determining feature groups [see LB, para. 141; FIG. 12]).
However, LB fails to teach wherein the feature group is identified using an undirected weighted graph data structure.
In the same field of endeavor, Moradi teaches:
wherein the feature group is identified using an undirected weighted graph data structure (In the first step the feature set is represented as a weighted graph in which each node in the graph denotes a feature and each edge weight indicates the similarity value between its corresponding features. In the second step, the features are divided into several clusters using a specific community detection method. The goal of clustering features is to group most correlated features into the same cluster. [see Moradi, Section 4, para. 1]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the identification of the feature group using an undirected weighted graph data structure, as suggested in Moradi, into LB because both methods perform feature grouping (see LB, Abstract; see Moradi, Abstract). Incorporating the techniques of Moradi into LB would produce consistently better classification accuracies (see Moradi, Abstract).
Regarding claim 5, the combination of LB and Moradi as applied in claim 4 teaches all the limitations of claim 4 and further teaches:
wherein the graph data structure comprises vertices representing features and edge weights representing similarities between features (In the first step the feature set is represented as a weighted graph in which each node in the graph denotes a feature and each edge weight indicates the similarity value between its corresponding features. In the second step, the features are divided into several clusters using a specific community detection method. The goal of clustering features is to group most correlated features into the same cluster. [see Moradi, Section 4, para. 1]).
Regarding claim 6, the combination of LB and Moradi as applied in claim 5 teaches all the limitations of claim 5 and further teaches:
wherein the similarities between the features correspond to an absolute value of a correlation between the features (Moradi discloses computing feature similarity using the absolute value of the Pearson product-moment correlation coefficient [see Moradi, Section 4.1, para. 1; Equation 6]).
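The construction mapped in claims 4-6 above (features as graph vertices, edge weights equal to the absolute value of the Pearson correlation between features) can be sketched as follows. The feature columns are hypothetical and supplied for illustration only:

```python
from math import sqrt

def pearson(x, y):
    """Pearson product-moment correlation coefficient of two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def feature_graph(features):
    """Undirected weighted graph: one vertex per feature, edge weight = |corr|."""
    names = list(features)
    edges = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            edges[(a, b)] = abs(pearson(features[a], features[b]))
    return names, edges

# Hypothetical feature columns of a dataset
feats = {"f1": [1, 2, 3, 4], "f2": [2, 4, 6, 8], "f3": [1, 3, 2, 4]}
vertices, edges = feature_graph(feats)
```

Because the similarity is symmetric, a single undirected edge per feature pair suffices; taking the absolute value treats strong negative correlation as high similarity, consistent with the claimed metric.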
Regarding claim 7, the combination of LB and Moradi as applied in claim 5 teaches all the limitations of claim 5 and further teaches:
wherein the similarities between the features correspond to a measure of mutual information between the features or an information gain from a first feature from among the features to a second feature from among the features (LB discloses determining feature groups based on relationships between features using a variety of techniques, one of which involves determining mutual information between pairs of features [see LB, para. 112]. This mutual information reflects the dependence between features [see LB, para. 113], and the dependency information can be used to define feature groups, with features having a dependency within a given threshold considered part of a common feature group [see LB, para. 115]).
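LB's thresholding technique as characterized above (features whose pairwise dependency meets a given threshold are placed in a common feature group) can be sketched as grouping by connected components over the dependency relation. The feature names, similarity values, and threshold below are hypothetical:

```python
def group_features(names, similarity, threshold):
    """Group features into connected components over edges meeting the threshold."""
    parent = {n: n for n in names}
    def find(n):  # union-find with path compression
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n
    for (a, b), s in similarity.items():
        if s >= threshold:
            parent[find(a)] = find(b)  # union the two components
    groups = {}
    for n in names:
        groups.setdefault(find(n), []).append(n)
    return list(groups.values())

# Hypothetical pairwise dependency values (e.g., mutual information estimates)
sim = {("a", "b"): 0.9, ("b", "c"): 0.8, ("c", "d"): 0.1}
print(group_features(["a", "b", "c", "d"], sim, threshold=0.5))
# → [['a', 'b', 'c'], ['d']]
```

Features "a", "b", and "c" chain together through above-threshold dependencies and form one feature group, while "d" remains on its own.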
Regarding claim 8, the combination of LB and Moradi as applied in claim 5 teaches all the limitations of claim 5 and further teaches:
wherein the similarities between the features correspond to a symmetrized version of the mutual information (LB discloses that the similarities between the features correspond to mutual information between one another [see LB, para. 112]. It would have been obvious to one of ordinary skill in the art that this mutual information is symmetrized: for any two features, for example X and Y, the mutual information between X and Y equals the mutual information between Y and X, and thus the similarity between X and Y is a symmetrized version of the mutual information).
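The symmetry observation above follows directly from the definition of mutual information, since I(X; Y) and I(Y; X) are computed from the same joint distribution. A small numeric check, using a hypothetical joint distribution chosen purely for illustration, confirms this:

```python
from math import log2

def mutual_information(joint):
    """I(X; Y) in bits, from a joint pmf given as {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p  # marginal of X
        py[y] = py.get(y, 0.0) + p  # marginal of Y
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Hypothetical joint distribution over (X, Y), and the same pairs reversed
joint_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
joint_yx = {(y, x): p for (x, y), p in joint_xy.items()}
assert abs(mutual_information(joint_xy) - mutual_information(joint_yx)) < 1e-12
```

Swapping the roles of X and Y leaves every term of the sum unchanged, so the measure is symmetric without any additional symmetrization step.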
Regarding claim 10, the combination of LB and Moradi as applied in claim 5 teaches all the limitations of claim 5 and further teaches:
wherein the similarities between the features correspond to a feature similarity metric (LB discloses determining feature groups based on relationships between features using a variety of techniques, one of which involves determining mutual information between pairs of features [see LB, para. 112]. This mutual information reflects the dependence between features [see LB, para. 113], and the dependency information can be used to define feature groups, with features having a dependency within a given threshold considered part of a common feature group [see LB, para. 115]).
Regarding claim 11, the combination of LB and Moradi as applied in claim 5 teaches all the limitations of claim 5 and further teaches:
wherein the similarity between the features corresponds to a statistical correlation metric (In some embodiments, feature groups can be determined by evaluating relationships between features. These relationships can be determined by various techniques, including using various statistical techniques. [see LB, para. 112]).
Regarding claim 12, the combination of LB and Moradi as applied in claim 5 teaches all the limitations of claim 5 and further teaches:
wherein the similarity between the features corresponds to an information theoretic dependence metric (an information theoretic dependence metric interpreted as a dependence metric per 35 U.S.C. 112(b) rejection above) (LB discloses determining feature groups based on relationships between features using a variety of techniques, one of which involves determining mutual information between pairs of features [see LB, para. 112]. This mutual information reflects the dependence between features [see LB, para. 113], and the dependency information can be used to define feature groups, with features having a dependency within a given threshold considered part of a common feature group [see LB, para. 115]).
Regarding claim 18, claim 18 contains substantially similar limitations to those found in claim 4 above. Consequently, claim 18 is rejected for the same reasons.
Regarding claim 19, claim 19 contains substantially similar limitations to those found in claim 5 above. Consequently, claim 19 is rejected for the same reasons.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Le Biannic (US 2021/0182698 A1), hereinafter LB, in view of Moradi et al. (A graph theoretic approach for unsupervised feature selection), hereinafter Moradi, as applied in claim 8 above, and further in view of Zhang et al. (Feature Selection Methods Based on Symmetric Uncertainty Coefficients and Independent Classification Information), hereinafter Zhang.
Regarding claim 9, the combination of LB and Moradi as applied in claim 8 above teaches all the limitations of claim 8 and further teaches:
the symmetrized version of the mutual information (LB discloses that the similarities between the features correspond to mutual information between one another [see LB, para. 112]. It would have been obvious to one of ordinary skill in the art that this mutual information is symmetrized: for any two features, for example X and Y, the mutual information between X and Y equals the mutual information between Y and X, and thus the similarity between X and Y is a symmetrized version of the mutual information).
However, the combination of LB and Moradi as applied in claim 8 above fails to teach the mutual information comprises a symmetric uncertainty.
In the same field of endeavor, Zhang teaches:
the mutual information comprises a symmetric uncertainty (Zhang discloses incorporating symmetric uncertainty to compensate for the bias of mutual information [see Zhang, Section III, Subsection A, part 1, para. 1]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate mutual information comprising a symmetric uncertainty, as suggested in Zhang, into the combination of LB and Moradi because both methods measure mutual information (see LB, para. 112; see Zhang, Section III, Subsection A, part 1, para. 1). One of ordinary skill in the art would have been motivated to do so because large variations in entropy can make raw mutual information values unreasonable, a bias that symmetric uncertainty compensates for [see Zhang, Section III, Subsection A, part 1, para. 1].
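Symmetric uncertainty, as conventionally defined in the feature-selection literature, is SU(X, Y) = 2·I(X; Y) / (H(X) + H(Y)), which normalizes mutual information to the range [0, 1] and compensates for its bias toward high-entropy features. A minimal sketch, with hypothetical joint distributions used only for illustration, shows the definition:

```python
from math import log2

def entropy(pmf):
    """Shannon entropy in bits of a pmf given as {value: probability}."""
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def symmetric_uncertainty(joint):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), normalized to [0, 1]."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    mi = sum(p * log2(p / (px[x] * py[y]))
             for (x, y), p in joint.items() if p > 0)
    return 2.0 * mi / (entropy(px) + entropy(py))

# Hypothetical joint pmf of two identical binary variables: SU = 1
perfectly_dependent = {(0, 0): 0.5, (1, 1): 0.5}
assert abs(symmetric_uncertainty(perfectly_dependent) - 1.0) < 1e-12
```

Perfectly dependent variables give SU = 1 and independent variables give SU = 0, which is the normalization property the motivation above relies on.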
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
MIURA et al. (US 2023/0032011 A1) teaches grouping features into groups of features such that the user can manually group features or the system can auto classify each feature, and the system can determine impact of each feature group.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAKE BREEN whose telephone number is (571)272-0456. The examiner can normally be reached Monday - Friday, 7:00 AM - 3:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Welch, can be reached at (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.T.B./Examiner, Art Unit 2143
/JENNIFER N WELCH/Supervisory Patent Examiner, Art Unit 2143