DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 1-20 are pending and examined herein.
Claims 1-20 are rejected.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claims 1-20 are not accorded the benefit of foreign priority to JP 2020-022822, filed 13 February 2020, or the benefit of priority to PCT/JP2021/004193 because an English translation of these documents has not been received. Thus, the effective filing date of claims 1-20 is 28 July 2022.
Information Disclosure Statement
The information disclosure statements (IDS) were received on 27 October 2022, 08 August 2023, and 06 May 2025. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.
Drawings
The drawings are objected to because Fig. 16 shows mislabeled partial views. The MPEP states partial views intended to form one complete view, on one or several sheets, must be identified by the same number followed by a capital letter (see 37 CFR 1.84(u)(1) and MPEP 608.02(V)). Therefore, Fig. 16 should be labeled Fig. 16A and Fig. 16B. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Interpretation
Claim 10 recites “wherein, in the quantification step, in a case in which a feature amount of the unknown sample belonging to any of pairwise-coupled classes is given under a threshold value set with reference to the learning data set, a probability of correctly discriminating a class to which the unknown sample belongs by the given feature amount is used”, which is a contingent limitation because the use of a probability of correctly discriminating a class to which the unknown sample belongs by the given feature amount is contingent on a case in which a feature amount of the unknown sample belonging to any pairwise-coupled class is under a threshold. The MPEP states at 2111.04(II), “The broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent are not met.” Therefore, the use of a probability of correctly discriminating a class to which the unknown sample belongs by the given feature amount is not required by the method because there exists an embodiment where this condition is not met.
Claim 18 recites “wherein, in a case in which, by a pairwise coupling... discriminable by at least one feature amount in all the pairwise couplings”, claim 19 recites “wherein, in a case in which, by the pairwise coupling… discriminable by at least 5 or more feature amounts in all the pairwise couplings”, and claim 20 recites “wherein, in a case in which, by the pairwise coupling… discriminable by at least 10 or more feature amounts in all the pairwise couplings”, which constitute nonfunctional descriptive material (see MPEP 2111.05). Independent claim 18 is directed to a feature amount set (which is a dataset/information) that is used by a multi-class classification device and encompasses data stored in a computer-readable medium. MPEP 2111.05(I)(B)(III) states “the computer-readable medium merely serves as a support for information or data, no functional relationship exists. For example, a claim to a memory stick containing tables of batting averages, or tracks of recorded music, utilizes the intended computer system merely as a support for the information. Such claims are directed toward conveying meaning to the human reader rather than towards establishing a functional relationship between recorded data and the computer”. Similarly, the wherein clauses in claims 18-20 describe the data inside a dataset, which encompasses a computer-readable medium that merely serves as a support for information or data.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1 and 16 recite “selecting a combination of the feature amount groups” in line 14 of claim 1 and lines 17-18 of claim 16. There is insufficient antecedent basis for this limitation in the claims. The indefiniteness arises because the claims do not make clear what “the feature amount groups” in this step are referring to. It is unclear because the claims previously recite a singular “feature amount group” for the known sample group in the learning set but do not provide a sufficient basis for multiple “feature amount groups”. Dependent claims 2-15 and 17 are rejected by virtue of their dependency on a rejected claim without alleviating the indefiniteness. For the sake of furthering examination, this limitation will be interpreted as selecting a combination of feature amounts in the selected feature amount group.
Claims 1 and 16 recite the limitation “the quantified discrimination possibilities for all the pairwise couplings” in lines 14-15 of claim 1 and lines 16-17 of claim 16. There is insufficient antecedent basis for this limitation in the claims. The indefiniteness arises because the claims do not make clear what “the quantified discrimination possibilities for all the pairwise couplings” are. The claims provide quantifying, by a pairwise coupling that combines two classes among the N classes, a singular discrimination possibility between the two classes in accordance with each feature amount of the selected feature amount group, but do not provide basis for multiple discrimination possibilities for multiple pairwise couplings. Dependent claims 2-15 and 17 are rejected by virtue of their dependency on a rejected claim without alleviating the indefiniteness. For the sake of furthering examination, claims 1 and 16 are interpreted as requiring quantifying discrimination possibilities for each pairwise coupling of each pair of two classes in the N classes to produce a quantified discrimination possibility for all pairwise couplings of two classes among the N classes.
Claim 2 recites “the given classes” which renders the metes and bounds of the claim indefinite. The indefiniteness arises because it is unclear which subset of classes of the N (two or more) classes the limitation “the given classes” is referring to. For the sake of furthering examination, this limitation will be interpreted as “the classes”.
Claim 3 recites “the feature amount to be selected” in line 6 of claim 3. There is insufficient antecedent basis for this limitation in the claim. The indefiniteness arises because the claim does not make clear what “the feature amount to be selected” in this step is referring to. Dependent claims 4 and 5 are rejected by virtue of their dependency on a rejected claim without alleviating the indefiniteness. For the sake of furthering examination, this limitation will be interpreted as a feature amount of the feature amounts to be selected in the selected feature amount group.
Claim 7 recites “inputting importance of the class” which renders the metes and bounds of the claim indefinite. The indefiniteness arises because it is unclear which class “the class” is referring to. For the sake of furthering examination, this limitation will be interpreted as “a class”.
Claim 8 recites “the pairwise coupling” which renders the metes and bounds of the claim indefinite. The indefiniteness arises because it is unclear which pairwise coupling of the multiple pairwise couplings in claim 1 that “the pairwise coupling” is referring to. Dependent claims 9-15 are rejected by virtue of their dependency on a rejected claim without alleviating the indefiniteness. For the sake of furthering examination, this limitation will be interpreted as one pairwise coupling of the pairwise couplings.
Claim 10 recites “the given feature amount” which renders the metes and bounds of the claim indefinite. The indefiniteness arises because it is unclear which feature amount the limitation “the given feature amount” is referring to. For the sake of furthering examination, this limitation will be interpreted as “the feature amount under the threshold value”.
Claim 11 recites “the discrimination possibility” which renders the metes and bounds of the claim indefinite. The indefiniteness arises because it is unclear which discrimination possibility of the multiple discrimination possibilities in claim 1 that “the discrimination possibility” is referring to. For the sake of furthering examination, this limitation will be interpreted as one discrimination possibility of the discrimination possibilities.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 18-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
The claims do not fall within at least one of the four categories of patent eligible subject matter because these claims are directed to a feature amount set (which is information). MPEP 2106.03(I) provides examples of claims that are not directed to any of the statutory categories which include products that do not have a physical or tangible form, such as information (often referred to as "data per se") or a computer program per se (often referred to as "software per se") when claimed as a product without any structural recitations. Therefore, claims 18-20 do not fall within at least one of the four categories because these claims are directed to data per se for claiming information claimed as a product without any structural recitations.
Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
(Step 1)
Claims 1-15 fall under the statutory category of a process. Claims 16 and 17 fall under the statutory category of a machine.
(Step 2A Prong 1)
Under the BRI, the instant claims recite judicial exceptions that are an abstract idea of the type that is in the grouping of a “mental process”, such as procedures for evaluating, analyzing, or organizing information, and forming a judgment or an opinion. The instant claims further recite judicial exceptions that are an abstract idea of the type that is in the grouping of a “mathematical concept”, such as mathematical relationships and mathematical equations.
Claim 1 recites mental processes of “selecting a feature amount group to be used for determining which of N (two or more) classes a sample belongs to…”, “an input step of inputting a learning data set including a known sample group belonging to a class…”, “a selection step of selecting a feature amount group needed for class determination for an unknown sample…”, “wherein the selection step includes a quantification step of quantifying, by a pairwise coupling that combines two classes among the N classes, a discrimination possibility between the two classes…” and “an optimization step of totalizing the quantified discrimination possibilities for all the pairwise couplings and selecting a combination of the feature amount groups for which a result of the totalization is optimized”.
Claim 1 recites mathematical concepts of “a quantification step of quantifying, by a pairwise coupling that combines two classes among the N classes, a discrimination possibility between the two classes…” and “an optimization step of totalizing the quantified discrimination possibilities for all the pairwise couplings”.
Claim 16 recites mental processes of “selection processing of selecting a feature amount group needed for class determination for an unknown sample of which a belonging class is unknown…”, “the selection processing includes a quantification processing of quantifying, by a pairwise coupling that combines two classes among the N classes, a discrimination possibility between the two classes…” and “optimization processing of totalizing the quantified discrimination possibilities for all the pairwise couplings and selecting a combination of the feature amount groups for which a result of the totalization is optimized”.
Claim 16 recites mathematical concepts of “the selection processing includes a quantification processing of quantifying, by a pairwise coupling that combines two classes among the N classes, a discrimination possibility between the two classes…” and “optimization processing of totalizing the quantified discrimination possibilities for all the pairwise couplings and selecting a combination of the feature amount groups for which a result of the totalization is optimized”.
Dependent claim 2 recites mental processes of “wherein the selection step further includes a first marking step of marking a part of the given classes as first discrimination unneeded class groups…” and “a first exclusion step of excluding the pairwise coupling in the marked first discrimination unneeded class groups…”. Dependent claim 3 recites mental processes of “wherein the selection step further includes a similarity evaluation step of evaluating similarity between the feature amounts based on the discrimination possibility for each pairwise coupling of each feature amount” and “a priority setting step of setting a priority of the feature amount to be selected, based on an evaluation result of the similarity”. Dependent claim 6 recites a mental process and mathematical concept of “a selected number input step of inputting a selected number M of the feature amounts in the selection step, wherein the optimization is maximizing a minimum value of a totalized value in all the pairwise couplings…”. Dependent claim 7 recites a mental process and mathematical concept of “wherein the optimization step includes an importance input step of inputting importance of the class or pairwise discrimination” and “a weighting step of performing weighting based on the importance in a case of the totalization”.
Dependent claim 8 recites mental processes of “determining, in a case in which N is an integer of 2 or more, which of N classes a sample belongs to, from a feature amount of the sample…”, “the input step and the selection step executed by using the feature amount selection method according to claim 1”, “a determination step of performing the class determination for the unknown sample which includes an acquisition step of acquiring a feature amount value of the selected feature amount group, and a class determination step of performing the class determination based on the acquired feature amount value” and “wherein, in the determination step, the class determination for the unknown sample is performed by configuring a multi-class discriminator…”. Dependent claim 12 recites mental processes of “a subclass setting step of clustering one or more samples belonging to the classes based on a given feature amount…”, “setting the formed cluster to a subclass in each class”, “a second marking step of marking each subclass in each class as second discrimination unneeded class groups…”, and “a second exclusion step of excluding the pairwise coupling in the marked second discrimination unneeded class groups…”. Dependent claim 14 recites mental processes of “a target threshold value input step of inputting a target threshold value T of a totalized value indicating a result of the totalization, wherein the optimization is setting a minimum value of the totalized value…”.
Dependent claim 17 recites mental processes of “selection processing”, “determination processing of performing the class determination for the unknown sample based on the selected feature amount group, which includes acquisition processing of acquiring a feature amount value of the selected feature amount group and class determination processing of performing the class determination based on the acquired feature amount value” and “in the determination processing, the class determination for the unknown sample is performed by configuring a multi-class discriminator …”.
The claims recite mental processes of analyzing/evaluating data, such as selecting feature subsets, quantifying the discriminatory power of features, totalizing the discriminatory power of features, marking classes based on criteria, evaluating similarity between features, inputting importance of a class or pairwise discrimination into the analysis, performing multi-class classification, clustering data into subclasses and marking subclasses based on criteria, and inputting thresholds into the analysis. The claims recite mathematical concepts of mathematical calculations of quantifying the discriminatory possibility of features and totalizing this quantified discriminatory possibility, which is a mathematical process comprising a series of calculations (see instant disclosure [0096]-[0104]). It is noted that the inputting steps of the method claims are interpreted as part of the abstract idea itself of analyzing the data because the BRI encompasses inputting data into an abstract process. Dependent claims 4, 5, 9-11, 13, and 15 further limit the mental process/mathematical concept recited in the independent claim but do not change their nature as a mental process/mathematical concept.
(Step 2A Prong 2)
Claims found to recite a judicial exception under Step 2A, Prong 1 are then further analyzed to determine if the claims as a whole integrate the recited judicial exception into a practical application or not (Step 2A, Prong 2). Integration into a practical application is evaluated by identifying whether there are any additional elements recited in the claim and evaluating those additional elements to determine whether they integrate the exception into a practical application.
The additional element in claims 16 and 17 of a generic computer (i.e., the devices which comprise first processor and second process) for performing judicial exceptions does not integrate the judicial exceptions into a practical application because this is applying the judicial exceptions to a generic computer without an improvement to computer technology. The additional element of the generic computer only interacts with the judicial exceptions in a manner which the generic computer is used as a tool to perform the judicial exceptions.
The additional element in claim 16 of performing input processing of inputting a learning data set including a known sample group belonging to a given class, which is a target, and a feature amount group of the known sample group, and the additional element in claim 17 of performing the input processing, do not integrate the judicial exceptions into a practical application because this is adding the insignificant extra-solution activity of data gathering. These additional elements only interact with the judicial exceptions by providing data to the judicial exceptions to be processed. It is noted that the content of the data falls under the abstract idea itself and the content of the data does not change the active step of inputting data into a computer environment.
Thus, the additional elements do not integrate the judicial exceptions into a practical application and claims 1-17 are directed to the abstract idea.
(Step 2B)
Claims found to be directed to a judicial exception are then further evaluated to determine if the claims recite an inventive concept that provides significantly more than the judicial exception itself (Step 2B). The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because:
The additional element in claims 16 and 17 of a generic computer is conventional as shown by MPEP 2106.05(b) and MPEP 2106.05(d)(II).
The additional element in claim 16 of performing input processing of inputting a learning data set including a known sample group belonging to a given class, which is a target, and a feature amount group of the known sample group, and the additional element in claim 17 of performing the input processing, are conventional as shown by MPEP 2106.05(b) and MPEP 2106.05(d)(II). It is noted that the content of the data falls under the abstract idea itself and the content of the data does not change the active step of inputting data into a computer environment.
Thus, the additional elements are not sufficient to amount to significantly more than the judicial exception because they are conventional.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 6, 8-11, and 13-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Song et al. (Expert Systems with Applications 81 (2017): 22-27).
Claim 1 is directed to a feature amount selection method of selecting a feature amount group to be used for determining which of N (two or more) classes a sample belongs to, the method comprising: an input step of inputting a learning data set including a known sample group belonging to a given class, which is a target, and a feature amount group of the known sample group;
Song et al. shows a feature selection method for selecting an optimal feature subset for determining which class of two or more classes a sample belongs to (Song et al. pages 23-24 section “Multi-class F-score with FDA” and page 25 left col. Figure 2). Song et al. shows inputting a learning data set (denoted as D) including a known sample group belonging to a given class (which includes samples grouped by given classes), which is a target, and a feature amount group of the known sample group (which includes features in groups of the grouped samples) (Song et al. page 23 right col., page 24 left col., and page 25 left col. Figure 2).
and a selection step of selecting a feature amount group needed for class determination for an unknown sample of which a belonging class is unknown, from the feature amount group based on the learning data set,
Song et al. shows a selection process of selecting features needed for class determination of unknown samples from the feature groups based on the learning data set (Song et al. page 25 left col. Figure 2).
wherein the selection step includes a quantification step of quantifying, by a pairwise coupling that combines two classes among the N classes, a discrimination possibility between the two classes in accordance with each feature amount of the feature amount group by using the learning data set,
Song et al. shows this selection process includes quantifying a discrimination possibility by computing, for each feature amount in the feature amount group, the average distance between the center locations of classes on a pairwise basis, utilizing the means of each feature for each class of the particular pair of classes, the number of samples in each respective class, and the total number of samples of the data set (Song et al. page 23 right col.).
and an optimization step of totalizing the quantified discrimination possibilities for all the pairwise couplings and selecting a combination of the feature amount groups for which a result of the totalization is optimized.
Song et al. shows totalizing discrimination possibilities for all the pairwise couplings by calculating the evaluation criterion which reflects how well a particular feature is correlated with a particular class (Song et al. page 24 left col.). Song et al. shows an iterative process which selects an optimal feature subset by selecting a feature for which a result of the totalization is optimized (i.e., above a threshold value which is iteratively set based on classification accuracy and the feature ranking criterion) (Song et al. page 24 left col.- right col. and page 25 left col. Figure 2).
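For orientation, the quantification and totalization mapped above can be illustrated with a minimal sketch. The separation measure used here (distance between per-class feature means, normalized by the pooled range) is an illustrative stand-in rather than Song et al.'s exact FDAF-score, and the function and variable names are hypothetical.

```python
from itertools import combinations

def pairwise_totals(data, n_classes, n_features):
    """For each feature, quantify a discrimination possibility for every
    pairwise coupling of two classes, then totalize over all couplings.

    `data[c]` is a list of feature vectors for class c. The per-pair
    measure (normalized distance between class means) is illustrative.
    """
    totals = [0.0] * n_features
    for a, b in combinations(range(n_classes), 2):   # all pairwise couplings
        for f in range(n_features):
            xa = [row[f] for row in data[a]]
            xb = [row[f] for row in data[b]]
            ma, mb = sum(xa) / len(xa), sum(xb) / len(xb)
            spread = (max(xa + xb) - min(xa + xb)) or 1.0
            totals[f] += abs(ma - mb) / spread       # quantified possibility
    return totals
```

A feature that separates every pair of classes accumulates a large total, while a constant feature contributes nothing, which is the sense in which a combination of features optimizing the totalization would be selected.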
Claim 16 is directed to a device comprising a processor (which encompasses a generic computer) for performing the steps of claim 1.
Song et al. shows utilizing a computer with a processor for implementing the steps of the method of claim 1 (Song et al. page 24 right col.).
Claim 6 is directed to a selected number input step of inputting a selected number M of the feature amounts in the selection step, wherein the optimization is maximizing a minimum value of a totalized value in all the pairwise couplings in accordance with M selected feature amounts.
Song et al. shows a process of inputting a selected number of features 1-M to compute the feature ranking criterion of each feature (Song et al. page 24 right col. step 3 of the algorithm). Song et al. shows optimizing the minimum cutoff value of the totalized value in accordance with the selected feature amounts (Song et al. page 24 right col. step 4 - step 7 of the algorithm).
Claim 8 is directed to determining, in a case in which N is an integer of 2 or more, which of N classes a sample belongs to, from a feature amount of the sample, the method comprising: the input step and the selection step executed by using the feature amount selection method according to claim 1;
Song et al. shows the input step and the selection step by using the feature selection method according to claim 1 as shown above (Song et al. page 23 right col., page 24 left col. - right col., and page 25 left col. Figure 2).
and a determination step of performing the class determination for the unknown sample based on the selected feature amount group, which includes an acquisition step of acquiring a feature amount value of the selected feature amount group and a class determination step of performing the class determination based on the acquired feature amount value,
Song et al. shows performing class determination for an unknown sample based on the selected feature amount group, which includes acquiring feature amount values of the selected feature amount group and a class determination step of performing class determination based on the acquired feature amount values, as performing testing using the multi-class support vector machine (SVM) classification algorithm utilizing the feature amount values of the identified optimal feature subset (Song et al. page 24 right col. and page 25 Table 3).
wherein, in the determination step, the class determination for the unknown sample is performed by configuring a multi-class discriminator that uses the selected feature amount group in association with the pairwise coupling.
Song et al. shows performing class determination for an unknown sample by configuring a multi-class classification algorithm with a one-against-one approach with support vector machine models (Song et al. page 24 right col.).
Claim 17 is directed to a device comprising a processor (which encompasses a generic computer) for performing the steps of claim 8.
Song et al. shows utilizing a computer with a processor for implementing the steps of the method of claim 8 (Song et al. page 24 right col.).
Claim 9 is directed to wherein, in the quantification step, a statistically significant difference in the feature amounts in the learning data set between pairwise-coupled classes is used.
Song et al. shows using a statistically significant difference in the feature amounts between pairwise-coupled classes as the FDAF-score, which finds the most discriminative feature amounts, which is interpreted as a statistically significant difference, in the learning data utilizing the average between-class distance and within-class scatter matrix (Song et al. page 24 left col.).
Claim 10 is directed to wherein, in the quantification step, in a case in which a feature amount of the unknown sample belonging to any of pairwise-coupled classes is given under a threshold value set with reference to the learning data set, a probability of correctly discriminating a class to which the unknown sample belongs by the given feature amount is used.
The BRI of the method does not require using a probability of correctly discriminating a class in this manner because it is contingent on a case in which a feature amount is below a threshold and there exists an embodiment of the method where the condition of a feature amount being below a threshold is not met. Further, this claim is anticipated by Song et al. because Song et al. shows the required method steps.
Claim 11 is directed to in the quantification step, a quantification value of the discrimination possibility is a value obtained by performing multiple test correction on a statistical probability value by the number of feature amounts.
Song et al. shows the difference of means of a feature for two classes is multiplied by the summation of the number of samples in the two classes divided by total number of samples which is interpreted as a multiple test correction on a statistical probability value using the number of feature amounts because this is a correction factor of accounting for relative number of samples in each of the two-class comparison (Song et al. page 23 right col.).
Claim 13 is directed to wherein the totalization is calculating a total value or an average value of quantitative values of the discrimination possibility.
Song et al. shows calculating a total value of the discrimination possibility (Song page 24 left col.).
Claim 14 is directed to a target threshold value input step of inputting a target threshold value T of a totalized value indicating a result of the totalization, wherein the optimization is setting a minimum value of the totalized value in all the pairwise couplings by a selected feature amount to be equal to or more than the target threshold value T.
Song et al. shows inputting a target threshold value λ of a totalized value where the optimization process readjusts λ based on classification performance to find the minimum value of λ to make selections of optimal features using the inequality of J(xk) ≥ λ which illustrates that the feature xk is the most discriminative (Song et al. page 24 right col.).
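For illustration only, the threshold-based selection J(xk) ≥ λ described above may be sketched as follows (an illustrative Python sketch; the criterion values and names are assumptions, not drawn from Song et al.):

```python
# Illustrative sketch of threshold-based feature selection, J(xk) >= lambda.
def select_features(criterion, lam):
    """Keep each feature xk whose criterion value satisfies J(xk) >= lam."""
    return [k for k, score in criterion.items() if score >= lam]

# Hypothetical criterion values J(xk) (not from the reference)
J = {"x1": 0.9, "x2": 0.4, "x3": 0.7}
print(select_features(J, lam=0.5))  # -> ['x1', 'x3']
```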
Claim 15 is directed to in the determination step, binary-class discriminators that each use the selected feature amount group in association with each pairwise coupling are configured, and the binary-class discriminators are combined to configure the multi-class discriminator.
Song et al. shows employing an SVM (support vector machine) classification algorithm with the one-against-one practice as the standard multi-class classifier (Song et al. page 24 right col.). The one-against-one practice builds multiple binary classification models (one for each pair of classes) which are combined to form a multi-class classification model.
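For illustration only, the one-against-one combination of binary classifiers may be sketched as follows (an illustrative majority-voting sketch with a hypothetical pairwise rule; it is not the SVM implementation of Song et al.):

```python
from collections import Counter
from itertools import combinations

def predict_one_vs_one(binary_predict, classes, x):
    """Combine one binary classifier per class pair by majority vote."""
    votes = Counter(binary_predict(i, j, x) for i, j in combinations(classes, 2))
    return votes.most_common(1)[0][0]

# Hypothetical binary rule standing in for a trained pairwise SVM:
# for any pair, the larger class label "wins".
def toy_binary(i, j, x):
    return max(i, j)

print(predict_one_vs_one(toy_binary, [0, 1, 2], None))  # -> 2
```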
Claim 18 is directed to a feature amount data set of the sample belonging to each class, which is a target, wherein, in a case in which, by a pairwise coupling that combines two classes among the N classes, a discrimination possibility between the two classes in accordance with each feature amount of the selected feature amount group is quantified with reference to the feature amount data set, the feature amount set is marked to be discriminable by at least one feature amount in all the pairwise couplings. Claim 19 is directed to wherein, in a case in which, by the pairwise coupling that combines the two classes among the N classes, the discrimination possibility between the two classes in accordance with each feature amount of the selected feature amount group is quantified with reference to the feature amount data set, the feature amount set is marked to be discriminable by at least 5 or more feature amounts in all the pairwise couplings. Claim 20 is directed to wherein, in a case in which, by the pairwise coupling that combines the two classes among the N classes, the discrimination possibility between the two classes in accordance with each feature amount of the selected feature amount group is quantified with reference to the feature amount data set, the feature amount set is marked to be discriminable by at least 10 or more feature amounts in all the pairwise couplings.
Song et al. shows a feature amount dataset (denoted as D) including a known sample group belonging to a given class (which includes samples grouped by given classes), which is a target, and a feature amount group of the known sample group (which includes features in groups of the grouped samples) (Song et al. page 23 right col., page 24 left col., and page 25 left col. Figure 2). It is noted that the wherein clauses of claims 18-20 are nonfunctional descriptive material as described in the claim interpretation section above.
Claims 1-5 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ji et al. (Electronics Letters, Vol. 36, No. 6, 2000).
A feature amount selection method of selecting a feature amount group to be used for determining which of N (two or more) classes a sample belongs to, the method comprising: an input step of inputting a learning data set including a known sample group belonging to a given class, which is a target, and a feature amount group of the known sample group;
Ji et al. shows inputting a learning data set including a known sample group belonging to a given class by showing datasets specific to classes, each containing a set of elements associated with a given class, and shows feature values for the elements in a given class (Ji et al. page 1 left col.).
and a selection step of selecting a feature amount group needed for class determination for an unknown sample of which a belonging class is unknown, from the feature amount group based on the learning data set,
Ji et al. shows a selection process for selecting features needed for class determination (which are used to determine the class of an unknown sample) from the features based on the learning data set (Ji et al. page 1 left col. – right col., i.e., steps 1-4 of the algorithm).
wherein the selection step includes a quantification step of quantifying, by a pairwise coupling that combines two classes among the N classes, a discrimination possibility between the two classes in accordance with each feature amount of the selected feature amount group by using the learning data set,
Ji et al. shows calculating the discriminatory power of feature Fk to discriminate classes Ci and Cj (Ji et al. page 1 left col.). Ji et al. shows the discriminatory power to discriminate these classes depends on the distribution of values in Vi(k) which is equal to the set of Fk(x) (the feature value for Fk) such that x is an element of the data set of Ci (class “i”) and Vj(k) which is equal to the set of Fk(x) (the feature value for Fk) such that x is an element of the dataset of Cj (class “j”) (Ji et al. page 1 left col.). Ji et al. shows that the discriminatory power of Fk for the pair of classes Ci and Cj is calculated as Pij(k) which is equal to the absolute difference of the mean of Vi(k) and the mean of Vj(k) divided by the summation of the variance of Vi(k) and the variance of Vj(k) (Ji et al. page 1 left col.).
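For illustration only, the discriminatory power calculation described above may be sketched as follows (an illustrative Python sketch; the sample values are assumptions, not drawn from Ji et al.):

```python
import statistics

def discriminatory_power(vi, vj):
    """Pij(k) = |mean(Vi(k)) - mean(Vj(k))| / (var(Vi(k)) + var(Vj(k)))."""
    return abs(statistics.mean(vi) - statistics.mean(vj)) / (
        statistics.pvariance(vi) + statistics.pvariance(vj)
    )

# Hypothetical feature values of Fk for samples of classes Ci and Cj
vi = [1.0, 2.0, 3.0]
vj = [7.0, 8.0, 9.0]
print(discriminatory_power(vi, vj))  # ~4.5; well-separated classes score high
```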
and an optimization step of totalizing the quantified discrimination possibilities for all the pairwise couplings and selecting a combination of the feature amount groups for which a result of the totalization is optimized.
Ji et al. shows totalizing the quantified possibilities for all the pairwise couplings as Tk, which indicates the relative discriminatory power of feature Fk for all pairs (i.e., the summation of the discriminatory power of a feature across all pairs of classes) (Ji et al. page 1 left col.). Ji et al. shows selecting a combination of features for which a result of the totalization is optimized: if the largest value appears in more than one feature, the feature which has the largest relative discriminatory power across all pairs of classes is chosen and marked as selected (Ji et al. page 1 right col.).
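For illustration only, the totalization Tk and the selection of the feature with the largest relative discriminatory power may be sketched as follows (an illustrative Python sketch; the pairwise values are hypothetical, not drawn from Ji et al.):

```python
# P[k] maps each class pair (i, j) to the discriminatory power Pij(k);
# the values below are hypothetical examples.
P = {
    "F1": {(0, 1): 4.5, (0, 2): 0.2, (1, 2): 0.1},
    "F2": {(0, 1): 1.0, (0, 2): 1.0, (1, 2): 1.0},
}

# Tk: relative discriminatory power of feature Fk across all class pairs
T = {k: sum(pairs.values()) for k, pairs in P.items()}
best = max(T, key=T.get)
print(best)  # the feature with the largest total discriminatory power
```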
Claim 2 is directed to a first marking step of marking a part of the given classes as first discrimination unneeded class groups that do not need to be discriminated from each other, and a first exclusion step of excluding the pairwise coupling in the marked first discrimination unneeded class groups from pairwise couplings to be expanded.
Ji et al. shows the number of classes is 10 and therefore the number of class pairs is 45 (= [10× (10 – 1)]/2) which shows excluding redundant class pairs that do not need to be expanded (i.e., both C12 and C21 are not required because the information from one class pair would be represented in the other and therefore one of these class pairs would be excluded from being expanded) (Ji et al. page 1 right col.).
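The pair count noted above can be confirmed with a one-line computation (illustrative Python):

```python
# Number of unordered class pairs for N classes: N * (N - 1) / 2,
# since redundant ordered pairs (e.g., C12 and C21) collapse into one.
n_classes = 10
n_pairs = n_classes * (n_classes - 1) // 2
print(n_pairs)  # -> 45, matching the count in Ji et al.
```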
Claim 3 is directed to wherein the selection step further includes a similarity evaluation step of evaluating similarity between the feature amounts based on the discrimination possibility for each pairwise coupling of each feature amount, and a priority setting step of setting a priority of the feature amount to be selected, based on an evaluation result of the similarity.
Ji et al. shows evaluating the similarity between feature amounts based on the discrimination possibility for each pairwise coupling of each feature amount as examining the discrimination possibilities for each pairwise comparison (Ji et al. page 1 right col.). Ji et al. shows setting a priority of the feature amount to be selected based on this evaluation of similarity: if at least one of the features with the largest value in the column has already been selected in order to cover the other pairs, nothing is done (i.e., the feature is not selected) and the algorithm returns to step 3; if only one feature has the largest value in the column, that feature is selected and marked as selected (Ji et al. page 1 right col.).
Claim 4 is directed to wherein the similarity is an overlap relationship and/or an inclusion relationship of the discrimination possibility for each pairwise coupling.
Ji et al. shows that this similarity is an inclusion relationship of the discrimination possibility for each pairwise coupling as examining whether the features with the largest values have already been selected in order to cover the other pairs (Ji et al. page 1 right col.).
Claim 5 is directed to wherein the similarity is a distance between discrimination possibility vectors for each pairwise coupling or a metric value in accordance with the distance.
Ji et al. shows examining the similarity of the discriminatory possibility of features utilizing a metric value Pij(k) which is equal to the absolute difference of the mean of Vi(k) and the mean of Vj(k) divided by the summation of the variance of Vi(k) and the variance of Vj(k) (Ji et al. page 1 left col.- right col.).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Song et al. (Expert Systems with Applications 81 (2017): 22-27) as applied to claim 1 under 35 U.S.C. 102 above, in view of Xu et al. (2012 15th International Conference on Information Fusion. IEEE, 2012).
Claim 7 is directed to an importance input step of inputting importance of the class or pairwise discrimination, and a weighting step of performing weighting based on the importance in a case of the totalization.
Song et al. does not show an importance input step of inputting importance of the class or pairwise discrimination, and a weighting step of performing weighting based on the importance in a case of the totalization.
Like Song et al., Xu et al. shows selecting optimal feature subsets. Xu et al. shows ensuring that the distinguishing features for each of the class pairs are selected, unless the discriminating features are not available, by selecting remaining features that discriminate the maximum number of class pairs that no previously selected feature is able to discriminate (Xu et al. page 4 right col. and page 5 left col.). This shows that an importance of the pairwise discrimination is integrated into the selection process for a feature by weighting a feature to be selected in each iteration (after the first selection) so as to select a feature that is discriminatory over an important pairwise discrimination (a class pair which no previous feature was able to discriminate).
An invention would have been obvious to one of ordinary skill in the art if some motivation in the prior art would have led that person to modify reference teachings to arrive at the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the feature selection process of Song et al. to use the importance criteria for selecting features of Xu et al. because this would allow for a method that selects features based on their ability to discriminate class pairs that no previously selected feature is able to discriminate, which increases the precision of minority class classification (Xu et al. page 1 right col. and page 4 right col. - page 5 left col.). One would have a reasonable expectation of success for this modification because Song et al. shows an iterative feature selection process while Xu et al. shows an iterative feature selection process which weights features based on their ability to discriminate class pairs that no previously selected feature is able to discriminate.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Song et al. (Expert Systems with Applications 81 (2017): 22-27) as applied to claim 1 under 35 U.S.C. 102 above, in view of Wan et al. (IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 2, pp. 409-422, 1 Feb. 2018).
Claim 12 is directed to a subclass setting step of clustering one or more samples belonging to the classes based on a given feature amount from the learning data set to form a cluster and setting the formed cluster to a subclass in each class; a second marking step of marking each subclass in each class as second discrimination unneeded class groups that do not need to be discriminated from each other in each class; and a second exclusion step of excluding the pairwise coupling in the marked second discrimination unneeded class groups from pairwise couplings to be expanded.
Song et al. does not show clustering one or more samples belonging to the classes based on the given feature amounts, a second marking step of marking subclasses in each class that do not need to be discriminated from each other in each class, and a second exclusion step of excluding the pairwise couplings in the subclasses marked as not needing to be discriminated from each other in each class.
Like Song et al., Wan et al. shows a process of performing discriminant analysis for separating classes. Wan et al. shows clustering samples in classes to generate subclasses of each class (Wan et al. page 412 right col. – page 413 left col.). Wan et al. shows performing discriminant analysis by measuring between-class scatter by the difference between subclass means and a single point of reference (i.e., the mean of all data instances), which shows that the pairwise couplings are a combination of each subclass and a reference and the differences between subclasses in each class are excluded from the pairwise coupling (Wan et al. page 414).
An invention would have been obvious to one of ordinary skill in the art if some motivation in the prior art would have led that person to modify reference teachings to arrive at the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the discriminant analysis of Song et al. to use the hierarchical clustering to divide a class into subclasses before applying discriminant analysis using redefined scatter matrices of Wan et al. because this would allow for a method that selects a subset of features using subclass discrimination that achieves higher class separation and reduces overlap of subclasses in classification tasks (Wan et al. page 409 abstract). One would have a reasonable expectation of success for this modification because Song et al. shows utilizing discriminant analysis for selecting a feature subset for classification while Wan et al. shows a process of performing clustering before performing discriminant analysis with re-defined scatter matrices.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA . A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA /25, or PTO/AIA /26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 2, 8, 10, 13, 15-20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1, 2, 4, 13, and 20 of copending Application No. 18/183832 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the instant claims are anticipated by claims 1, 2, 4, 13, and 20 of the copending application.
Regarding instant claim 1, the copending application shows the steps of inputting a learning data set of a known group belonging to a given class (claim 1 lines 5-6 copending application), selecting a feature group for class determination for an unknown sample (claim 1 lines 7-9 copending application), where the selection step includes a quantifying step of quantifying, by a pairwise coupling that combines two classes among the N classes, a discrimination possibility (claim 1 lines 10-14 copending application) and an optimizing step of totalizing the quantified discrimination possibilities for all the pairwise couplings and selecting a group of features for which a result of the totalization is to be optimized (claim 1 lines 15-17 copending application).
Regarding instant claim 2, the copending application shows marking a part of the given classes as first discrimination unneeded class groups that do not need to be discriminated from each other, and a first exclusion step of excluding the pairwise coupling of the marked first discrimination unneeded class groups from pairwise couplings to be expanded (claim 2 lines 3-6 copending application).
Regarding instant claim 8, the copending application shows a multi-class classification method with an acquisition step of acquiring a feature amount value of the selected feature group and performing the class determination based on the acquired feature amount value using a multi-class classification (copending claims 4 and 17).
Regarding instant claim 10, the BRI of the method does not require using a probability of correctly discriminating a class in this manner because it is contingent on a case in which a feature amount is below a threshold and there exists an embodiment of the method where the condition of a feature amount being below a threshold is not met. Further, this claim is anticipated by copending claim 1 which shows all the required steps of instant claim 10.
Regarding instant claim 13, the copending application shows a step of totalizing the quantified discrimination possibilities for all the pairwise couplings which would give a total value (copending claim 4 which includes the totalization step in claim 1).
Regarding instant claim 15, the copending application shows a multi-class classification which includes binary-class classification using a binary-class classifier associated with a pairwise coupling (copending claim 4).
Regarding instant claims 16 and 17, the copending application shows a feature selection device which performs the feature selection method (copending claim 20) and a multi-class classification device which performs the classification method (copending claim 13).
Regarding instant claims 18-20, the copending application shows a feature amount data set of the sample belonging to each class, which is a target (copending claim 1 lines 5-6). It is noted that the wherein clauses of claims 18-20 are nonfunctional descriptive material as described in the claim interpretation section above.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Claims 1 and 3-5 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of copending Application No. 18/183832 in view of Ji et al. (Electronics Letters, Vol. 36, No. 6, 2000).
Regarding instant claim 1, the copending application shows the steps of instant claim 1 as described above (copending claim 1).
The copending application does not show the limitations of claim 3 of a similarity evaluation step of evaluating similarity between the feature amounts and a priority setting step of setting a priority, the limitation of claim 4 of wherein the similarity is an overlap relationship and/or an inclusion relationship of the discrimination possibility for each pairwise coupling, and the limitation of claim 5 of wherein the similarity is a distance between discrimination possibility vectors for each pairwise coupling or a metric value in accordance with the distance.
Regarding instant claim 3, Ji et al. shows evaluating the similarity between feature amounts based on the discrimination possibility for each pairwise coupling of each feature amount as examining the discrimination possibilities for each pairwise comparison (Ji et al. page 1 right col.). Ji et al. shows setting a priority of the feature amount to be selected based on this evaluation of similarity: if at least one of the features with the largest value in the column has already been selected in order to cover the other pairs, nothing is done (i.e., the feature is not selected) and the algorithm returns to step 3; if only one feature has the largest value in the column, that feature is selected and marked as selected (Ji et al. page 1 right col.).
Regarding instant claim 4, Ji et al. shows that this similarity is an inclusion relationship of the discrimination possibility for each pairwise coupling as examining whether the features with the largest values have already been selected in order to cover the other pairs (Ji et al. page 1 right col.).
Regarding instant claim 5, Ji et al. shows examining the similarity of the discriminatory possibility of features utilizing a metric value Pij(k) which is equal to the absolute difference of the mean of Vi(k) and the mean of Vj(k) divided by the summation of the variance of Vi(k) and the variance of Vj(k) (Ji et al. page 1 left col.- right col.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have combined the method of feature amount selection of the copending application 18/183832 with the similarity evaluation and priority setting steps of Ji et al. because this allows for a process of analyzing class pairs and prioritizing the selection of features based on similarity of the discriminatory possibility of each feature in a manner that selects the best features while covering all class pairs (see Ji et al. page 1 right col.). One would have a reasonable expectation of success because copending application 18/183832 provides a pairwise coupling analysis utilizing discriminatory possibility while Ji et al. shows a selection process through examining similarities between quantified discriminatory possibilities and sets priorities based on the similarities of the quantified discriminatory possibilities.
This is a provisional nonstatutory double patenting rejection.
Claims 1 and 7 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of copending Application No. 18/183832 in view of Xu et al. (2012 15th International Conference on Information Fusion. IEEE, 2012).
Regarding instant claim 1, the copending application shows the steps of instant claim 1 as described above (copending claim 1).
The copending application does not show the limitations in claim 7 of an importance input step of inputting importance of the class or pairwise discrimination, and a weighting step of performing weighting based on the importance in a case of the totalization.
Regarding instant claim 7, Xu et al. shows ensuring that the distinguishing features for each of the class pairs are selected, unless the discriminating features are not available, by selecting remaining features that discriminate the maximum number of class pairs that no previously selected feature is able to discriminate (Xu et al. page 4 right col. and page 5 left col.). This shows that an importance of the pairwise discrimination is integrated into the selection process for a feature by weighting a feature to be selected in each iteration (after the first selection) so as to select a feature that is discriminatory over an important pairwise discrimination (a class pair which no previous feature was able to discriminate).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the feature selection process of copending application 18/183832 to use the importance criteria for selecting features of Xu et al. because this would allow for a method that selects features based on their ability to discriminate class pairs that no previously selected feature is able to discriminate, which increases the precision of minority class classification (Xu et al. page 1 right col. and page 4 right col. - page 5 left col.). One would have a reasonable expectation of success for this modification because copending application 18/183832 shows a feature selection process while Xu et al. shows an iterative feature selection process which weights features based on their ability to discriminate class pairs that no previously selected feature is able to discriminate.
Claims 1, 6, 8, 9, 11, and 14 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 4 of copending Application No. 18/183832 in view of Song et al. (Expert Systems with Applications 81 (2017): 22-27).
Regarding instant claim 1, the copending application shows the steps of instant claim 1 as described above (copending claim 1).
Regarding instant claim 8, the copending application shows the steps of instant claim 8 as described above (copending claim 4).
The copending application does not show the limitations of claim 6 of inputting a selected number of the feature amounts, wherein the optimization is maximizing a minimum value of a totalized value in all the pairwise couplings in accordance with the selected feature amounts, the limitation in claim 9 of using a statistically significant difference in the feature amounts, the limitation in claim 11 of performing multiple test correction on a statistical probability by the number of feature amounts, and the limitations in claim 14 of inputting a target threshold value of a totalized value indicating a result of the totalization and setting a minimum value of the totalized value in all the pairwise couplings by a selected feature amount to be equal to or more than the target threshold value.
Regarding instant claim 6, Song et al. shows a process of inputting a selected number of features 1-M to compute the feature ranking criterion of each feature (Song et al. page 24 right col. step 3 of the algorithm). Song et al. shows optimizing the minimum cutoff value of the totalized value in accordance with the selected feature amounts (Song et al. page 24 right col. step 4 - step 7 of the algorithm).
Regarding instant claim 9, Song et al. shows using a statistically significant difference in the feature amounts between pairwise-coupled classes as the FDAF-score, which finds the most discriminative feature amounts, which is interpreted as a statistically significant difference, in the learning data utilizing average between-class distance and within-class scatter matrix (Song et al. page 24 left col.).
Regarding instant claim 11, Song et al. shows the difference of means of a feature for two classes is multiplied by the summation of the number of samples in the two classes divided by total number of samples which is interpreted as a multiple test correction on a statistical probability value using the number of feature amounts because this is a correction factor of accounting for relative number of samples in each of the two-class comparison (Song et al. page 23 right col.).
Regarding instant claim 14, Song et al. shows inputting a target threshold value λ of a totalized value where the optimization process readjusts λ based on classification performance to find the minimum value of λ to make selections of optimal features using the inequality of J(xk) ≥ λ which illustrates that the feature xk is the most discriminative (Song et al. page 24 right col.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the selection process of features by quantifying discriminatory possibilities and selecting features based on the quantified discriminatory possibilities of copending application 18/183832 to use the feature ranking and selection process of Song et al. because this would allow for an iterative feature selection process which optimizes a minimum cutoff value of the totalized value for feature selection, uses statistical significance for feature selection, and uses multiple test correction for selecting an optimal set of features (Song et al. page 23 right col. and page 24 left col. – right col.). One would have a reasonable expectation of success for this modification because copending application 18/183832 shows utilizing discriminant analysis for selecting a feature subset for classification while Song et al. shows a process of performing an iterative feature selection process using statistical methods and optimized threshold cutoffs for adding features to an optimal subset which represent the most discriminatory features.
Claims 1, 8, and 12 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 4 of copending Application No. 18/183832 in view of Wan et al. (IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 2, pp. 409-422, 1 Feb. 2018).
Regarding instant claim 1, the copending application shows the steps of instant claim 1 as described above (copending claim 1).
Regarding instant claim 8, the copending application shows the steps of instant claim 8 as described above (copending claim 4).
The copending application does not show the limitations of claim 12 of a subclass setting step of clustering one or more samples belonging to the classes to form a subclass, a marking step of marking subclasses in each class that do not need to be discriminated, and an exclusion step of excluding the pairwise couplings that do not need to be discriminated.
Regarding instant claim 12, Wan et al. shows clustering samples in classes to generate subclasses of each class (Wan et al. page 412 right col. – page 413 left col.). Wan et al. shows performing discriminant analysis by measuring between-class scatter as the difference between subclass means and a single point of reference (i.e., the mean of all data instances), which shows that the pairwise couplings are combinations of each subclass and the reference, and that the differences between subclasses within the same class are excluded from the pairwise coupling (Wan et al. page 414).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the discriminant analysis of copending application 18/183832 to use the hierarchical clustering of Wan et al. to divide a class into subclasses before applying discriminant analysis with redefined scatter matrices, because this would allow for a method that selects a subset of features using subclass discrimination, achieving higher class separation and reducing overlap of subclasses in classification tasks (Wan et al. page 409 abstract). One would have a reasonable expectation of success for this modification because copending application 18/183832 shows utilizing discriminant analysis to select a feature subset for classification, while Wan et al. shows performing clustering before performing discriminant analysis.
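The subclass clustering and single-reference between-class scatter discussed above can be sketched roughly as follows. The mini k-means routine, the squared Euclidean distance, and the weighting of each subclass by its sample count are assumed details for the sketch, not Wan et al.'s actual formulation:

```python
# Rough illustrative sketch; the mini k-means, the squared Euclidean distance,
# and the subclass-size weighting are assumed details, not Wan et al.'s
# actual formulation.
import numpy as np

def mini_kmeans(X, k, iters=20, seed=0):
    """Split the samples of one class into k subclasses (plain k-means)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def between_class_scatter(class_arrays, k=2):
    """Sum over all subclasses of n_sub * ||subclass mean - overall mean||^2.

    Each subclass mean is compared only against a single reference point (the
    mean of all data instances), so pairings between subclasses of the same
    class never enter the sum.
    """
    mu = np.vstack(class_arrays).mean(axis=0)  # single point of reference
    total = 0.0
    for Xc in class_arrays:
        labels, centers = mini_kmeans(Xc, min(k, len(Xc)))
        for j in range(len(centers)):
            n_sub = int(np.sum(labels == j))
            if n_sub:
                total += n_sub * np.sum((centers[j] - mu) ** 2)
    return total
```

With two classes of four identical points each, every subclass mean coincides with its class mean, so the quantity reduces to the ordinary between-class scatter measured from the overall mean.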
Conclusion
No claims are allowed.
This Office action is a Non-Final action. A shortened statutory period for reply to this action is set to expire THREE MONTHS from the mailing date of this action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN EDWARD HAYES whose telephone number is (571)272-6165. The examiner can normally be reached M-F 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Olivia Wise, can be reached at 571-272-2249. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.E.H./Examiner, Art Unit 1685
/KAITLYN L MINCHELLA/Primary Examiner, Art Unit 1685