Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The Amendment filed on 11/07/2025 in Application No. 18/482,975 has been received and entered. Claims 5, 12, and 19 are canceled. Claims 1-4, 6-11, 13-18, and 20-25 are pending in the application and are ready for examination.
Continued Examination under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/07/2025 has been entered.
Response to Amendment
Applicant’s amendments and remarks necessitated the new grounds of rejection set forth below.
Applicant’s response, filed on 10/22/25, with respect to the 35 U.S.C. § 101 rejections of claims 1-4, 6-11, 13-18, and 20-25 has been fully considered but is not persuasive. The rejections are maintained.
Response to Arguments
Applicant's arguments with respect to the 35 U.S.C. § 101 rejections of claims 1-4, 6-11, 13-18, and 20-25 have been fully considered but are not persuasive. Applicant made the following arguments:
Regarding claims 1, 8, and 15, Applicant argues: “Claim 1 further indicates ‘wherein a weight is applied to each dimensional distance in which the weight is based on a cluster quality associated with each dimension, wherein the cluster quality associated with each dimension is determined by a feature analysis module that determines a score for each of the plurality of clustering models as a measure of how the first and second data sets fit the first and second clustering models.’ These claim limitations further limit the practical application of improving the accuracy of the ensemble distance by using the first clustering model and the second clustering model having the different types of the machine learning techniques. This allows the simulated data to be utilized for various useful purposes.”
Examiner respectfully disagrees. The cited limitations reciting “an accuracy of the ensemble distance is improved by using the first clustering model and the second clustering model having the different types of the machine learning techniques, wherein the ensemble distance between first data set and the second data set is calculated as a weighted average of the dimensional distance for each dimension, wherein a weight is applied to each dimensional distance in which the weight is based on a cluster quality associated with each dimension, wherein the cluster quality associated with each dimension is determined by a feature analysis module that determines a score for each of the plurality of clustering models as a measure of how the first and second data sets fit the first and second clustering models” amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and they are not sufficient to amount to significantly more than the judicial exception. Furthermore, Examiner notes that the asserted improvement cannot be part of the abstract idea itself. That is, other than reciting a “computer-implemented method,” the claims recite a mathematical concept or limitations that are based on or involve a mathematical concept. For example, but for the recitation of a “computer-implemented method,” the “calculating” steps recite mathematical relationships and formulas. If a claim limitation, under its broadest reasonable interpretation, covers performance of a mathematical concept but for the recitation of generic computer components, then it falls within the “Mathematical Concepts” grouping of abstract ideas.
Accordingly, the claims recite an abstract idea. Therefore, Applicant’s arguments are not persuasive. The claims are not patent eligible.
Applicant’s arguments with respect to the 35 U.S.C. § 103 rejections of claims 1-4, 6-11, 13-18, and 20-25 have been fully considered but are moot because the arguments do not apply to the combination of references being used in the current rejection.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-4, 6-11, 13-18, and 20-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1, 8, and 15 recite identifying a mapping between elements of the first plurality of cluster vectors and elements the second plurality of cluster vectors having a same dimension; calculating, for each dimension based at least in part on the mapping, a dimensional distance between the first data set and the second data set; and calculating, based at least in part on the dimensional distances, an ensemble distance between first data set and the second data set, wherein an accuracy of the ensemble distance is improved by using the first clustering model and the second clustering model having the different types of the machine learning techniques, wherein the ensemble distance between first data set and the second data set is calculated as a weighted average of the dimensional distance for each dimension, wherein a weight is applied to each dimensional distance in which the weight is based on a cluster quality associated with each dimension, wherein the cluster quality associated with each dimension is determined by a feature analysis module that determines a score for each of the plurality of clustering models as a measure of how the first and second data sets fit the first and second clustering models.
The limitations of identifying…, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting a “computer-implemented method,” nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the recitation of a “computer-implemented method,” the “identifying” step, in the context of these claims, encompasses a user manually identifying a mapping between elements. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
The limitations of calculating…, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of a mathematical calculation. That is, other than reciting a “computer-implemented method,” the claims recite a mathematical concept or limitations that are based on or involve a mathematical concept. For example, but for the recitation of a “computer-implemented method,” the “calculating” steps recite mathematical relationships and formulas. If a claim limitation, under its broadest reasonable interpretation, covers performance of a mathematical concept but for the recitation of generic computer components, then it falls within the “Mathematical Concepts” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claims only recite additional elements – obtaining…, inputting…, storing…. The “inputting” limitation amounts to mere instructions to apply an exception (see MPEP 2106.05(f)). The “obtaining” and “storing” limitations are insignificant extra-solution activity (mere data gathering; see MPEP 2106.05(g)). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. “Inputting” amounts to mere instructions to apply an exception (see MPEP 2106.05(f)). The additional elements of “obtaining” and “storing” are well-understood, routine, and conventional activities (mere data storage or gathering; see MPEP 2106.05(d)). The additional elements, individually and in combination, do not amount to significantly more than the abstract idea.
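For illustration of the mathematical character of the recited calculation, the claimed weighted-average ensemble distance can be expressed in a few lines of ordinary arithmetic. The following sketch is illustrative only; the function and variable names are hypothetical and are not drawn from Applicant's disclosure:

```python
# Minimal sketch of a weighted-average ensemble distance, where each
# per-dimension distance is weighted by a cluster-quality score.
# All names are hypothetical; this is illustration, not Applicant's method.

def ensemble_distance(dimensional_distances, quality_weights):
    """Weighted average of per-dimension distances, one weight per dimension."""
    if len(dimensional_distances) != len(quality_weights):
        raise ValueError("one weight per dimensional distance is required")
    total_weight = sum(quality_weights)
    return sum(d * w for d, w in zip(dimensional_distances, quality_weights)) / total_weight

# Three dimensions with distances 2.0, 4.0, 6.0 and quality-based
# weights 1, 1, 2 yield (2 + 4 + 12) / 4 = 4.5.
print(ensemble_distance([2.0, 4.0, 6.0], [1.0, 1.0, 2.0]))  # 4.5
```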
Claims 2, 9, 16, and 23 recite wherein each of the elements of the first plurality of cluster vectors and the elements of the second plurality of cluster vectors each include a data cluster and wherein the mapping is identified based on a centroid for each data cluster. These limitations are recited at a high level of generality (i.e., identified) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea. The additional elements, individually and in combination, also do not amount to significantly more than the abstract idea.
Claims 3, 10, and 17 recite wherein the dimensional distance between the first data set and the second data set for each dimension is calculated based on a size of the first data set, a size of the second data set, and a distance between the centroid of mapped elements of the first plurality of cluster vectors and elements of the second plurality of cluster vectors. These limitations are recited at a high level of generality (i.e., calculated) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea. The additional elements, individually and in combination, also do not amount to significantly more than the abstract idea.
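The recited dimensional distance is claimed functionally; the claim does not set out a specific formula. The following sketch shows one hypothetical way such a quantity could combine the two data-set sizes with a centroid-to-centroid distance; the formula and every name are assumptions for illustration only:

```python
import math

# Hypothetical illustration only: one plausible way a per-dimension distance
# could combine data-set sizes with a centroid-to-centroid distance.
# The actual formula is not recited in the claim.

def centroid_distance(c1, c2):
    """Euclidean distance between two centroids given as coordinate tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def dimensional_distance(size1, size2, c1, c2):
    """Scale the centroid distance by the relative sizes of the two data sets
    (a hypothetical weighting chosen for illustration)."""
    size_factor = min(size1, size2) / max(size1, size2)
    return size_factor * centroid_distance(c1, c2)

# Sizes 100 and 50 give a factor of 0.5; centroids (0,0) and (3,4) are 5 apart.
print(dimensional_distance(100, 50, (0.0, 0.0), (3.0, 4.0)))  # 2.5
```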
Claims 4, 11, and 18 recite wherein a first portion of the first plurality of cluster vectors is based on output from the first clustering model and another first portion of the first plurality of cluster vectors is based on output from the second clustering model; a second portion of the second plurality of cluster vectors is based on output from the first clustering model and another second portion of the second plurality of cluster vectors is based on output from the second clustering model; a first model distance is based on the first portion and the second portion, while a second model distance is based on the another first portion and the another second portion; and the ensemble distance between first data set and the second data set is calculated as an average of the first model distance and the second model distance. These limitations are recited at a high level of generality (i.e., calculated) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea. The additional elements, individually and in combination, also do not amount to significantly more than the abstract idea.
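The final step recited in these claims is a plain arithmetic mean of per-model distances, which can be sketched as follows (names are hypothetical; illustration only):

```python
# Sketch of the recited final step: the ensemble distance is the plain
# average of per-model distances (one distance per clustering model).

def ensemble_from_model_distances(model_distances):
    """Unweighted mean of the per-model distances."""
    return sum(model_distances) / len(model_distances)

# A first model distance of 3.0 and a second model distance of 5.0
# average to 4.0.
print(ensemble_from_model_distances([3.0, 5.0]))  # 4.0
```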
Claims 6, 13, and 20 recite removing cluster vectors from the first plurality of cluster vectors and the second plurality of cluster vectors having a cluster quality below a threshold value. These limitations are recited at a high level of generality (i.e., removing) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea. The additional elements, individually and in combination, also do not amount to significantly more than the abstract idea.
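The recited removal step is a simple threshold filter, sketched below for illustration; the pairing of vectors with quality scores and the threshold value are hypothetical assumptions:

```python
# Sketch of the recited filtering step: drop any cluster vector whose
# associated quality score falls below a threshold.

def remove_low_quality(cluster_vectors, quality_scores, threshold):
    """Keep only cluster vectors whose quality meets or exceeds the threshold."""
    return [v for v, q in zip(cluster_vectors, quality_scores) if q >= threshold]

vectors = [["c1", "c2"], ["c1", "c2", "c3"], ["c1"]]
scores = [0.9, 0.4, 0.7]
# Only the vectors with scores 0.9 and 0.7 survive a 0.5 threshold.
print(remove_low_quality(vectors, scores, 0.5))  # [['c1', 'c2'], ['c1']]
```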
Claims 7 and 14 recite wherein the plurality of clustering models are K-means clustering models. These limitations recite additional elements at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea. The additional elements, individually and in combination, also do not amount to significantly more than the abstract idea.
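For reference, K-means clustering, the specific model type recited in these claims, alternates between assigning points to the nearest centroid and recomputing each centroid as the mean of its assigned points (Lloyd's algorithm). A minimal one-dimensional sketch, with illustrative data and initial centroids:

```python
# Minimal Lloyd's-algorithm sketch of K-means on one-dimensional data.
# The data points and initial centroids are illustrative only.

def kmeans_1d(points, centroids, iterations=10):
    """Alternate assignment and centroid-update steps on 1-D points."""
    for _ in range(iterations):
        # Assignment step: each point goes to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: each centroid moves to the mean of its cluster
        # (an empty cluster keeps its previous centroid).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Two well-separated groups converge to centroids at 1.5 and 10.5.
print(kmeans_1d([1.0, 2.0, 10.0, 11.0], [0.0, 5.0]))  # [1.5, 10.5]
```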
Claim 21 recites identifying a mapping between elements of the first plurality of vectors and elements the second plurality of feature vectors created by a same model of the plurality of models; calculating, for each of the plurality of models based at least in part on the mapping, a model distance between the first data set and the second data set; and calculating, based at least in part on the model distances, an ensemble distance between first data set and the second data set, wherein an accuracy of the ensemble distance is improved by using the first clustering model and the second clustering model having the different types of the machine learning techniques.
The limitations of identifying…, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting a “computer-implemented method,” nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the recitation of a “computer-implemented method,” the “identifying” step, in the context of this claim, encompasses a user manually identifying a mapping between elements. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
The limitations of calculating…, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of a mathematical calculation. That is, other than reciting a “computer-implemented method,” the claim recites a mathematical concept or limitations that are based on or involve a mathematical concept. For example, but for the recitation of a “computer-implemented method,” the “calculating” steps recite mathematical relationships and formulas. If a claim limitation, under its broadest reasonable interpretation, covers performance of a mathematical concept but for the recitation of generic computer components, then it falls within the “Mathematical Concepts” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim only recites additional elements – obtaining…, creating…. The “obtaining” and “creating” limitations amount to mere instructions to apply an exception (see MPEP 2106.05(f)). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. “Obtaining” and “creating” amount to mere instructions to apply an exception (see MPEP 2106.05(f)). The additional elements, individually and in combination, do not amount to significantly more than the abstract idea.
Claim 22 recites identifying a first mapping between elements of the first cluster vector and elements the third cluster vector and a second mapping between elements of the second cluster vector and elements the fourth cluster vector; calculating a first dimensional distance between the first cluster vector and the third cluster vector based at least in part on the first mapping; calculating a second dimensional distance between the second cluster vector and the fourth cluster vector based at least in part on the second mapping; and calculating, based at least in part on the first dimensional distance and the second dimensional distance, an ensemble distance between first data set and the second data set, wherein an accuracy of the ensemble distance is improved by using the first clustering model and the second clustering model having the different types of the machine learning techniques.
The limitations of identifying…, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting a “computer-implemented method,” nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the recitation of a “computer-implemented method,” the “identifying” steps, in the context of this claim, encompass a user manually identifying a mapping between elements. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
The limitations of calculating…, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of a mathematical calculation. That is, other than reciting a “computer-implemented method,” the claim recites a mathematical concept or limitations that are based on or involve a mathematical concept. For example, but for the recitation of a “computer-implemented method,” the “calculating” steps recite mathematical relationships and formulas. If a claim limitation, under its broadest reasonable interpretation, covers performance of a mathematical concept but for the recitation of generic computer components, then it falls within the “Mathematical Concepts” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim only recites additional elements – obtaining…, creating…. The “obtaining” and “creating” limitations amount to mere instructions to apply an exception (see MPEP 2106.05(f)). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. “Obtaining” and “creating” amount to mere instructions to apply an exception (see MPEP 2106.05(f)). The additional elements, individually and in combination, do not amount to significantly more than the abstract idea.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6-11, 13-18, and 20-25 are rejected under 35 U.S.C. 103 as being unpatentable over Han et al. (U.S. Pub 2022/0309353; hereinafter “Han”) in view of Bradley et al. (U.S. Patent 6,449,612; hereinafter “Bradley”) and further in view of Shan et al. (U.S. Patent 10,242,019; hereinafter “Shan”) and further in view of Williams et al. (U.S. PGPub 2016/0110442; hereinafter “Williams”).
As per claims 1, 8, and 15, Han discloses a computer-implemented method for data difference evaluation, the computer-implemented method comprising:
obtaining a first data set and a second data set; (See Fig. 6, paras. 7, 67, wherein plurality of training data sets, receiving data set are disclosed; as taught by Han.)
inputting the first data set into each of a plurality of clustering models, wherein each of the plurality of clustering models separates the first data set into a different number of clusters; (See Fig. 2, paras. 7, 69, wherein clustering process for plurality of clusters, assigning cluster ID codes to clusters are disclosed; as taught by Han.)
inputting the second data set into each of the plurality of clustering models, wherein each of the plurality of clustering models separates the second data set into a different number of clusters; (See Fig. 2, paras. 7, 69, wherein clustering process for plurality of clusters, assigning cluster ID codes to clusters are disclosed; as taught by Han.)
identifying a mapping between elements of the first plurality of cluster vectors and elements the second plurality of cluster vectors having a same dimension; (See Fig. 4C, paras. 2, 46-49, 56, wherein combination of dimensions and dimension values are disclosed; as taught by Han.)
calculating, for each dimension based at least in part on the mapping, a dimensional distance between the first data set and the second data set; (See paras. 6, 22, 44, 77, wherein clustering algorithm, dimension settings (i.e. distance function, density threshold, number of clusters) are disclosed; as taught by Han.)
and calculating, based at least in part on the dimensional distances, an ensemble distance between first data set and the second data set. (See paras. 6, 22, 44, 77, wherein clustering algorithm, dimension settings (i.e. distance function, density threshold, number of clusters) are disclosed; as taught by Han.)
However, Han fails to disclose storing an output of each of the plurality of clustering models corresponding to the first data set into a first plurality of cluster vectors, where each of the first plurality of cluster vectors has a dimension that corresponds to the number of clusters; storing the output of each of the plurality of clustering models corresponding to the second data set into a second plurality of cluster vectors, where each of the second plurality of cluster vectors has a dimension that corresponds to the number of clusters.
On the other hand, Bradley teaches storing an output of each of the plurality of clustering models corresponding to the first data set into a first plurality of cluster vectors, where each of the first plurality of cluster vectors has a dimension that corresponds to the number of clusters; (See Fig. 6C, Table 1, col. 5, ll 35-67, col. 6, ll 10-65, wherein vectors are disclosed, also See col. 8, ll 1-23, wherein dimensions correspond to clusters are disclosed; as taught by Bradley.)
storing the output of each of the plurality of clustering models corresponding to the second data set into a second plurality of cluster vectors, where each of the second plurality of cluster vectors has a dimension that corresponds to the number of clusters; (See Fig. 6C, Table 1, col. 5, ll 35-67, col. 6, ll 10-65, wherein vectors are disclosed, also See col. 8, ll 1-23, wherein dimensions correspond to clusters are disclosed; as taught by Bradley.)
Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Bradley teachings into the Han system. A skilled artisan would have been motivated to incorporate the system for varying cluster number in a scalable clustering system for use with large databases, as taught by Bradley, into the Han system for effective dimension reduction in the context of unsupervised learning. In addition, both references (Han and Bradley) are directed to analogous art and to the same field of endeavor, such as vector quantization. This close relation between the references suggests a reasonable expectation of success.
However, the combination of Han and Bradley fails to disclose wherein the first data set is input into at least a first clustering model and a second clustering model of the plurality of clustering models, wherein the first and second clustering models are different types of machine learning techniques; wherein the second data set is input into at least the first clustering model and the second clustering model with the different types of the machine learning techniques; wherein an accuracy of the ensemble distance is improved by using the first clustering model and the second clustering model having the different types of the machine learning techniques.
On the other hand, Shan teaches wherein the first data set is input into at least a first clustering model and a second clustering model of the plurality of clustering models, wherein the first and second clustering models are different types of machine learning techniques; (See col. 3, ll 1-32, 45-53, wherein clustering model and data clusters are disclosed, also See col. 13, ll 43-49, col. 14, ll 13-39, wherein applying multiple techniques to identify clusters in which “clustering compression model generator 214 may automatically group "like" users to indicate affinity groups. The grouping may be based on transaction data and/or topically compressed transaction data for the users” are disclosed, also See col. 15, ll 10-25, wherein utilizing multiple techniques to develop optimized data clusters in which “multiple techniques may be applied to develop more optimized data clusters, such as combining clustering algorithms with machine-learning based techniques, such as topic modeling…clustering output maps distance of users to the developed segment centers. Thus, a user may be assigned to a segment they are closest to along with distance measurements that show the user's proximity to other (possibly all) segments. This creates opportunities to consider multiple types of transaction behavior of the user in assessing how their behavior (such as spending patterns) is unique from other users in the population and target content accordingly” are disclosed; as taught by Shan.)
wherein the second data set is input into at least the first clustering model and the second clustering model with the different types of the machine learning techniques; (See col. 4, ll 10-20, wherein generating data records are disclosed, also See Fig. 2, col. 12, ll 27-43, wherein creating data in which “"Vector A" may then be used as an input to a clustering algorithm, such as k-means clustering in order to produce clustering results, which will be referred to as "Model B”… the clustering algorithm returns the location of the center of a preset number of clusters in the same space as "Vector A". A segment can then be assigned to a user by measuring the distance from "Vector A" to each of the points described in "Model B." This system may then, optionally, generate a second vector, "Vector B" which measures the distances of the given data point (user)” are disclosed; as taught by Shan.)
wherein an accuracy of the ensemble distance is improved by using the first clustering model and the second clustering model having the different types of the machine learning techniques. (See col. 3, ll 1-32, 45-53, wherein clustering model and data clusters are disclosed, also See col. 13, ll 43-49, col. 14, ll 13-39, wherein applying multiple techniques to identify clusters in which “clustering compression model generator 214 may automatically group "like" users to indicate affinity groups. The grouping may be based on transaction data and/or topically compressed transaction data for the users” are disclosed, also See col. 15, ll 10-25, wherein utilizing multiple techniques to develop optimized data clusters in which “multiple techniques may be applied to develop more optimized data clusters, such as combining clustering algorithms with machine-learning based techniques, such as topic modeling…clustering output maps distance of users to the developed segment centers. Thus, a user may be assigned to a segment they are closest to along with distance measurements that show the user's proximity to other (possibly all) segments. This creates opportunities to consider multiple types of transaction behavior of the user in assessing how their behavior (such as spending patterns) is unique from other users in the population and target content accordingly” are disclosed; as taught by Shan.)
Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Shan teachings into the combination of Han and Bradley. A skilled artisan would have been motivated to incorporate the system for user behavior segmentation using latent topic detection, as taught by Shan, into the combination of Han and Bradley for effective dimension reduction in the context of unsupervised learning. In addition, all of the references (Han, Bradley, and Shan) are directed to analogous art and to the same field of endeavor, such as vector quantization. This close relation among the references suggests a reasonable expectation of success.
However, the combination of Han, Bradley, and Shan fails to disclose wherein the ensemble distance between first data set and the second data set is calculated as a weighted average of the dimensional distance for each dimension, wherein a weight is applied to each dimensional distance in which the weight is based on a cluster quality associated with each dimension, wherein the cluster quality associated with each dimension is determined by a feature analysis module that determines a score for each of the plurality of clustering models as a measure of how the first and second data sets fit the first and second clustering models.
On the other hand, Williams teaches wherein the ensemble distance between first data set and the second data set is calculated as a weighted average of the dimensional distance for each dimension, wherein a weight is applied to each dimensional distance in which the weight is based on a cluster quality associated with each dimension, wherein the cluster quality associated with each dimension is determined by a feature analysis module that determines a score for each of the plurality of clustering models as a measure of how the first and second data sets fit the first and second clustering models. (See Figs. 7-14, paras. 36-37, 96-97, 99-101, wherein grouping component (average centroid distance), calculated score for each cluster that includes a weighted average of different components are disclosed, also See paras. 51, 163, wherein evaluating quality of cluster solutions in which “computation management module 214 may evaluate the quality of the cluster solutions using a weighed scoring system. The computation management module 214 may use the scores in several ways, such as: (a) assisting in the search algorithm, if one is utilized, and/or (b) allowing the user to scrutinize only those solutions with the highest overall scores. 
In some implementations, the quality score is based on a weighted combination of factors that are evaluated on the variables that have been separated by the data management module 208, for example: how well the solutions cover the range of values from the Target Drivers; how tightly grouped the clusters are across the chosen Cluster Candidates; and the overall diversity or heterogeneity of the clusters across both the Cluster Candidates and the Profile Variables” (analogous to cluster quality associated with each dimension is determined by a feature analysis module that determines a score for each of the plurality of clustering models as a measure of how the first and second data sets fit the first and second clustering models) [0051] are disclosed; as taught by Williams.)
Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Williams teachings into the combined Han, Bradley, and Shan system. A skilled artisan would have been motivated to incorporate the system for clustering and evaluation of data taught by Williams into the combined Han, Bradley, and Shan system to achieve effective dimension reduction in the context of unsupervised learning. In addition, all of the references (Han, Bradley, Shan, and Williams) teach features directed to analogous art and to the same field of endeavor, such as vector quantization. This close relation between the references strongly suggests a reasonable expectation of success.
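For illustration only, and not as a characterization of the claimed invention or of any cited reference, the weighted-average ensemble distance recited in the limitation above can be sketched as follows; the function name, the inputs, and the use of cluster-quality scores as weights are hypothetical:

```python
def ensemble_distance(dimensional_distances, cluster_qualities):
    """Weighted average of per-dimension distances, where each weight is
    the (hypothetical) cluster-quality score for that dimension."""
    total_weight = sum(cluster_qualities)
    weighted = sum(d * q for d, q in zip(dimensional_distances, cluster_qualities))
    return weighted / total_weight

# Three dimensions; the third dimension's higher quality score pulls the
# result toward that dimension's distance.
print(ensemble_distance([2.0, 4.0, 6.0], [1.0, 1.0, 2.0]))  # 4.5
```

Under this sketch, a higher quality score for a dimension gives the distance measured in that dimension proportionally more influence on the ensemble distance.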
As per claims 2, 9, 16 and 23, the combination of Han, Bradley, Shan, and Williams discloses wherein each of the elements of the first plurality of cluster vectors and the elements of the second plurality of cluster vectors each include a data cluster and wherein the mapping is identified based on a centroid for each data cluster. (See para. 6, wherein centroid models are disclosed, also See Fig. 4C, paras. 2, 46-49, 56, wherein combination of dimensions and dimension values are disclosed; as taught by Han.)
As per claims 3, 10, 17 and 24, the combination of Han, Bradley, Shan, and Williams discloses wherein the dimensional distance between the first data set and the second data set for each dimension is calculated based on a size of the first data set, a size of the second data set, and a distance between the centroid of mapped elements of the first plurality of cluster vectors and elements of the second plurality of cluster vectors. (See Fig. 4C, paras. 2, 46-49, 56, wherein combination of dimensions and dimension values are disclosed, also See paras. 6, 22, 44, 77, wherein centroid models, clustering algorithm, dimension settings (i.e. distance function, density threshold, number of clusters) are disclosed; as taught by Han.)
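As a purely illustrative sketch (the actual formula is defined by the claims and specification; the size-based scaling shown here is an assumption made for the example), a dimensional distance that depends on the sizes of the two data sets and on centroid-to-centroid distances of mapped clusters might look like:

```python
import math

def centroid_distance(c1, c2):
    """Euclidean distance between two cluster centroids."""
    return math.dist(c1, c2)

def dimensional_distance(size_a, size_b, mapped_centroids):
    """Hypothetical dimensional distance: the average centroid distance over
    mapped cluster pairs, scaled by the relative sizes of the two data sets."""
    scale = 2 * min(size_a, size_b) / (size_a + size_b)
    avg = sum(centroid_distance(a, b) for a, b in mapped_centroids) / len(mapped_centroids)
    return scale * avg

# Two mapped cluster pairs; equal data-set sizes leave the scale at 1.0.
pairs = [((0.0, 0.0), (3.0, 4.0)), ((1.0, 1.0), (1.0, 1.0))]
print(dimensional_distance(100, 100, pairs))  # 2.5
```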
As per claims 4, 11 and 18, the combination of Han, Bradley, and Williams discloses wherein the ensemble distance between first data set and the second data set is calculated as an average of the first model distance and the second model distance. (See paras. 6, 22, 44, 77, wherein clustering algorithm, dimension settings (i.e. distance function, density threshold, number of clusters) are disclosed; as taught by Han.)
However, the combination of Han and Bradley fails to disclose a first portion of the first plurality of cluster vectors is based on output from the first clustering model and another first portion of the first plurality of cluster vectors is based on output from the second clustering model; a second portion of the second plurality of cluster vectors is based on output from the first clustering model and another second portion of the second plurality of cluster vectors is based on output from the second clustering model; a first model distance is based on the first portion and the second portion, while a second model distance is based on the another first portion and the another second portion.
On the other hand, Shan teaches a first portion of the first plurality of cluster vectors is based on output from the first clustering model and another first portion of the first plurality of cluster vectors is based on output from the second clustering model; (See Fig. 8, col. 21, ll 43-56, wherein generating data and providing portions of data are disclosed, also See Fig. 2, col. 12, ll 27-58, col. 13, ll 60-65, wherein outputting transaction data and generation of clustering compression model are disclosed; as taught by Shan.)
a second portion of the second plurality of cluster vectors is based on output from the first clustering model and another second portion of the second plurality of cluster vectors is based on output from the second clustering model; (See Fig. 8, col. 21, ll 43-56, wherein generating data and providing portions of data are disclosed, also See Fig. 2, col. 12, ll 27-58, col. 13, ll 60-65, wherein outputting transaction data and generation of clustering compression model are disclosed; as taught by Shan.)
a first model distance is based on the first portion and the second portion, while a second model distance is based on the another first portion and the another second portion. (See Fig. 8, col. 21, ll 43-56, wherein generating data and providing portions of data are disclosed, also See Fig. 2, col. 12, ll 27-58, col. 13, ll 60-65, wherein outputting transaction data and generation of clustering compression model are disclosed, also See col. 15, ll 10-25, wherein utilizing multiple techniques to develop optimized data clusters in which “multiple techniques may be applied to develop more optimized data clusters, such as combining clustering algorithms with machine-learning based techniques, such as topic modeling…clustering output maps distance of users to the developed segment centers. Thus, a user may be assigned to a segment they are closest to along with distance measurements that show the user's proximity to other (possibly all) segments. This creates opportunities to consider multiple types of transaction behavior of the user in assessing how their behavior (such as spending patterns) is unique from other users in the population and target content accordingly” are disclosed; as taught by Shan.)
See the motivation set forth above with respect to claims 1, 8 and 15.
As per claims 6, 13 and 20, the combination of Han, Bradley, Shan, and Williams discloses removing cluster vectors from the first plurality of cluster vectors and the second plurality of cluster vectors having a cluster quality below a threshold value. (See paras. 7, 21-22, 77, wherein reduced training data, removing dimension values process are disclosed, also See paras. 6, 44, wherein clustering algorithm, dimension settings (i.e. distance function, density threshold, number of clusters) are disclosed; as taught by Han.)
As per claims 7 and 14, the combination of Han, Bradley, Shan, and Williams discloses wherein the plurality of clustering models are K-means clustering models. (See Fig. 2, paras. 22, 65, 69, wherein K data blocks, clustering techniques (i.e. K-means) are disclosed; as taught by Han.)
As per claim 21, Han discloses a computer-implemented method for data difference evaluation, the computer-implemented method comprising:
obtaining a first data set and a second data set; (See Fig. 6, paras. 7, 67, wherein plurality of training data sets, receiving data set are disclosed; as taught by Han.)
identifying a mapping between elements of the first plurality of feature vectors and elements of the second plurality of feature vectors created by a same model of the plurality of models; (See Fig. 4C, paras. 2, 46-49, 56, wherein combination of dimensions and dimension values are disclosed; as taught by Han.)
calculating, for each of the plurality of models based at least in part on the mapping, a model distance between the first data set and the second data set; (See paras. 6, 22, 44, 77, wherein clustering algorithm, dimension settings (i.e. distance function, density threshold, number of clusters) are disclosed; as taught by Han.)
and calculating, based at least in part on the model distances, an ensemble distance between the first data set and the second data set. (See paras. 6, 22, 44, 77, wherein clustering algorithm, dimension settings (i.e. distance function, density threshold, number of clusters) are disclosed; as taught by Han.)
However, Han fails to disclose creating a first plurality of feature vectors by inputting the first data set into each of a plurality of models, wherein each of the plurality of models calculates a first number of features that each correspond to an element of one of the first plurality of vectors; creating a second plurality of feature vectors by inputting the second data set into each of the plurality of models, wherein each of the plurality of models calculates a second number of features that each correspond to an element of one of the second plurality of feature vectors.
On the other hand, Bradley teaches creating a first plurality of feature vectors by inputting the first data set into each of a plurality of models, wherein each of the plurality of models calculates a first number of features that each correspond to an element of one of the first plurality of vectors; (See Fig. 6C, Table 1, col. 5, ll 35-67, col. 6, ll 10-65, wherein vectors are disclosed, also See Fig. 4, 7, col. 8, ll 1-23, col. 11, ll 28-43, col. 12, ll 1-31, wherein evaluating, scoring process of cluster models, dimensions correspond to clusters are disclosed; as taught by Bradley.)
creating a second plurality of feature vectors by inputting the second data set into each of the plurality of models, wherein each of the plurality of models calculates a second number of features that each correspond to an element of one of the second plurality of feature vectors. (See Fig. 6C, Table 1, col. 5, ll 35-67, col. 6, ll 10-65, wherein vectors are disclosed, also See Fig. 4, 7, col. 8, ll 1-23, col. 11, ll 28-43, col. 12, ll 1-31, wherein evaluating, scoring process of cluster models, dimensions correspond to clusters are disclosed; as taught by Bradley.)
Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Bradley teachings into the Han system. A skilled artisan would have been motivated to incorporate the system for varying cluster number in a scalable clustering system for use with large databases taught by Bradley into the Han system to achieve effective dimension reduction in the context of unsupervised learning. In addition, both references (Han and Bradley) teach features directed to analogous art and to the same field of endeavor, such as vector quantization. This close relation between the references strongly suggests a reasonable expectation of success.
However, the combination of Han and Bradley fails to disclose wherein the first data set is input into at least a first clustering model and a second clustering model of the plurality of clustering models, wherein the first and second clustering models are different types of machine learning techniques; wherein the second data set is input into at least the first clustering model and the second clustering model with the different types of the machine learning techniques; wherein an accuracy of the ensemble distance is improved by using the first clustering model and the second clustering model having the different types of the machine learning techniques.
On the other hand, Shan teaches wherein the first data set is input into at least a first clustering model and a second clustering model of the plurality of clustering models, wherein the first and second clustering models are different types of machine learning techniques; (See col. 3, ll 1-32, 45-53, wherein clustering model and data clusters are disclosed, also See col. 13, ll 43-49, col. 14, ll 13-39, wherein applying multiple techniques to identify clusters in which “clustering compression model generator 214 may automatically group "like" users to indicate affinity groups. The grouping may be based on transaction data and/or topically compressed transaction data for the users” are disclosed, also See col. 15, ll 10-25, wherein utilizing multiple techniques to develop optimized data clusters in which “multiple techniques may be applied to develop more optimized data clusters, such as combining clustering algorithms with machine-learning based techniques, such as topic modeling…clustering output maps distance of users to the developed segment centers. Thus, a user may be assigned to a segment they are closest to along with distance measurements that show the user's proximity to other (possibly all) segments. This creates opportunities to consider multiple types of transaction behavior of the user in assessing how their behavior (such as spending patterns) is unique from other users in the population and target content accordingly” are disclosed; as taught by Shan.)
wherein the second data set is input into at least the first clustering model and the second clustering model with the different types of the machine learning techniques; (See col. 4, ll 10-20, wherein generating data records are disclosed, also See Fig. 2, col. 12, ll 27-43, wherein creating data in which “"Vector A" may then be used as an input to a clustering algorithm, such as k-means clustering in order to produce clustering results, which will be referred to as "Model B”… the clustering algorithm returns the location of the center of a preset number of clusters in the same space as "Vector A". A segment can then be assigned to a user by measuring the distance from "Vector A" to each of the points described in "Model B." This system may then, optionally, generate a second vector, "Vector B" which measures the distances of the given data point (user)” are disclosed; as taught by Shan.)
wherein an accuracy of the ensemble distance is improved by using the first clustering model and the second clustering model having the different types of the machine learning techniques. (See col. 3, ll 1-32, 45-53, wherein clustering model and data clusters are disclosed, also See col. 13, ll 43-49, col. 14, ll 13-39, wherein applying multiple techniques to identify clusters in which “clustering compression model generator 214 may automatically group "like" users to indicate affinity groups. The grouping may be based on transaction data and/or topically compressed transaction data for the users” are disclosed, also See col. 15, ll 10-25, wherein utilizing multiple techniques to develop optimized data clusters in which “multiple techniques may be applied to develop more optimized data clusters, such as combining clustering algorithms with machine-learning based techniques, such as topic modeling…clustering output maps distance of users to the developed segment centers. Thus, a user may be assigned to a segment they are closest to along with distance measurements that show the user's proximity to other (possibly all) segments. This creates opportunities to consider multiple types of transaction behavior of the user in assessing how their behavior (such as spending patterns) is unique from other users in the population and target content accordingly” are disclosed; as taught by Shan.)
Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Shan teachings into the combined Han and Bradley system. A skilled artisan would have been motivated to incorporate the system for user behavior segmentation using latent topic detection taught by Shan into the combined Han and Bradley system to achieve effective dimension reduction in the context of unsupervised learning. In addition, all of the references (Han, Bradley, and Shan) teach features directed to analogous art and to the same field of endeavor, such as vector quantization. This close relation between the references strongly suggests a reasonable expectation of success.
However, the combination of Han, Bradley, and Shan fails to disclose wherein the ensemble distance between the first data set and the second data set is calculated as a weighted average of the dimensional distance for each dimension, wherein a weight is applied to each dimensional distance in which the weight is based on a cluster quality associated with each dimension, wherein the cluster quality associated with each dimension is determined by a feature analysis module that determines a score for each of the plurality of clustering models as a measure of how the first and second data sets fit the first and second clustering models.
On the other hand, Williams teaches wherein the ensemble distance between the first data set and the second data set is calculated as a weighted average of the dimensional distance for each dimension, wherein a weight is applied to each dimensional distance in which the weight is based on a cluster quality associated with each dimension, wherein the cluster quality associated with each dimension is determined by a feature analysis module that determines a score for each of the plurality of clustering models as a measure of how the first and second data sets fit the first and second clustering models. (See Figs. 7-14, paras. 36-37, 96-97, 99-101, wherein grouping component (average centroid distance), calculated score for each cluster that includes a weighted average of different components are disclosed, also See paras. 51, 163, wherein evaluating quality of cluster solutions in which “computation management module 214 may evaluate the quality of the cluster solutions using a weighed scoring system. The computation management module 214 may use the scores in several ways, such as: (a) assisting in the search algorithm, if one is utilized, and/or (b) allowing the user to scrutinize only those solutions with the highest overall scores.
In some implementations, the quality score is based on a weighted combination of factors that are evaluated on the variables that have been separated by the data management module 208, for example: how well the solutions cover the range of values from the Target Drivers; how tightly grouped the clusters are across the chosen Cluster Candidates; and the overall diversity or heterogeneity of the clusters across both the Cluster Candidates and the Profile Variables” (analogous to cluster quality associated with each dimension is determined by a feature analysis module that determines a score for each of the plurality of clustering models as a measure of how the first and second data sets fit the first and second clustering models) [0051] are disclosed; as taught by Williams.)
Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Williams teachings into the combined Han, Bradley, and Shan system. A skilled artisan would have been motivated to incorporate the system for clustering and evaluation of data taught by Williams into the combined Han, Bradley, and Shan system to achieve effective dimension reduction in the context of unsupervised learning. In addition, all of the references (Han, Bradley, Shan, and Williams) teach features directed to analogous art and to the same field of endeavor, such as vector quantization. This close relation between the references strongly suggests a reasonable expectation of success.
As per claim 22, Han discloses a computer-implemented method for data difference evaluation, the computer-implemented method comprising:
obtaining a first data set and a second data set; (See Fig. 6, paras. 7, 67, wherein plurality of training data sets, receiving data set are disclosed; as taught by Han.)
identifying a first mapping between elements of the first cluster vector and elements of the third cluster vector and a second mapping between elements of the second cluster vector and elements of the fourth cluster vector; (See Fig. 4C, paras. 2, 46-49, 56, wherein combination of dimensions and dimension values are disclosed; as taught by Han.)
calculating a first dimensional distance between the first cluster vector and the third cluster vector based at least in part on the first mapping; (See paras. 6, 22, 44, 77, wherein clustering algorithm, dimension settings (i.e. distance function, density threshold, number of clusters) are disclosed; as taught by Han.)
calculating a second dimensional distance between the second cluster vector and the fourth cluster vector based at least in part on the second mapping; (See paras. 6, 22, 44, 77, wherein clustering algorithm, dimension settings (i.e. distance function, density threshold, number of clusters) are disclosed; as taught by Han.)
calculating, based at least in part on the first dimensional distance and the second dimensional distance, an ensemble distance between the first data set and the second data set. (See paras. 6, 22, 44, 77, wherein clustering algorithm, dimension settings (i.e. distance function, density threshold, number of clusters) are disclosed; as taught by Han.)
However, Han fails to disclose creating a first cluster vector, having a first dimension, for the first data set by inputting the first data set into a first clustering model, wherein the first clustering model separates the first data set into a first number of clusters that each correspond to an element of the first cluster vector; creating a second cluster vector, having a second dimension, for the first data set by inputting the first data set into a second clustering model, wherein the second clustering model separates the first data set into a second number of clusters that each correspond to an element of the second cluster vector; creating a third cluster vector, having the first dimension, for the second data set by inputting the second data set into the first clustering model, wherein the first clustering model separates the second data set into the first number of clusters that each correspond to an element of the third cluster vector; creating a fourth cluster vector, having the second dimension, for the second data set by inputting the second data set into the second clustering model, wherein the second clustering model separates the second data set into the second number of clusters that each correspond to an element of the fourth cluster vector.
On the other hand, Bradley teaches creating a first cluster vector, having a first dimension, for the first data set by inputting the first data set into a first clustering model, wherein the first clustering model separates the first data set into a first number of clusters that each correspond to an element of the first cluster vector; (See Fig. 6C, Table 1, col. 5, ll 35-67, col. 6, ll 10-65, wherein vectors are disclosed, also See Fig. 4, 7, col. 8, ll 1-23, col. 11, ll 28-43, col. 12, ll 1-31, wherein evaluating, scoring process of cluster models, dimensions correspond to clusters are disclosed; as taught by Bradley.)
creating a second cluster vector, having a second dimension, for the first data set by inputting the first data set into a second clustering model, wherein the second clustering model separates the first data set into a second number of clusters that each correspond to an element of the second cluster vector; (See Fig. 6C, Table 1, col. 5, ll 35-67, col. 6, ll 10-65, wherein vectors are disclosed, also See Fig. 4, 7, col. 8, ll 1-23, col. 11, ll 28-43, col. 12, ll 1-31, wherein evaluating, scoring process of cluster models, dimensions correspond to clusters are disclosed; as taught by Bradley.)
creating a third cluster vector, having the first dimension, for the second data set by inputting the second data set into the first clustering model, wherein the first clustering model separates the second data set into the first number of clusters that each correspond to an element of the third cluster vector; (See Fig. 6C, Table 1, col. 5, ll 35-67, col. 6, ll 10-65, wherein vectors are disclosed, also See Fig. 4, 7, col. 8, ll 1-23, col. 11, ll 28-43, col. 12, ll 1-31, wherein evaluating, scoring process of cluster models, dimensions correspond to clusters are disclosed; as taught by Bradley.)
creating a fourth cluster vector, having the second dimension, for the second data set by inputting the second data set into the second clustering model, wherein the second clustering model separates the second data set into the second number of clusters that each correspond to an element of the fourth cluster vector. (See Fig. 6C, Table 1, col. 5, ll 35-67, col. 6, ll 10-65, wherein vectors are disclosed, also See Fig. 4, 7, col. 8, ll 1-23, col. 11, ll 28-43, col. 12, ll 1-31, wherein evaluating, scoring process of cluster models, dimensions correspond to clusters are disclosed; as taught by Bradley.)
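Purely for illustration (the two techniques below are hypothetical stand-ins; the claim requires only that the first and second clustering models be different types of machine learning techniques), the four cluster vectors recited in claim 22 could be produced along these lines:

```python
def kmeans_style_clusters(points, seeds):
    """Technique 1: assign each point to its nearest seed (one k-means-style step)."""
    clusters = [[] for _ in seeds]
    for p in points:
        i = min(range(len(seeds)),
                key=lambda j: (p[0] - seeds[j][0]) ** 2 + (p[1] - seeds[j][1]) ** 2)
        clusters[i].append(p)
    return clusters

def rule_based_clusters(points, threshold):
    """Technique 2, a different type of technique: split on the x coordinate."""
    return [[p for p in points if p[0] < threshold],
            [p for p in points if p[0] >= threshold]]

def cluster_vector(clusters):
    """Cluster vector whose elements are the cluster centroids."""
    return [tuple(sum(c) / len(cluster) for c in zip(*cluster))
            for cluster in clusters if cluster]

set_a = [(0, 0), (0, 1), (5, 5), (5, 6)]   # first data set
set_b = [(1, 0), (1, 1), (6, 5), (6, 6)]   # second data set
seeds = [(0, 0), (5, 5)]

first_vec  = cluster_vector(kmeans_style_clusters(set_a, seeds))  # first data set, model 1
second_vec = cluster_vector(rule_based_clusters(set_a, 3))        # first data set, model 2
third_vec  = cluster_vector(kmeans_style_clusters(set_b, seeds))  # second data set, model 1
fourth_vec = cluster_vector(rule_based_clusters(set_b, 3))        # second data set, model 2
print(first_vec, third_vec)  # [(0.0, 0.5), (5.0, 5.5)] [(1.0, 0.5), (6.0, 5.5)]
```

In this sketch the first and third cluster vectors share the first dimension, and the second and fourth share the second dimension, so each pair could then be compared under its respective mapping to yield a dimensional distance.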
Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Bradley teachings into the Han system. A skilled artisan would have been motivated to incorporate the system for varying cluster number in a scalable clustering system for use with large databases taught by Bradley into the Han system to achieve effective dimension reduction in the context of unsupervised learning. In addition, both references (Han and Bradley) teach features directed to analogous art and to the same field of endeavor, such as vector quantization. This close relation between the references strongly suggests a reasonable expectation of success.
However, the combination of Han and Bradley fails to disclose wherein the first and second clustering models are different types of machine learning techniques; wherein an accuracy of the ensemble distance is improved by using the first clustering model and the second clustering model having the different types of the machine learning techniques.
On the other hand, Shan teaches wherein the first and second clustering models are different types of machine learning techniques; (See col. 3, ll 1-32, 45-53, wherein clustering model and data clusters are disclosed, also See col. 13, ll 43-49, col. 14, ll 13-39, wherein applying multiple techniques to identify clusters in which “clustering compression model generator 214 may automatically group "like" users to indicate affinity groups. The grouping may be based on transaction data and/or topically compressed transaction data for the users” are disclosed, also See col. 15, ll 10-25, wherein utilizing multiple techniques to develop optimized data clusters in which “multiple techniques may be applied to develop more optimized data clusters, such as combining clustering algorithms with machine-learning based techniques, such as topic modeling…clustering output maps distance of users to the developed segment centers. Thus, a user may be assigned to a segment they are closest to along with distance measurements that show the user's proximity to other (possibly all) segments. This creates opportunities to consider multiple types of transaction behavior of the user in assessing how their behavior (such as spending patterns) is unique from other users in the population and target content accordingly” are disclosed; as taught by Shan.)
wherein an accuracy of the ensemble distance is improved by using the first clustering model and the second clustering model having the different types of the machine learning techniques. (See col. 3, ll 1-32, 45-53, wherein clustering model and data clusters are disclosed, also See col. 13, ll 43-49, col. 14, ll 13-39, wherein applying multiple techniques to identify clusters in which “clustering compression model generator 214 may automatically group "like" users to indicate affinity groups. The grouping may be based on transaction data and/or topically compressed transaction data for the users” are disclosed, also See col. 15, ll 10-25, wherein utilizing multiple techniques to develop optimized data clusters in which “multiple techniques may be applied to develop more optimized data clusters, such as combining clustering algorithms with machine-learning based techniques, such as topic modeling…clustering output maps distance of users to the developed segment centers. Thus, a user may be assigned to a segment they are closest to along with distance measurements that show the user's proximity to other (possibly all) segments. This creates opportunities to consider multiple types of transaction behavior of the user in assessing how their behavior (such as spending patterns) is unique from other users in the population and target content accordingly” are disclosed; as taught by Shan.)
Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Shan teachings into the combined Han and Bradley system. A skilled artisan would have been motivated to incorporate the system for user behavior segmentation using latent topic detection taught by Shan into the combined Han and Bradley system to achieve effective dimension reduction in the context of unsupervised learning. In addition, all of the references (Han, Bradley, and Shan) teach features directed to analogous art and to the same field of endeavor, such as vector quantization. This close relation between the references strongly suggests a reasonable expectation of success.
However, the combination of Han, Bradley, and Shan fails to disclose wherein the ensemble distance between the first data set and the second data set is calculated as a weighted average of the dimensional distance for each dimension, wherein a weight is applied to each dimensional distance in which the weight is based on a cluster quality associated with each dimension, wherein the cluster quality associated with each dimension is determined by a feature analysis module that determines a score for each of the plurality of clustering models as a measure of how the first and second data sets fit the first and second clustering models.
On the other hand, Williams teaches wherein the ensemble distance between the first data set and the second data set is calculated as a weighted average of the dimensional distance for each dimension, wherein a weight is applied to each dimensional distance in which the weight is based on a cluster quality associated with each dimension, wherein the cluster quality associated with each dimension is determined by a feature analysis module that determines a score for each of the plurality of clustering models as a measure of how the first and second data sets fit the first and second clustering models. (See Figs. 7-14, paras. 36-37, 96-97, 99-101, wherein grouping component (average centroid distance), calculated score for each cluster that includes a weighted average of different components are disclosed, also See paras. 51, 163, wherein evaluating quality of cluster solutions in which “computation management module 214 may evaluate the quality of the cluster solutions using a weighed scoring system. The computation management module 214 may use the scores in several ways, such as: (a) assisting in the search algorithm, if one is utilized, and/or (b) allowing the user to scrutinize only those solutions with the highest overall scores.
In some implementations, the quality score is based on a weighted combination of factors that are evaluated on the variables that have been separated by the data management module 208, for example: how well the solutions cover the range of values from the Target Drivers; how tightly grouped the clusters are across the chosen Cluster Candidates; and the overall diversity or heterogeneity of the clusters across both the Cluster Candidates and the Profile Variables” (analogous to cluster quality associated with each dimension is determined by a feature analysis module that determines a score for each of the plurality of clustering models as a measure of how the first and second data sets fit the first and second clustering models) [0051] are disclosed; as taught by Williams.)
Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Williams teachings into the combination of the Han, Bradley, and Shan system. A skilled artisan would have been motivated to incorporate the system for clustering and evaluation of data taught by Williams into the combination of the Han, Bradley, and Shan system for effective dimension reduction in the context of unsupervised learning. In addition, all of the references (Han, Bradley, Shan, and Williams) teach features that are directed to analogous art, and they are directed to the same field of endeavor, such as vector quantization. This close relation among the references highly suggests an expectation of success.
As per claim 25, the combination of Han, Bradley, Shan, and Williams discloses wherein the ensemble distance between the first data set and the second data set is calculated as a weighted average of the first dimensional distance and the second dimensional distance, where a first weight applied to the first dimensional distance is based on a first cluster quality corresponding to the first cluster vector and a second weight applied to the second dimensional distance is based on a second cluster quality corresponding to the second cluster vector. (See Fig. 4C, paras. 2, 46-49, 56, wherein combinations of dimensions and dimension values are disclosed; also see paras. 6, 22, 44, 77, wherein centroid models, a clustering algorithm, and dimension settings (i.e., distance function, density threshold, number of clusters) are disclosed; as taught by Han.)
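For illustration of the weighted-average computation recited in the claims and mapped above, the following sketch shows an ensemble distance calculated as a weighted average of per-dimension distances, where each weight is a cluster-quality score for that dimension. This sketch is illustrative only; the function and variable names are assumptions of the examiner's explanation and are not drawn from the claims or the cited references.

```python
# Illustrative sketch: ensemble distance as a weighted average of
# per-dimension distances, weighted by per-dimension cluster-quality
# scores (all names are assumptions, not claim or reference language).

def ensemble_distance(dimensional_distances, cluster_qualities):
    """Return the weighted average of per-dimension distances.

    dimensional_distances: one distance value per dimension
    cluster_qualities: one quality score per dimension, used as the
        weight for that dimension (higher quality -> greater influence)
    """
    if len(dimensional_distances) != len(cluster_qualities):
        raise ValueError("one quality score is required per dimension")
    total_weight = sum(cluster_qualities)
    if total_weight == 0:
        raise ValueError("at least one nonzero quality score is required")
    weighted_sum = sum(d * w for d, w in
                       zip(dimensional_distances, cluster_qualities))
    return weighted_sum / total_weight

# Example: distances 2.0 and 4.0 with quality weights 3.0 and 1.0
# give (2.0*3.0 + 4.0*1.0) / (3.0 + 1.0) = 2.5
```

With equal quality scores the result reduces to the ordinary arithmetic mean of the dimensional distances, which is consistent with the weighted-average reading of the claim language.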
Conclusion
1. The examiner requests that, in response to this Office action, support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application.
2. When responding to this Office action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).
POINT OF CONTACT
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LIN LIN M HTAY whose telephone number is (571)272-7293. The examiner can normally be reached on M-F, 7am-3pm, PST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kavita Stanley can be reached on (571)272-8352. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/L. L. H./
Examiner, Art Unit 2153
/KRIS E MACKES/ Primary Examiner, Art Unit 2153