DETAILED ACTION
This Office Action is in response to the amendments filed on 10/08/2025.
Claims 1, 9, and 17 are currently amended.
Claims 1-6, 8-14, and 16-20 are currently pending in this application and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
In reference to Applicant’s arguments on page(s) 9-13 regarding rejections made under 35 U.S.C. 101:
Applicant submits that amended claims 1, 9, and 17, and their respective dependent claims, are patent eligible under § 101 for at least the following reasons.
First, the independent claims are not directed to an abstract idea. In the Office Action, the Examiner alleged that the limitations cover "a mental process including an observation, evaluation, judgement or opinion that could be performed in the human mind or with the aid of pencil and paper." Office Action, p. 10.
Applicant respectfully disagrees and submits that the independent claims, at least as amended, do not fall within the mental processes grouping. The mental processes grouping of abstract ideas covers concepts performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III. Particularly, training a quantum machine learning model based on the NISQ devices, with the qubits representing the data points of the first subset of the dataset, involves more than mere observations, evaluations, judgments, and/or opinions, such that it cannot practically be performed in the human mind or with the aid of pencil and paper. A human mind cannot train a quantum machine learning model. Particularly, a human mind cannot identify data points and apply the data points to physical qubits of a quantum machine.
Accordingly, the independent claims and their respective dependent claims do not fall into the category of "mental processes." The claims of the current application are therefore patent eligible, and the subject matter eligibility analysis need not proceed further. But even if further analysis were performed, the claims would still pass it.
Applicant submits that, for the above reasons, independent claims 1, 9, and 17 as a whole are not directed to an abstract idea but are instead drawn to patent-eligible subject matter. Therefore, Applicant respectfully submits that the claims are not directed to a judicial exception.
Second, assuming, arguendo, that the claims recite one of the subject matter groupings of abstract ideas enumerated under Step 2A, Prong One, which is not conceded, the concepts of the claims are integrated into a practical application such that the claims are not directed to an abstract idea.
Regarding the Step 2A, Prong Two analysis, section 2106.04(d)(1) of the M.P.E.P. provides that "[a] claim reciting a judicial exception is not directed to the judicial exception if it also recites additional elements demonstrating that the claim as a whole integrates the exception into a practical application." M.P.E.P. § 2106.04(d)(1). "One way to demonstrate such integration is when the claimed invention improves the functioning of a computer or improves another technology or technical field." Id.
The Office Action states that the claims are not integrated into a practical application because "[t]his additional element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component". See Office Action at p. 14. Applicant respectfully disagrees.
The independent claims, at least as amended, recite additional limitations reflecting an improvement of the existing technology. In particular, the quantum machine learning model is trained based on a subset of data points derived from a dataset including a number of data points. The data points included in the subset are representative of the dataset from which the subset is constructed such that "the functionality of a computing system ... may be improved by increasing the training speed of machine learning models implemented on the computing system while maintaining a target level of accuracy of the trained model."
Particularly, "[p]erforming training a quantum machine learning model using a NISQ device may be impractical because NISQ devices may not include sufficient qubits for performing the operations necessary to train the quantum machine learning model." Specification-as-filed, para. [0015]. The claims of the present disclosure improve training of a quantum machine learning model implemented on one or more NISQ devices by "representing a large dataset using a subset of data points representative of the large dataset according to the present disclosure." Id.
As such, by performing the limitations recited in the independent claims, the method improves the functioning of a computer or improves technologies in an existing technical area of, for example, training quantum machine learning models.
The rejection is traversed because, even if the claims are directed to an abstract idea under Step 2A, Prong One, the claims nevertheless recite additional elements that integrate the judicial exception into a practical application under Step 2A, Prong Two. In particular, the claims of the instant application recite limitations "...such that the training is accelerated relative to training on the entire dataset and is executable within the limited qubit resources of the NISQ devices," and these limitations integrate the exception into a practical application because they improve the functionality of a computer.
Accordingly, because the additional limitations of the independent claims reflect an improvement in the functioning of another technology or technical field, the claim integrates the judicial exception into a practical application and thus imposes a meaningful limit on the judicial exception.
Further analysis under Step 2B is not necessary. However, even if, under Step 2A, the Examiner still considers the claimed invention to be directed to a judicial exception, the amended claims pass Step 2B because they amount to significantly more than the judicial exception. The Step 2B analysis under M.P.E.P. § 2106.05 requires determining whether "the additional elements recited in the claims provided 'significantly more' than the recited judicial exception (e.g., because the additional elements were unconventional in combination)." M.P.E.P. § 2106.05.
Properly analyzed, the independent claims are patent eligible under the Step 2B analysis of M.P.E.P. § 2106.05. Section 2106.05(I)(A) of the M.P.E.P. provides limitations that the courts have found to qualify as "significantly more" when recited in a claim with a judicial exception. Such limitations include improvements to the functioning of a computer, improvements to any other technology or technical field, applying the judicial exception with (or by use of) a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, and adding a specific limitation other than what is well-understood, routine, conventional activity in the field.
For at least these reasons, Applicant respectfully submits that claims 1, 9, and 17 are not directed to an abstract idea. Applicant therefore respectfully requests that the Examiner withdraw the rejections of independent claims 1, 9, and 17 and dependent claims 2-6, 8, 10-14, 16, and 18-20 under 35 U.S.C. § 101.
Examiner’s response:
Applicant’s arguments have been fully considered but are found to be not persuasive.
Applicant argues that the claims as amended cannot be directed to an abstract idea because the human mind cannot train a quantum machine learning model based on NISQ devices. Examiner agrees with this sentiment; however, the step of training the quantum machine learning model is not the abstract idea in question. The claims as amended still recite actions of partitioning data, clustering data, determining weight values of the data, and selecting partitions of the data to be removed based on the weights and centroids of the data. These abstract ideas are all related to data manipulation and mathematical concepts. Because these actions are performed prior to the training of the quantum machine learning model, they are related to preprocessing of data, which can be performed in the human mind or with the aid of pencil and paper.
Applicant argues that the instant application, as amended, provides an improvement to the functioning of a computer. Examiner disagrees. Applicant argues that the improvement comes from the processing of data "...such that the training is accelerated relative to training on the entire dataset and is executable within the limited qubit resources of the NISQ devices." The training of the quantum model is only accelerated due to the preprocessing of the data, which includes removing partitions of the dataset to decrease the amount of data processed. This is not an improvement to the functioning of a computer because the training of the model is not done in a unique way and is only improved because there is less data to process. The improvement of a technology cannot arise from an abstract idea, in this case the data manipulation that is performed prior to training the model, and as such no improvement is presented in the amended claims.
In light of the amendments made to the claims, the rejections under 35 U.S.C. 101 are maintained and updated below.
In reference to Applicant’s arguments on page(s) 13-19 regarding rejections made under 35 U.S.C. 103:
Applicant respectfully traverses the rejections for at least the following reasons:
Gupta
Applicant respectfully traverses the rejections. The cited references, alone or in combination, fail to disclose all limitations of the independent claims. Particularly, Gupta fails to teach subject-based partitioning, weight vectors, weighted centroids, or partition weights.
Gupta does not teach Subject-Based Partitioning
Claim 1 recites that the dataset be separated into partitions "based on a target number of subjects and a dimensionality of the data points." While Gupta discloses clustering data objects in a multidimensional feature space ([0088]), this clustering is performed without reference to any target number of subjects.
The present claims recite classifying data objects by subject, where the "subject" refers to the semantic content of the data itself, rather than metadata or file type. Gupta, by contrast, is directed to file classification at the metadata or topic level. Specifically, claim 1 recites: "...separating the dataset into a plurality of partitions based on a target number of subjects and a dimensionality of the data points ..."
and "... obtaining a plurality of weight vectors, each respective weight vector corresponding to respective subject of the target number of subjects..." (Claim 1, as-filed application, emphasis added).
The specification explains that the "subject" is inherent to the data and task at hand: "...the target number of represented subjects may indicate a number of parameters associated with a topic related to the machine learning model ... For example, in some implementations, the parameters may represent respective characteristics such as weighted average, exponential moving average, median, or other parameters." (Spec. [0024], emphasis added). In addition, each data point (not whole document) is assigned a weight vector that quantifies its degree of relevance to predefined semantic subjects (Spec. [0028]-[0029]). These subjects are conceptual categories, and each weight vector element represents the semantic contribution of that data point to the subject. Weighted centroids are then computed for subjects, and partitions are evaluated against these centroids for removal. Accordingly, Applicant's weight vectors capture the semantic meaning of the data relative to subjects, not raw keyword frequencies.
By contrast, Gupta repeatedly describes classification at the metadata or file-type level. Gupta explains: "Metadata is information about a data object, such as the identity of the creator ... the time at which the object was created, the type of file, etc. This information ... may be employed to classify the topic of the data object." (Gupta [0054], emphasis added). Gupta further discloses "topic classification" based on keywords or expert-applied tags: "Topic classification may include identifying or predicting a classification for a document ... using one or more words representing a topic ... [an] expert user identifying specific words ... and tag the documents with a particular classification." (Gupta [0052]). And Gupta explicitly describes clusters based on document types: "One cluster may be for workover reports, while another is for drilling logs, and another is artificial lift data." (Gupta [0089], emphasis added).
Gupta also explains generating weight vectors using TF-IDF "TFIDF ... defines how important a term is in a document with respect to all the documents in the dataset. It is used as a term weighting factor where the TFIDF score represents the importance of a word/phrase/feature in a textual paragraph within a corpus." (Gupta [0113]). "These methods generate a matrix containing the term weighting scores of each word in each document. Each row of the matrix is a document and each column represents a word/phrase/feature and the elements in this matrix are the weighting scores." (Gupta [0114]).
Accordingly, Gupta is concerned with categorizing documents by metadata, keywords, or document type (e.g., drilling logs vs. workover reports). The weight vector for each document is a vector of keyword frequency scores. A document's placement in vector space depends only on the relative statistical frequency of terms across documents. The end result is a topic-level classification of documents into "types" based on clustering of similar word frequencies.
This is fundamentally different from the present claims, which recite classifying data points by semantic subject matter inherent in the dataset (e.g., weighted average vs. exponential moving average), as described in the specification.
Gupta does not teach Weight Vectors
The specification clarifies that: "...for each data point ... a weight vector including a plurality of values may be calculated, with each value representing the weight ... in relation to a corresponding represented subject." (Spec. [0029], emphasis added).
Thus, weight vectors of the claims are subject-oriented: each dimension corresponds to a subject, and each value indicates how strongly the data point relates to that subject.
By contrast, the Examiner cites Gupta's disclosure of TF-IDF vectors: "TFIDF ... defines how important a term is in a document with respect to all the documents in the dataset ... [it] generates a matrix containing the term weighting scores of each word in each document. Each row of the matrix is a document and each column represents a word/phrase/feature and the elements ... are the weighting scores." (Gupta [0113]-[0114], emphasis added).
These TF-IDF vectors are feature vectors based on word/phrase frequency, not subject-oriented weight vectors. In Gupta, each dimension corresponds to a term (feature), not a subject as defined in Applicant's claims. Accordingly, Gupta does not teach or suggest the weight vectors as recited in the claims.
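To make the contrast concrete, a standard TF-IDF weighting of the kind Gupta describes ([0113]-[0114]) can be sketched as follows. The formulas are the textbook tf·idf (with a +1 idf offset) and the toy documents are hypothetical, so this is an illustration of the general technique rather than Gupta's exact implementation:

```python
# Illustrative TF-IDF term weighting: each document becomes a row
# vector whose columns are term weighting scores (cf. Gupta [0114]).
import math

def tfidf_matrix(docs):
    """Return (vocabulary, matrix) where matrix[i][j] is the TF-IDF
    score of term j in document i."""
    tokenized = [doc.lower().split() for doc in docs]
    vocab = sorted({t for doc in tokenized for t in doc})
    n = len(docs)
    # document frequency of each term across the corpus
    df = {t: sum(1 for doc in tokenized if t in doc) for t in vocab}
    matrix = []
    for doc in tokenized:
        row = []
        for t in vocab:
            tf = doc.count(t) / len(doc)          # term frequency
            idf = math.log(n / df[t]) + 1.0       # offset idf
            row.append(tf * idf)
        matrix.append(row)
    return vocab, matrix

docs = ["drilling log drilling report", "workover report"]
vocab, m = tfidf_matrix(docs)
# "drilling" never occurs in the second document, so its score there is 0
```

Note that each column (dimension) of the resulting matrix corresponds to a term such as "drilling", not to a semantic subject, which is precisely the distinction drawn above.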
Gupta does not teach Weighted Centroids
The Examiner cites Gupta's disclosure of clustering: "The preprocessing may also include tokenizing ... prior to TFIDF vectorization ... [then] executing an unsupervised machine learning technique to identify one or more clusters ..." (Gupta [0112], [0118]); "The algorithm follows ... assigning data points as centroids and finding distances of other data points to the centroids ..." (Gupta [0108]).
However, in Gupta, centroids are cluster centers in a feature space of terms, not weighted centroids tied to subjects. They represent average positions of feature vectors (e.g., TF-IDF term vectors), not centroids determined from subject-oriented weight vectors. Accordingly, Gupta fails to disclose the claimed weighted centroids.
Gupta does not teach Partition Weights
As explained above, each data point is assigned a subject-oriented weight vector (Spec. [0029]). Thus, "partition weights" are derived from subject-oriented weight vectors; each partition weight reflects the contribution of its data points relative to the subjects.
By contrast, Gupta recites, as cited in the Office Action: "...the clustering technique may associate a score or vector with the data object, which produces a 'location' thereof within a multi-dimensional space ... clusters are then determined based on the proximity of the locations of the data objects ..." (Gupta [0088]); and that TF-IDF produces "term weighting scores of each word in each document" (Gupta [0114]).
Again, Gupta's "scores" and "vectors" are similarity measures in a feature space of terms, not partition weights derived from subject-based weight vectors. Proximity in feature space is not equivalent to partition weights as recited in the claims.
Hoff
Hoff fails to remedy the deficiencies of Gupta. In particular, Hoff fails to teach "...selecting a first partition of the plurality of partitions... between the respective first weighted centroid and each of the first partition weights."
Taken together, the claims and specification establish the following: each data point is assigned a subject-oriented weight vector; from these, weighted centroids are computed for the subjects; partition weights are computed from the subject-oriented vectors; and partition removal decisions are made by comparing the weighted centroids to the partition weights.
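For purposes of illustration only, this flow may be sketched as follows; the function names, the simple averaging formulas, and the simplified selection criterion (removing the lowest-weight partition) are hypothetical and are not drawn from the claims or specification:

```python
# Hypothetical sketch of the claimed flow: subject-oriented weight
# vectors -> weighted centroids -> partition weights -> removal.

def weighted_centroid(points, weights):
    """Centroid of points, weighted by a subject-oriented weight vector."""
    total = sum(weights)
    dim = len(points[0])
    return [sum(w * p[i] for p, w in zip(points, weights)) / total
            for i in range(dim)]

def partition_weight(weights):
    """Aggregate weight of one partition for a given subject."""
    return sum(weights) / len(weights)

def select_partition_to_remove(partitions, subject_weights):
    """Select the partition with the smallest aggregate weight, i.e.,
    the one contributing least to the weighted centroid (a simplified
    stand-in for the claimed centroid/partition-weight comparison)."""
    pw = [partition_weight(ws) for ws in subject_weights]
    return pw.index(min(pw))

# Toy data: two partitions of 2-D points, one subject.
partitions = [[(0.0, 0.0), (1.0, 1.0)], [(10.0, 10.0)]]
subject_weights = [[0.9, 0.8], [0.1]]

all_points = [p for part in partitions for p in part]
all_weights = [w for ws in subject_weights for w in ws]
centroid = weighted_centroid(all_points, all_weights)

idx = select_partition_to_remove(partitions, subject_weights)
first_subset = [part for i, part in enumerate(partitions) if i != idx]
```

Under these toy numbers, the low-weight outlier partition is removed, leaving the first subset whose data points would then be loaded into qubits for training.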
By contrast, Hoff only discloses discarding clusters with low probability of matching, without any subject-based weight vectors or weighted centroids.
These disclosures differ in at least three critical ways. First, Hoff's "centroid" is an unweighted geometric average of features, not a subject-oriented weighted centroid derived from weight vectors as in the present application. Second, Hoff's "discarding" step is triggered by a low probability of matching during feature-based nearest-neighbor classification, not by comparing partition weights to weighted centroids. Third, Hoff nowhere discloses or suggests partition weights. Applicant's "partition weights" are values derived from subject-oriented weight vectors (Spec. [0029]), while Hoff's "probability" is merely a heuristic confidence score for feature matching.
Accordingly, Hoff does not disclose or suggest "selecting a first partition of the plurality of partitions to remove from the dataset based on respective relationships between the respective first weighted centroid and each of the first partition weights," as recited in Claim 1. The cited passages of Hoff ([0052], [0063]) disclose only discarding clusters with low probability of matching and computing unweighted centroids in feature space, which is fundamentally different from Applicant's claimed subject-oriented weight-based removal process.
Examiner’s response:
Applicant’s arguments have been fully considered and are found to be persuasive.
Applicant argues that the “subjects” of the instant application are not taught by Gupta and that Gupta is primarily concerned with clustering based on metadata of a file system. Examiner agrees that Gupta does not cluster based on something that is analogous to the “subjects” of the instant application; however, it is the recommendation of the Examiner that the claim language be amended to better define what is meant by the “subject”. As it is currently written, the “subject” of the claims can easily be interpreted as being analogous to a generic feature of a piece of data or dataset. The description provided in the arguments presented and in the specification, if incorporated into the claims, would serve to clarify what is meant by “subject” in the claims.
Applicant argues that Gupta does not teach the use of weight vectors; upon further review of the reference, Examiner agrees. The vectors used in Gupta are related to the frequency of words in the document/dataset. While the Examiner is of the opinion that the words/phrases of Gupta are analogous to features of a given dataset, and therefore to the “subjects” of the claims (a point brought up above), the weighting matrix used in Gupta is not one that characterizes each feature as having a value that represents the relative contribution of that feature.
Applicant argues that Gupta does not teach the use of weighted centroids. Examiner agrees. It is clear upon further review of the reference that the centroids of Gupta are not based on any weighting factor but are based on average distance of feature vectors in relation to one another.
Applicant argues that Gupta does not teach partition weights for the plurality of clusters. Examiner agrees. Gupta’s clusters are based on similarity measures instead of weighting values, and similarity in feature space is not the same as weighting the partitions of data.
It is clear to the Examiner that the deficiencies of Gupta are not remedied by Hoff, as Hoff teaches selecting and removing partitions of data based on a similarity matching scheme instead of taking into account any feature weights or centroid weights of the clusters. Redmond also does not provide evidence of remedying the deficiencies of Gupta and Hoff.
In light of the arguments presented, the rejections made under 35 U.S.C. 103 are withdrawn.
Claim Rejections - 35 USC § 101
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-6, 8-14, and 16-20 are rejected under 35 U.S.C. 101 because they are directed to an abstract idea without significantly more.
Step 1 analysis:
Independent Claim 1 recites, in part, a method, therefore falling into the statutory category of process. Independent Claim 9 recites, in part, one or more non-transitory computer-readable storage media configured to store instructions, therefore falling into the statutory category of manufacture. Independent Claim 17 recites, in part, a system comprising one or more processors and one or more non-transitory computer-readable storage media, therefore falling into the statutory category of machine.
Regarding Claim 1:
Step 2A: Prong 1 analysis:
Claim 1 recites in part:
“separating the dataset into a plurality of partitions based on a target number of subjects and a dimensionality of the data points included in the dataset, each of the partitions including one or more data points of the plurality of data points”. Under the broadest reasonable interpretation, this limitation covers a mental process including an observation, evaluation, judgment or opinion that could be performed in the human mind or with the aid of pencil and paper. See MPEP 2106.04(a)(2)(III). As drafted, this encompasses separating data according to a number of separations and a dimensionality.
“obtaining a plurality of weight vectors, each respective weight vector corresponding to a respective subject of the target number of subjects and including a value indicative of relative contribution of each data point to the respective subject”. Under the broadest reasonable interpretation, this limitation covers a mental process including an observation, evaluation, judgment or opinion that could be performed in the human mind or with the aid of pencil and paper. See MPEP 2106.04(a)(2)(III). As drafted, this encompasses creating weight vectors that correspond to the number of data separations.
“determining a plurality of first weighted centroids of the dataset each respective first weighted centroid corresponding to a respective subject of the target number of subjects and being determined based on the plurality of data points and a respective weight vector associated with the respective subject that corresponds to the respective first weighted centroid”. Under the broadest reasonable interpretation, this limitation covers a mental process including an observation, evaluation, judgment or opinion that could be performed in the human mind or with the aid of pencil and paper. See MPEP 2106.04(a)(2)(III). As drafted, this encompasses clustering a dataset based on the number of separations of the dataset.
“determining a plurality of first partition weights, each of the first partition weights being determined based on the respective data points included in a respective partition and one or more elements of a respective weight vector associated with the respective data points”. Under the broadest reasonable interpretation, this limitation covers a mental process including an observation, evaluation, judgment or opinion that could be performed in the human mind or with the aid of pencil and paper. See MPEP 2106.04(a)(2)(III). As drafted, this encompasses determining parameter values based on data that has previously been augmented.
“selecting a first partition of the plurality of partitions to remove from the dataset based on respective relationships between the respective first weighted centroid and each of the first partition weights”. Under the broadest reasonable interpretation, this limitation covers a mental process including an observation, evaluation, judgment or opinion that could be performed in the human mind or with the aid of pencil and paper. See MPEP 2106.04(a)(2)(III). As drafted, this encompasses selecting data to be removed from a dataset.
“obtaining a first subset of the dataset by removing the data points associated with the first partition from the dataset”. Under the broadest reasonable interpretation, this limitation covers a mental process including an observation, evaluation, judgment or opinion that could be performed in the human mind or with the aid of pencil and paper. See MPEP 2106.04(a)(2)(III). As drafted, this encompasses removing data from a dataset.
Accordingly, at Step 2A: Prong 1, the claim is directed to an abstract idea.
Step 2A: Prong 2 analysis:
The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of:
“obtaining a dataset including a plurality of data points”. This additional element amounts to extra-solution activity of receiving data (MPEP 2106.05(g)), i.e., pre-solution activity of gathering data for use in the claimed process.
“loading each data point of the first subset into qubits of one or more noisy intermediate-scale quantum (NISQ) devices”. This additional element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (NISQ device) (See MPEP 2106.05(f)).
“training a quantum machine learning model based on the one or more NISQ devices with the qubits representing the data points of the first subset of the dataset”. This additional element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (NISQ device) (See MPEP 2106.05(f)).
Accordingly at Step 2A: Prong 2, the additional elements individually or in combination do not integrate the judicial exception into a practical application.
Step 2B analysis:
In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, the additional element(s) of “obtaining a dataset including a plurality of data points” is/are recited at a high level of generality and amount(s) to extra-solution activity of receiving data, i.e., pre-solution activity of gathering data for use in the claimed process. The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network”, “electronic record keeping,” and “storing and retrieving information in memory”).
As discussed above, the additional element(s) of “loading each data point of the first subset into qubits of one or more noisy intermediate-scale quantum (NISQ) devices” and “training a quantum machine learning model based on the one or more NISQ devices with the qubits representing the data points of the first subset of the dataset” is/are recited at a high-level of generality such that it/they amount(s) to no more than mere instructions to apply the exception using generic computer components (See MPEP 2106.05(f)).
Accordingly, at Step 2B, the additional elements individually or in combination do not amount to significantly more than the judicial exception.
Regarding Claim 2:
Step 2A: Prong 1 analysis:
Claim 2 recites in part:
“determining one or more second weighted centroid of the dataset each corresponding to a respective subject of the target number of subjects, each of the second weighted centroids being determined based on the data points included in the first subset and the respective weight vector associated with the respective subject”. Under the broadest reasonable interpretation, this limitation covers a mental process including an observation, evaluation, judgment or opinion that could be performed in the human mind or with the aid of pencil and paper. See MPEP 2106.04(a)(2)(III). As drafted, this encompasses clustering a dataset based on the number of separations of the dataset.
“determining one or more second partition weights included in the first subset, each of the second partition weights being determined based on one or more elements of a weight vector associated with a respective subject of the target number of subjects”. Under the broadest reasonable interpretation, this limitation covers a mental process including an observation, evaluation, judgment or opinion that could be performed in the human mind or with the aid of pencil and paper. See MPEP 2106.04(a)(2)(III). As drafted, this encompasses determining parameter values based on data that has previously been augmented.
“identifying a second partition of the partitions included in the first subset having a least influence on the determining the one or more second weighted centroid based on the second partition weights”. Under the broadest reasonable interpretation, this limitation covers a mental process including an observation, evaluation, judgment or opinion that could be performed in the human mind or with the aid of pencil and paper. See MPEP 2106.04(a)(2)(III). As drafted, this encompasses determining a second separation of the dataset.
“obtaining a second subset by removing the data points associated with the second partition from the first subset”. Under the broadest reasonable interpretation, this limitation covers a mental process including an observation, evaluation, judgment or opinion that could be performed in the human mind or with the aid of pencil and paper. See MPEP 2106.04(a)(2)(III). As drafted, this encompasses removing data from a dataset.
Accordingly, at Step 2A: Prong 1, the claim is directed to an abstract idea.
Step 2A: Prong 2 analysis:
The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of:
“training the quantum machine learning model based on the second subset of the data set”. This additional element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (quantum computer) (See MPEP 2106.05(f)).
Accordingly at Step 2A: Prong 2, the additional elements individually or in combination do not integrate the judicial exception into a practical application.
Step 2B analysis:
In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, the additional element(s) of “training the quantum machine learning model based on the second subset of the data set” is/are recited at a high-level of generality such that it/they amount(s) to no more than mere instructions to apply the exception using generic computer components (See MPEP 2106.05(f)).
Accordingly, at Step 2B, the additional elements individually or in combination do not amount to significantly more than the judicial exception.
Regarding Claim 3:
Step 2A: Prong 1 analysis:
Claim 3 recites in part:
“determining an iteration condition”. Under the broadest reasonable interpretation, this limitation covers a mental process including an observation, evaluation, judgment or opinion that could be performed in the human mind or with the aid of pencil and paper. See MPEP 2106.04(a)(2)(III). As drafted, this encompasses determining a condition to iterate on.
“determining whether the iteration condition is satisfied”. Under the broadest reasonable interpretation, this limitation covers a mental process including an observation, evaluation, judgment or opinion that could be performed in the human mind or with the aid of pencil and paper. See MPEP 2106.04(a)(2)(III). As drafted, this encompasses determining that a condition is satisfied.
Accordingly, at Step 2A: Prong 1, the claim is directed to an abstract idea.
Step 2A: Prong 2 analysis:
The claim does not recite any additional elements that integrate the judicial exception into a practical application.
Step 2B analysis:
In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Regarding Claim 4:
Step 2A: Prong 1 analysis:
Claim 4 recites in part:
“wherein the dataset is separated into 2k(d + 1) partitions, wherein "k" represents a target number of points and "d" represents the dimensionality of the data points”. As drafted and under its broadest reasonable interpretation, this limitation covers a mathematical relationship.
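For illustration only, the recited partition count 2k(d + 1) is a simple linear relationship in both the target number of points "k" and the dimensionality "d"; the following sketch uses hypothetical example values not drawn from the claims:

```python
# Illustration of the partition count recited in Claim 4: 2k(d + 1),
# where k is the target number of points and d is the dimensionality
# of the data points. The sample values below are hypothetical.

def partition_count(k: int, d: int) -> int:
    """Number of partitions into which the dataset is separated."""
    return 2 * k * (d + 1)

# e.g., a hypothetical target of k = 10 points in d = 3 dimensions
# yields 2 * 10 * (3 + 1) = 80 partitions.
print(partition_count(10, 3))
```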
Accordingly, at Step 2A: Prong 1, the claim is directed to an abstract idea.
Step 2A: Prong 2 analysis:
The claim does not recite any additional elements that integrate the judicial exception into a practical application.
Step 2B analysis:
In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Regarding Claim 5:
Step 2A: Prong 1 analysis:
Claim 5 recites in part:
“wherein selecting the first partition of the plurality of partitions to remove from the dataset comprises identifying the partition as having a least influence on the determining the first weighted centroid of the dataset by comparing the first partition weights to the first weighted centroid to determine which partitions corresponding to the first partition weights contributes the least to representation of the first weighted centroid”. Under the broadest reasonable interpretation, this limitation covers a mental process including an observation, evaluation, judgment or opinion that could be performed in the human mind or with the aid of pencil and paper. See MPEP 2106.04(a)(2)(III). As drafted, this encompasses comparing weights and data to determine what data to remove.
Accordingly, at Step 2A: Prong 1, the claim is directed to an abstract idea.
Step 2A: Prong 2 analysis:
The claim does not recite any additional elements that integrate the judicial exception into a practical application.
Step 2B analysis:
In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Regarding Claim 6:
Step 2A: Prong 1 analysis:
Claim 6 recites in part:
“determining one or more machine-learning parameters based on quantum data points”. Under the broadest reasonable interpretation, this limitation covers a mental process including an observation, evaluation, judgment or opinion that could be performed in the human mind or with the aid of pencil and paper. See MPEP 2106.04(a)(2)(III). As drafted, this encompasses determining parameters of a machine learning model.
Accordingly, at Step 2A: Prong 1, the claim is directed to an abstract idea.
Step 2A: Prong 2 analysis:
The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of:
“loading each data point included in the first subset into a quantum state”. This limitation merely indicates a field of use or technological environment in which the judicial exception is performed (quantum computing) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h).
Accordingly, at Step 2A: Prong 2, the additional elements individually or in combination do not integrate the judicial exception into a practical application.
Step 2B analysis:
In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The additional element of “loading each data point included in the first subset into a quantum state” is directed to a particular field of use (quantum computing) (MPEP 2106.05(h)) and therefore does not provide significantly more than the abstract idea; thus the claim is subject-matter ineligible.
Accordingly, at Step 2B, the additional elements individually or in combination do not amount to significantly more than the judicial exception.
Regarding Claim 8:
Step 2A: Prong 2 analysis:
The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of:
“the plurality of data points included in the dataset include financial or economic data”. This limitation merely indicates a field of use or technological environment in which the judicial exception is performed (financial and economic data) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h).
“the quantum machine learning model is trained to perform analysis of financial data or economic data”. This limitation merely indicates a field of use or technological environment in which the judicial exception is performed (quantum computing) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h).
Accordingly, at Step 2A: Prong 2, the additional elements individually or in combination do not integrate the judicial exception into a practical application.
Step 2B analysis:
In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The additional elements of “the plurality of data points included in the dataset include financial or economic data” and “the quantum machine learning model is trained to perform analysis of financial data or economic data” are directed to particular fields of use (financial and economic data, and quantum computing) (MPEP 2106.05(h)) and therefore do not provide significantly more than the abstract idea; thus the claim is subject-matter ineligible.
Accordingly, at Step 2B, the additional elements individually or in combination do not amount to significantly more than the judicial exception.
Regarding Claim 9:
Due to claim language similar to that of Claim 1, Claim 9 is rejected for the same reasons as presented above in the rejection of Claim 1, with the exception of the following limitation.
Step 2A: Prong 2 analysis
The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of:
“One or more non-transitory computer-readable storage media configured to store instructions that, in response to being executed by one or more processors, cause a system to perform operations”. This additional element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (storage media and processor) (See MPEP 2106.05(f)).
Accordingly, at Step 2A: Prong 2, the additional elements individually or in combination do not integrate the judicial exception into a practical application.
Step 2B analysis:
In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, the additional element of “One or more non-transitory computer-readable storage media configured to store instructions that, in response to being executed by one or more processors, cause a system to perform operations” is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components (See MPEP 2106.05(f)).
Accordingly, at Step 2B, the additional elements individually or in combination do not amount to significantly more than the judicial exception.
Regarding Claim 10:
Due to claim language similar to that of Claim 2, Claim 10 is rejected for the same reasons as presented above in the rejection of Claim 2.
Regarding Claim 11:
Due to claim language similar to that of Claim 3, Claim 11 is rejected for the same reasons as presented above in the rejection of Claim 3.
Regarding Claim 12:
Due to claim language similar to that of Claim 4, Claim 12 is rejected for the same reasons as presented above in the rejection of Claim 4.
Regarding Claim 13:
Due to claim language similar to that of Claim 5, Claim 13 is rejected for the same reasons as presented above in the rejection of Claim 5.
Regarding Claim 14:
Due to claim language similar to that of Claim 6, Claim 14 is rejected for the same reasons as presented above in the rejection of Claim 6.
Regarding Claim 16:
Due to claim language similar to that of Claim 8, Claim 16 is rejected for the same reasons as presented above in the rejection of Claim 8.
Regarding Claim 17:
Due to claim language similar to that of Claims 1 and 9, Claim 17 is rejected for the same reasons as presented above in the rejection of Claims 1 and 9.
Regarding Claim 18:
Due to claim language similar to that of Claims 2 and 10, Claim 18 is rejected for the same reasons as presented above in the rejection of Claims 2 and 10.
Regarding Claim 19:
Due to claim language similar to that of Claims 3 and 11, Claim 19 is rejected for the same reasons as presented above in the rejection of Claims 3 and 11.
Regarding Claim 20:
Due to claim language similar to that of Claims 6 and 14, Claim 20 is rejected for the same reasons as presented above in the rejection of Claims 6 and 14.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20210233008 A1 – A method for analyzing data that includes obtaining data objects from a data repository.
US 20200074672 A1 – A system and method for detecting a pose of an object.
US 20210342730 A1 – A novel and useful system and method of quantum enhanced accelerated training of a classic neural network.
US 20140143251 A1 – Relates to data clustering and, in particular, to a parallel D2-clustering method performed as part of a dynamic hierarchical structure.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to COREY M SACKALOSKY whose telephone number is (703)756-1590. The examiner can normally be reached M-F 7:30am-3:30pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas can be reached at (571) 272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/COREY M SACKALOSKY/Examiner, Art Unit 2128
/OMAR F FERNANDEZ RIVAS/Supervisory Patent Examiner, Art Unit 2128