DETAILED ACTION
This non-final Office action is responsive to application 18/275,045, submitted 31 July 2023.
Claim status follows the Preliminary Amendment: claims 1-21 and 23 are currently pending and under examination; claim 22 is cancelled; claims 7, 9, 11-13, 16-21 and 23 are amended; and claims 1, 21 and 23 are in independent form.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Applicant’s claim for the domestic benefit of prior-filed provisional application 63/146,489 under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. The application has an effective filing date of 02/05/2021.
Information Disclosure Statement
As required by MPEP 609(c), the applicant’s submissions of the Information Disclosure Statements dated 10/17/34 - 12/04/25 are acknowledged by the examiner, and the cited references have been considered in the examination of the claims now pending. As required by MPEP 609(c)(2), a copy of the PTOL-1449, initialed and dated by the examiner, is attached to the instant Office action.
Specification
The specification is objected to because the abstract of the disclosure does not commence on a separate sheet in accordance with 37 CFR 1.52(b)(4) and 1.72(b). A new abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA /25, or PTO/AIA /26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 5-6, 12, 19-21 and 23 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3, 7, 14, 29-30 and 33-34 of copending Application No. 18/867,043 in view of Tang et al., “Integrated Principal Component Analysis” (arXiv: 1810.00832v2). This is an obviousness-type provisional nonstatutory double patenting rejection. The instant application recites claim limitations that are substantively similar to those of the co-pending application, with a substitution of terminology (bold below), but does not recite variance (underlined below), which is met by Tang. Notably, the equivalence of terminology is apparent from the instant application’s Provisional 63/146,489, whose Appendix includes inventor Gemp’s reference EigenGame: PCA as a Nash Equilibrium, stating at [P.3 Sect.3 ¶1] “We adhere to the following notation… principal components (equivalently eigenvectors),” which demonstrates the equivalence of terminology. Further, the difference in scope regarding variance is met by reference Tang as per [P.3 Sect1.2] “maximize the variance …variance-maximizing,” thus covering the remaining scope of the claim not already met by the co-pending application. A person having ordinary skill in the art would have considered it obvious prior to the effective filing date to maximize variance per Tang in combination to arrive at the invention as claimed, as applying known techniques to known methods ready for improvement to yield predictable results, and/or for a motivation being “variance explained by iPCA” [P.7 Sect2.2.1]. See the following comparison table:
Instant Application: 18/275,045
Co-pending Application: 18/867,043
Claim 1.
A method of determining a plurality of principal components v of a data set X, the method comprising:
obtaining initial estimates for the plurality of principal components v’; and
for each particular principal component vi, generating a final estimate for the principal component vi by repeatedly performing operations comprising:
generating a reward estimate using the data set X and the current estimate v̂i of the particular principal component vi, wherein the reward estimate is larger if the current estimate v̂i of the particular principal component vi captures more variance in the data set X;
generating, for each parent principal component vj of the particular principal component vi, a respective punishment estimate, wherein the punishment estimate is larger if the current estimate v̂i of the particular principal component vi and the current estimate v̂j of the parent principal component vj are not orthogonal;
generating a combined punishment estimate for the particular principal component vi by combining the respective punishment estimates of each parent principal component vj; and
generating an update to the current estimate v̂i of the particular principal component vi according to a difference between the reward estimate and the combined punishment estimate.
Claim 1.
A method of determining a plurality of generalized eigenvectors v of a first matrix A and a second matrix B, wherein at least one of the first matrix A or the second matrix B characterize a data set, the method comprising:
obtaining initial estimates for the plurality of generalized eigenvectors v; and
for each particular generalized eigenvector vi, generating a final estimate for the particular generalized eigenvector vi, comprising at each stage t of a plurality of stages:
performing a plurality of iterations m in parallel, wherein performing each of the plurality of iterations m comprises performing operations comprising:
obtaining a random sample of the data set;
generating, from the random sample of the data set, an estimate Atm of the first matrix A and an estimate Btm of the second matrix B;
generating a reward estimate using (i) the estimate Atm of the first matrix A and the estimate Btm of the second matrix B and (ii) a current estimate v̂i of the particular generalized eigenvector vi, wherein the reward estimate is larger if an estimated generalized eigenvalue λj corresponding to the current estimate v̂i of the particular generalized eigenvector vi is larger;
generating, for each parent generalized eigenvector v of the particular generalized eigenvector vi, a respective punishment estimate, wherein the punishment estimate is larger if the current estimate v̂i of the particular generalized eigenvector vi and a current estimate v̂i of the parent generalized eigenvector v are not orthogonal;
generating a combined punishment estimate by combining the respective punishment estimates for each parent generalized eigenvector v of the particular generalized eigenvector vi; and
generating an initial update to the current estimate v̂i of the particular generalized eigenvector vi according to a difference between the reward estimate and the combined punishment estimate;
generating, from the respective initial updates to the current estimate v̂i of the particular generalized eigenvector vi generated at the plurality of iterations, a combined update to the current estimate vi of the particular generalized eigenvector vi; and
updating the current estimate v̂i of the particular generalized eigenvector vi by applying the combined update.
In the above, bold elements are equivalent terminology per Provisional appendix Gemp-EigenGame, and the underlined element is met by Tang in an obviousness finding discussed above.
Claim 5.
The method of claim 1, wherein
the final estimates for the principal components v are generated in parallel across the principal components v.
Claim 3.
The method of claim 1, wherein,
at each stage t of the plurality of stages, the respective current estimates for each of the plurality of generalized eigenvectors v are updated in parallel.
Claim 6.
The method of claim 5, wherein, for each particular principal component vi:
computations for generating the final estimate for the principal component v are assigned to a respective first processing device of a plurality of first processing devices; and
the current estimate vi of the particular principal component vi is broadcast to each other first processing device of the plurality of first processing devices at regular intervals.
Claim 29.
The method of claim 1, wherein, for each particular generalized eigenvector vi:
computations for generating the final estimate for the particular generalized eigenvector vi are assigned to a respective different set of one or more devices of a plurality of devices; and
the current estimate v̂i of the particular principal generalized eigenvector vi is broadcast to each other device of the plurality of devices at regular intervals.
Claim 12.
The method of any one of claim 1, wherein, for each particular principal component vi:
generating a combined punishment estimate for the particular principal component vi comprises determining a sum of the respective punishment estimates of each parent principal component vj.
Claim 30.
The method of claim 1, wherein, for each particular generalized eigenvector vi:
generating a combined punishment estimate for the particular generalized eigenvector vi comprises determining a sum of the respective punishment estimates of each parent generalized eigenvector.
Claim 19.
The method of claim 1, further comprising:
using the plurality of principal components v to process the data set X using a machine learning model.
Claim 7.
The method of claim 4, further comprising:
using the final estimates for the plurality of generalized eigenvectors v [..] a machine learning model to process first projected elements x' and…
Claim 20.
The method of claim 1, in which the data set X comprises
one or more of: a set of images collected by a camera or a set of text data.
Claim 14.
The method of any one of claims 11-13, wherein at least some of the plurality of elements represent
one or more of: speech data, network signal data, image data, or sensor data.
Claim 21.
A system comprising one or more computers and one or more storage devices storing instructions that when executed by the one or more computers cause the one or more computers to perform operations for determining a plurality of principal components v of a data set X, the operations comprising: [claim 1]
Claim 33.
A system comprising one or more computers and one or more storage devices storing instructions that when executed by the one or more computers cause the one or more computers to perform operations for determining a plurality of generalized eigenvectors v of a first matrix A and a second matrix B, wherein at least one of the first matrix A or the second matrix B characterize a data set, the operations comprising: [claim 1]
Claim 23.
One or more non-transitory computer-readable storage media storing instructions that when executed by one or more computers cause the one or more computers to perform operations for determining a plurality of principal components v of a data set X, the operations comprising: [claim 1]
Claim 34.
One or more non-transitory computer storage media storing instructions that when executed by one or more computers cause the one or more computers to perform operations for determining a plurality of generalized eigenvectors v of a first matrix A and a second matrix B, wherein at least one of the first matrix A or the second matrix B characterize a data set, the operations comprising: [claim 1]
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 1, 21 and 23 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. Particularly, antecedent basis is lacking for the following limitation: in claims 1, 21 and 23, the limitation “generating a reward estimate” recites “the current estimate,” which should read ‘a current estimate’ to properly introduce the term. Accordingly, antecedent basis is insufficient for the above-identified limitation in the claims.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-21 and 23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. In determining whether the claims are subject matter eligible, the examiner applies guidance set forth under MPEP 2106.
Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes—all claims fall within one of the four statutory categories: claims 1-20 are a method/process, claim 21 is a system/machine, and claim 23 is a computer readable medium/article of manufacture. Thus, the analysis proceeds per MPEP 2106.03.
Step 2A, prong one: Does the claim recite an abstract idea, law of nature or natural phenomenon? Yes—the claims, under the broadest reasonable interpretation, recite an abstract idea. In this case, the claims fall within the “Mathematical Concepts” enumerated grouping of abstract ideas under MPEP 2106.04(a)(2)(I). Particularly, the claims are drawn to determining a plurality of principal components based on estimates. When read in light of the specification, [0001] introduces “PCA is commonly used for dimensionality reduction” and the disclosure proceeds with equations for the estimations. Estimation includes a reward estimate, a punishment/penalty estimate, a combined punishment estimate, and an updated estimate according to a difference. The difference is a subtractive function detailed per specification [0055] or [0100], with ∑-sigma for combining by summation; the first term is the reward estimate based on statistical variance, and the second term, a fraction, is the punishment estimate based on non-orthogonality (i.e. non-perpendicularity). Thus, the claim viewed as a whole simply serves to characterize an equation. The claim does not merely involve mathematics in some incidental way; mathematics is its fundamental tenet. Accordingly, the claims are drawn to mathematical concepts as the abstract idea. By way of example, a breakdown is provided immediately below using the specification, and is further apparent from the Provisional filing based on Gemp’s EigenGame at [P.4] Alg. 1 for-do loop.
[media_image1.png, greyscale: annotated breakdown of the claimed estimation equation from the specification]
[media_image2.png, greyscale: annotated Algorithm 1 for-do loop from Gemp’s EigenGame]
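For illustration of the characterized equation, the reward-minus-combined-punishment update discussed above can be sketched in code. This is an illustrative reconstruction only, patterned on the specification’s described subtractive function with ∑-summation and on Gemp’s EigenGame Algorithm 1; all function and variable names are hypothetical, and the sketch is not the applicant’s implementation.

```python
import numpy as np

def update_estimate(X, V, i, lr=0.1):
    """Sketch of the characterized equation: the update to the current
    estimate V[i] is the difference between a reward term (larger when
    V[i] captures more variance in X) and a combined punishment term
    (summed over parents V[j], j < i, larger when V[i] and V[j] are
    not orthogonal)."""
    M = X.T @ X                                   # scaled covariance of data set X
    v = V[i]
    reward = M @ v                                # variance-capturing reward term
    punishment = np.zeros_like(v)
    for j in range(i):                            # combine over parent components
        vj = V[j]
        coeff = (v @ M @ vj) / (vj @ M @ vj)      # non-orthogonality penalty weight
        punishment += coeff * (M @ vj)
    v_new = v + lr * (reward - punishment)        # update per the difference
    return v_new / np.linalg.norm(v_new)          # renormalize the estimate
```

Repeatedly applying this update drives each estimate toward a principal component, consistent with the for-do loop structure noted above.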
Step 2A, prong two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No—the judicial exception is not integrated into a practical application, as shown by consideration of the additional elements as follows:
Claim 1 has zero additional elements. Despite the title of the invention, there is decidedly nothing in the claim that is even mildly suggestive of agentic AI, or of machine learning of any sort, much less of how it would be used in a practical sense for a concrete real-world use case. Therefore, any purported allegation of practical application is entirely irrelevant, because there are no additional elements in claim 1.
Independent claims 21 and 23 include generic computer elements; these additional elements are recited at a high level of generality and amount to mere instructions to implement the abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea under MPEP 2106.05(f). Accordingly, these additional elements do not integrate the abstract idea into a practical application, and the claims remain directed to an abstract idea. As per MPEP 2106.04(a)(2), claims that require a computer may still recite an abstract idea.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No—the claims do not include additional elements that amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, claim 1 recites no additional elements; absent any additional elements, there is no basis for a finding of significantly more.
Concerning independent claims 21 and 23 that further include generic computer elements under MPEP 2106.05(f), these additional elements are particularly inadequate in satisfying the test of particular machine under MPEP 2106.05(b). The use of a general purpose computer cannot provide an inventive concept, and the courts have long found that mere use of a general purpose computer to perform the abstract idea is not sufficient to amount to significantly more. Accordingly, the identified additional elements do not amount to significantly more.
Taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually; their collective functions merely provide conventional computer implementation. The claim as a whole is no more than a drafting effort designed to monopolize the judicial exception.
For at least the foregoing reasons, the claims are not patent eligible. This rejection applies equally to independent claims 1, 21 and 23, as well as to dependent claims 2-20. Dependent claims, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea, or that they include additional elements which integrate the judicial exception into a practical application or amount to significantly more.
Dependent claim 2 discloses wherein estimates are sequentially generated in descending order of principal components. This is considered part of the abstract idea being math estimation which may comprise rank-sorting values. There are no additional elements.
Dependent claims 3-4 detail equations which further embellish the abstract idea of mathematical concepts. Again, there are no additional elements.
Dependent claim 5 discloses wherein estimates are generated in parallel across principal components. This is considered part of the abstract idea including mathematical relationships, for example dual optimization or solving simultaneously for a plurality of PCA estimates.
Dependent claim 6 discloses wherein computations are assigned to devices and the PCA estimates are broadcast to the devices. The limitation of assigning computations can be a mental process, such as judging which type of device is more suitable for performing the math. The actual devices and the broadcasting are considered additional elements which amount to mere use of computers as a tool to perform the abstract idea and/or insignificant extra-solution activity under MPEP 2106.05(f) and (g). Particularly, the devices are recited at a high level of generality and do not satisfy the particular machine test under MPEP 2106.05(b), and broadcasting is a well-understood, routine and conventional (WURC) activity under MPEP 2106.05(d)(II)(i), transmitting data over a network. Taken together and considered as a whole, the limitations amount to a distributed implementation of PCA. This is not an inventive concept in view of the evidence comprising Chen et al., “Distributed Estimation for Principal Component Analysis: an Enlarged Eigenspace Analysis” (arXiv: 2004.02336v3) at [P.11-12] Alg.1 Line 7 or Alg.2 Line 7, transmitting. As such, the additional elements are insufficient to integrate the judicial exception into a practical application or to amount to significantly more.
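By way of illustration only, the kind of distributed implementation characterized above (per-device computation followed by broadcasting of the current estimates) can be sketched as follows. The ‘devices’ are simulated sequentially in a single process, the update rule follows the reward/punishment structure discussed for claim 1, and all names are hypothetical; this is not the applicant’s implementation.

```python
import numpy as np

def broadcast_round(X, V, lr=0.05):
    """One simulated round: each 'device' i updates only its assigned
    estimate V[i] from the last broadcast snapshot, then the results
    are 'broadcast' so all devices share the same current estimates."""
    M = X.T @ X
    results = []
    for i in range(len(V)):                       # one device per component
        v = V[i]
        reward = M @ v
        punishment = sum(((v @ M @ V[j]) / (V[j] @ M @ V[j])) * (M @ V[j])
                         for j in range(i))       # penalize non-orthogonality
        v_new = v + lr * (reward - punishment)
        results.append(v_new / np.linalg.norm(v_new))
    return np.array(results)                      # shared snapshot for next round
```

Iterating such rounds yields mutually near-orthogonal estimates, with the broadcast serving only as conventional data transmission between otherwise independent computations.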
Dependent claim 7 discloses obtaining a subset of data for generating a reward estimate and a current estimate of principal components, wherein the reward estimate is larger if the current estimate captures more variance. This is considered part of the abstract idea being mathematical estimates. The variance is a statistical relationship that may be larger, e.g. per a ‘>’ greater-than comparison against thresholds, or subject to min/max constraints. There are no additional elements.
Dependent claim 8 discloses wherein, for each principal component, reward estimate is proportional to the specified product between data X and principal component v. This is considered part of the abstract idea being math which is further characterized by the specified proportionality. There are no additional elements.
Dependent claim 9 discloses wherein a direction of the punishment estimate for principal components is equal to a direction of the initial estimate. This is considered part of the abstract idea being math; the directional equality may be the positive or negative polarity of a vector representation used for estimation in principal component analysis. There are no additional elements.
Dependent claims 10-11 disclose wherein the punishment estimate for principal components is proportional to the specified product. This is considered part of the abstract idea being math, which is further characterized by the specified proportionalities. There are no additional elements.
Dependent claim 12 discloses wherein generating combined punishment estimate comprises determining a sum of punishment estimates for each principal component. This is considered part of the abstract idea being mathematical summation or additive estimation. There are no additional elements.
Dependent claim 13 discloses wherein generating an update according to a difference between estimates comprises determining gradient of a utility function using the difference between estimates, generating intermediate updates of specified proportionality, and generating the update to the current estimate using the intermediate update. This is all math and therefore considered part of the abstract idea. There are no additional elements.
Dependent claim 14 discloses an equation for generating updates to the estimate based on a step size. This is part of the abstract idea as a mathematical equation. There are no additional elements.
Dependent claim 15 discloses wherein generating an update to the current estimate comprises generating intermediate updates using different subsets, in parallel across a plurality of second devices, and further combining the intermediate updates to generate the update to the current estimate. The limitations of generating estimates are treated as part of the abstract idea being math. The devices are treated as additional elements which fall under MPEP 2106.05(f), mere use of computers to perform the abstract idea, and do not qualify as a particular machine under MPEP 2106.05(b). Evidence is given in supplement to show that an inventive concept is not present, particularly Raja and Bajwa, “Distributed Stochastic Algorithms for High-rate Streaming Principal Component Analysis” (arXiv: 2001.01017v1) at [P.10-11] Alg.1 Line 2 or Alg.2 Line 4, parallel processors. In view of the foregoing, the additional elements are insufficient to integrate the judicial exception into a practical application or to amount to significantly more.
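For illustration, the parallel-subset scheme characterized for claim 15 (intermediate updates computed from different random subsets and then combined into one update) can be sketched as follows. The parallel second devices are simulated sequentially, only a reward-style term is shown, and all names are hypothetical; this is not the applicant’s implementation.

```python
import numpy as np

def combined_update(X, v, num_devices=4, batch=50, lr=0.05, seed=0):
    """Each simulated second device draws a different random subset of
    the data, computes an intermediate update from it, and the
    intermediate updates are averaged into one combined update."""
    rng = np.random.default_rng(seed)
    intermediates = []
    for _ in range(num_devices):                  # one subset per device
        rows = rng.choice(len(X), size=batch, replace=False)
        M = X[rows].T @ X[rows]                   # estimate from the subset
        intermediates.append(M @ v)               # intermediate (reward) update
    v_new = v + lr * np.mean(intermediates, axis=0)  # combine by averaging
    return v_new / np.linalg.norm(v_new)
```

The combining step here is plain averaging of the per-subset results, underscoring that the parallelism adds only generic computer implementation to the underlying mathematics.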
Dependent claim 16 discloses subtracting estimates to generate difference and left-multiplying the difference by a factor of proportionality. This is clearly math which is the abstract idea. There are no additional elements.
Dependent claim 17 discloses generating an update to the current estimate of principal component according to the specified fraction and normalizing. This is considered part of the abstract idea being mathematical concepts. There are no additional elements.
Dependent claim 18 discloses reducing dimensionality of data using the principal components. This may be considered part of the abstract idea of math; reducing dimension can take its simplest form, such as 2 dimensions reduced to 1 dimension, as may be performed by selecting a column of table data or by singular value decomposition. There are no additional elements.
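As a simple illustration of the reduction characterized for claim 18, projecting data onto top principal components reduces its dimensionality. The following sketch uses generic PCA projection with hypothetical names and is not the applicant’s method.

```python
import numpy as np

def reduce_dimensionality(X, components):
    """Project each row of X onto the span of the given principal
    components, reducing from X.shape[1] to len(components) dims."""
    Xc = X - X.mean(axis=0)          # center the data
    return Xc @ components.T         # (n_samples, k) coordinates
```

For instance, keeping only the top component reduces 2-dimensional table data to 1 dimension, consistent with the simplest form noted above.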
Dependent claim 19 discloses further using the principal components to process data using a machine learning model. The machine learning model is an additional element which falls under MPEP 2106.05(h), generally linking the use of the judicial exception to a particular technological environment or field of use, hence “using.” The use-it/apply-it machine learning model does not meaningfully limit the claim because the model contains no particularity and may be an off-the-shelf model, broadly conveying mere association with the field of artificial intelligence in general. Accordingly, the additional elements do not integrate the judicial exception into a practical application or amount to significantly more.
Dependent claim 20 discloses wherein the data set comprises text data or images collected by a camera, in the alternative. The limitation can be considered as additional elements which amount to adding insignificant extra-solution activity to the judicial exception, falling under MPEP 2106.05(g), e.g. mere data gathering or selecting the type of data to manipulate. The limitation is recited at a high level of generality and does not meaningfully limit the claim, as it is a conventional activity identified under MPEP 2106.05(d)(II)(iv-v), retrieving information in memory or extracting data from documents. As such, the additional elements do not integrate the judicial exception into a practical application or amount to significantly more.
Taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination does not cure eligibility, particularly when viewed in light of the evidentiary references; their collective functions merely provide known computer implementation. The claims as a whole are no more than a drafting effort designed to monopolize the judicial exception.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2, 5, 7-12, 18-21 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over:
Gang and Bajwa, “A Linearly Convergent Algorithm for Distributed Principal Component Analysis,” hereinafter Gang (arXiv: 2101.01300v1), in view of
Tang and Allen, “Integrated Principal Component Analysis,” hereinafter Tang (arXiv: 1810.00832v2).
With respect to claim 1, Gang teaches:
A method of determining a plurality of principal components v of a data set X, the method comprising: {Gang [P.5] “proposed methods… Principal Component Analysis (PCA)” formulated with X that decorrelates features of y sampled from data matrix Y, see [P.9 Alg] Distributed Sanger’s Alg. (DSA)}
obtaining initial estimates for the plurality of principal components v’; and {Gang discloses at [P.3 Last2¶] “initial estimates” as [P.5 Last¶] “PCA is then estimated by finding the eigenvectors” particularly [P.9 Alg.1] Xinit initialize step}
for each particular principal component vi, generating a final estimate for the principal component vi by repeatedly performing operations comprising: {Gang [P.8 ¶2] “iterative method” implemented as for-do loop Alg.1 [P.9] providing “estimate for kth eigenvector” eigenvector is principal component per instant provisional appendix Gemp-EigenGame notation [P.3 Sect.3 ¶1]}
generating an update to the current estimate v̂i of the particular principal component vi according to a difference between the reward estimate and the combined punishment estimate. {Gang [P.11 Sect.B] “update equation for an estimate of the kth eigenvector” detailed in Eqs. 18-19, where the subtractive operand ‘ - ’ preceding the ∑ denotes the difference, the ∑ performs the combining, and the two terms are estimated by multiplying products, similarly as introduced at [P.9 Eq.9]}
However, the following limitations are further detailed by Tang:
generating a reward estimate using the data set X and the current estimate v̂i of the particular principal component vi, wherein the reward estimate is larger if the current estimate v̂i of the particular principal component vi captures more variance in the data set X; {Tang [P.9-10 Sect3.1] Unpenalized Estimators, where ‘un’-penalized corresponds to reward, as opposed to the penalized estimators of the following Section 3.2. The estimators are introduced for covariance [P.9 Sect.3 ¶1], and Tang discloses [P.10 Sect3.1 ¶3] “applying PCA to X”}
generating, for each parent principal component vj of the particular principal component vi, a respective punishment estimate, wherein the punishment estimate is larger if the current estimate v̂i of the particular principal component vi and the current estimate v̂j of the parent principal component vj are not orthogonal; {Tang [P.10-11 Sect3.2] discloses Penalized Estimators “novel type of penalty, named the multiplicative Frobenius iPCA penalty” and non-orthogonal is [P.13 ¶3] “off-diagonal” e.g. [P.33 ¶1] “penalizes the off-diagonal”}
generating a combined punishment estimate for the particular principal component vi by combining the respective punishment estimates of each parent principal component vj; and {Tang [P.10-11 Sect3.2] Eq. 12 and the subsequent equations of P.11, where the capital-sigma summation ∑Kk=1 denotes the claimed combining for the penalized/punishment estimator; “performing iPCA with the additive Frobenius penalty is equivalent to PCA”}
Tang is directed to principal component analysis and is thus analogous art. A person having ordinary skill in the art would have considered it obvious, prior to the effective filing date, to employ the teachings of Tang in combination with Gang to arrive at the invention as claimed, with the motivation that “it is important to select penalty parameters… select the λ which minimizes the error between the imputed values and the observed values” [P.15 Sect3.2.3], which describes practical considerations, and further in view of the “two main theoretical contributions – proving subspace consistency of the additive L1 correlation iPCA estimator and proving global convergence of the multiplicative Frobenius iPCA estimator” [P.4 ¶2]. Moreover, “iPCA works best when the data is normally distributed” [P.6 ¶1], e.g. data that is “jointly shared” [P.5 ¶2], which supports applicability to Gang’s method of distributed PCA.
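For clarity of the record, the claimed reward estimate, combined punishment estimate, and difference-based update of claim 1 may be sketched as follows. This is an illustrative Python sketch under the examiner’s reading of the claim language only; it is not code reproduced from Gang or Tang, and all names (`eigengame_update`, `lr`) are hypothetical:

```python
import numpy as np

def eigengame_update(X, V, i, lr=0.1):
    """One illustrative update of the i-th principal-component estimate:
    reward (variance captured) minus combined punishment (penalizing
    non-orthogonality with parent components j < i)."""
    Xv_i = X @ V[:, i]
    reward = X.T @ Xv_i                      # larger when v_i captures more variance
    punishment = np.zeros_like(reward)
    for j in range(i):                       # parents are components j < i
        Xv_j = X @ V[:, j]
        coeff = (Xv_i @ Xv_j) / (Xv_j @ Xv_j)
        punishment += coeff * (X.T @ Xv_j)   # larger when v_i, v_j are not orthogonal
    update = reward - punishment             # the claimed difference
    v_new = V[:, i] + lr * update
    return v_new / np.linalg.norm(v_new)     # normalization (cf. claim 17)
```

Iterating this update for the first component reduces to a power-iteration-like scheme, since no parents exist for i = 0.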
With respect to claim 2, the combination of Gang and Tang discloses the method of claim 1, wherein
the final estimates for the principal components are generated sequentially, in descending order of principal component {Tang discloses [P.55 ¶2] “eigenvalues are sorted in descending order”, the eigenvalues of the eigenvectors, such that the vectors are generated sequentially, e.g. [P.16 Last¶] “top d eigenvectors of the estimate ∑ are given by u1, …, ud”, similarly at [P.7 Sect2.2.1] the top m iPC integrated principal components}.
With respect to claim 5, the combination of Gang and Tang teaches the method of claim 1, wherein
the final estimates for the principal components v are generated in parallel across the principal components {Tang [P.8 Sect2.3.1 ¶2] “estimates Δk simultaneously with ∑”, e.g. [P.6 Last2¶] “estimates Δk and ∑ concurrently” Eqs. 3-4 jointly learned for respective eigen-decompositions, see also [P.11 ¶1-2] “generalization of PCA”, [P.22 ¶1-2] “generalizes PCA… concurrent estimation of ∑ and Δk”}.
With respect to claim 7, the combination of Gang and Tang teaches the method of claim 5, wherein
the method further comprises obtaining a subset Xt of a plurality of data elements in the data set X; and {Tang [P.3 Sect1.2 ¶1] “data set X ∈ Rn x p with n samples” where n corresponds to a subset, e.g. “row of X… column of X”, similarly at [P.5 Sect2.1 ¶1] “X1, …, XK, of dimensions n x p1, …, n x pK, where n is the number of samples”. See also Gang [P.8 ¶3] “mini-batch of samples”}
generating a reward estimate using the data set X and the current estimate v̂i of the particular principal component vi comprises generating a reward estimate using the subset Xt and the current estimate v̂i of the particular principal component vi, wherein the reward estimate is larger if the current estimate v̂i of the particular principal component vi captures more variance in the subset Xt. {Tang [P.3 Sect1.2 ¶1] “PC score is uj := Xvj” and “sample version of PCA plugs in an estimate Δ’ …Δ’ is the maximum likelihood estimator (MLE)”, continuing “maximize the variance”, where the MLE is detailed at [P.9-10 Sect3.1] Eqs. 9-10}
With respect to claim 8, the combination of Gang and Tang teaches the method of claim 7, wherein
for each particular principal component vi, the reward estimate is proportional to Xtv̂i or XtTXtv̂i {Tang [P.3 Sect1.2 ¶1] “PC score is uj := Xvj”}.
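The mini-batch reward proportional to XtTXtv̂i recited in claims 7-8 may be illustrated by the following sketch (hypothetical names; not drawn from either reference):

```python
import numpy as np

def minibatch_reward(Xt, v_i):
    # Reward estimate from a mini-batch Xt: proportional to Xt^T Xt v_i,
    # i.e., larger in magnitude when v_i captures more variance in the batch.
    return Xt.T @ (Xt @ v_i)
```

A direction aligned with higher batch variance yields a proportionally larger reward vector.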
With respect to claim 9, the combination of Gang and Tang teaches the method of claim 7, wherein for each particular principal component vi:
a direction of the punishment estimate corresponding to each parent principal component vj is equal to a direction of the initial estimate v̂j of the parent principal component vj. {Gang discloses [P.11 Sect.B] “Sanger’s direction and update for an estimate of the kth eigenvector is thus given as” Eq.18 such that [P.6 ¶1] “principal angles between Q and X given by either (2) or (3) will be the same” same angle yields same direction}
With respect to claim 10, the combination of Gang and Tang teaches the method of claim 7, wherein
the punishment estimate for each parent principal component vj is proportional to <Xtv̂i,Xtv̂j>v̂j
{Gang [P.20 Last¶] “generalized Rayleigh quotient” where the numerator specifies the claimed proportionality}.
With respect to claim 11, the combination of Gang and Tang teaches the method of claim 7, wherein for each particular principal component vi
the punishment estimate corresponding to each parent principal component vj is proportional to: (<Xtv̂i,Xtv̂j>/<Xtv̂j,Xtv̂j>)Xtv̂j {Gang [P.20 Last¶] “generalized Rayleigh quotient” suggests the claimed proportion}
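The punishment term of claim 11 may be illustrated as follows; this sketch assumes the conventional parent-normalized form with denominator <Xtv̂j,Xtv̂j>, uses hypothetical names, and is not code from either reference:

```python
import numpy as np

def punishment(Xt, v_i, v_j):
    # Punishment from parent j, proportional to
    # (<Xt v_i, Xt v_j> / <Xt v_j, Xt v_j>) * Xt v_j;
    # the denominator normalizes by the parent's captured variance.
    Xv_i, Xv_j = Xt @ v_i, Xt @ v_j
    return ((Xv_i @ Xv_j) / (Xv_j @ Xv_j)) * Xv_j
```

When Xtv̂i and Xtv̂j are orthogonal the punishment vanishes, consistent with the claim 1 recitation that the punishment is larger when the estimates are not orthogonal.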
With respect to claim 12, the combination of Gang and Tang teaches the method of claim 1, wherein, for each particular principal component vi:
generating a combined punishment estimate for the particular principal component vi comprises determining a sum of the respective punishment estimates of each parent principal component vj. {Tang [P.10-11 Sect3.2] “additive-type penalty” ∑-summation is detailed for iPCA}
With respect to claim 18, the combination of Gang and Tang teaches the method of claim 1, further comprising:
using the plurality of principal components v to reduce a dimensionality of data set X {Tang [P.64 ¶1] “Xk …performing dimension reduction” again [P.22 ¶3] “dimension reduction” with dimensionality specified [P.5 Sect2.1 ¶1] “X1, …, XK, of dimensions n x p1, …, n x pK” See also Gang [P.2 ¶2], [P.4 ¶3], [P.9 ¶1]}.
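The dimensionality reduction of claim 18 amounts to projecting X onto the retained components; a minimal sketch (hypothetical names, not drawn from either reference):

```python
import numpy as np

def reduce_dim(X, V):
    # Project the data set X (n x p) onto the principal components V (p x d),
    # yielding an n x d reduced-dimensionality representation.
    return X @ V
```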
With respect to claim 19, the combination of Gang and Tang teaches the method of claim 1, further comprising:
using the plurality of principal components v to process the data set X using a machine learning model {Tang [P.20 ¶2] “taking the top PCs and using them as predictors in a random forest”, a random forest being a machine learning model, the data being the ROSMAP Alzheimer’s dataset used for training in Fig. 6. See also Gang [P.7 Last2¶] “neural network training”}.
With respect to claim 20, the combination of Gang and Tang teaches the method of claim 1, in which the data set X comprises
one or more of: a set of images collected by a camera or a set of text data {Tang [P.18 Sect4.2] case study using the ROSMAP Alzheimer’s dataset, which is a known text-based dataset, [P.65 ¶1]}.
With respect to claim 21, the rejection of claim 1 is incorporated. The difference in scope is a system comprising a computer and a device storing instructions executable to perform the limitations of method claim 1. Gang discloses [P.3 ¶2] “server is required to co-ordinate among the various data centers… distributed manner when data is scattered over a network of interconnected nodes” with [P.7 ¶1] “memory and storage requirements” as well as algorithmic pseudocode [P.9 Alg.1]. The remainder of this claim is rejected for the same rationale as claim 1.
With respect to claim 23, the rejection of claim 1 is incorporated. The difference in scope is a non-transitory computer-readable storage medium storing instructions executable by a computer to perform the method of claim 1. Gang discloses [P.7 ¶1] “memory and storage requirements” providing algorithmic pseudocode [P.9 Alg.1] and [P.3 ¶2] “server is required to co-ordinate among the various data centers… distributed manner when data is scattered over a network of interconnected nodes”. The remainder of this claim is rejected for the same rationale as claim 1.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Gang and Tang in view of
Chen et al., “Distributed Estimation for Principal Component Analysis: an Enlarged Eigenspace Analysis” hereinafter Chen (arXiv: 2004.02336v3).
With respect to claim 6, the combination of Gang and Tang teaches the method of claim 5. Chen teaches wherein, for each principal component vi:
computations for generating the final estimate for the principal component v are assigned to a respective first processing device of a plurality of first processing devices {Chen at [P.10] Alg.1 Lines 1-6 “each local machine computes… Compute the local gradient information” using the eigenvector estimator w for the distributed top-eigenvector algorithm, similarly at [P.11] Alg.2 Line 3, the eigenvector being for PCA per the Title}; and
the current estimate vi of the particular principal component vi is broadcast to each other first processing device of the plurality of first processing devices at regular intervals {Chen [P.10-11] Alg.1 Line 7 or Alg.2 Line 7, where “Transmit” is broadcasting based on the eigenvector estimator w within the gradient information g; the intervals are the “iterations T” of the for-loops in Alg.1, e.g. [P.14 Last¶] “procedure is repeated L times until we obtain all the L top eigenvectors”}.
Chen is directed to distributed estimation for principal component analysis and is thus analogous art. A person having ordinary skill in the art would have considered it obvious, prior to the effective filing date, to specify devices and broadcast/transmit per Chen in combination to arrive at the invention as claimed, for motivations of off-loading computation by allocating resources in a distributed environment and/or: “The contributions of our method is two-fold. First, as compared to DC method in Fan, we completely remove the assumption on the number of machines. Our method leverages shift-and-invert preconditioning (a.k.a., Rayleigh quotient iteration) from numerical analysis together with quadratic programming and achieves a fast convergence rate… The second contribution of our paper is that we propose an enlarged eigenspace estimator that does not require any eigengap assumptions” [P.5 ¶2].
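The claimed assignment of computations to processing devices with periodic broadcast (claim 6) may be illustrated by a generic distributed power-iteration sketch; this is not Chen’s exact algorithm, and the shard handling and names are assumptions introduced for illustration only:

```python
import numpy as np

def distributed_top_eigvec(shards, iters=100):
    # Generic sketch: each shard (local machine) computes local gradient
    # information Xk^T Xk v; a coordinator combines the local results and
    # broadcasts the updated estimate back to every shard each iteration.
    p = shards[0].shape[1]
    v = np.ones(p) / np.sqrt(p)
    for _ in range(iters):
        g = sum(Xk.T @ (Xk @ v) for Xk in shards)  # combine local gradients
        v = g / np.linalg.norm(g)                  # broadcast new estimate
    return v
```

Because the per-shard products sum to the full X^T X v, the distributed iterate matches the centralized power iteration at every broadcast interval.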
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Gang and Tang in view of
Bhatia et al., “Gen-Oja: A Simple and Efficient Algorithm for Streaming Generalized Eigenvector Computation” hereinafter Bhatia.
With respect to claim 17, the combination of Gang and Tang teaches the method of claim 1. Bhatia teaches wherein, for each particular principal component vi:
generating an update to the current estimate v̂i of the particular principal component vi comprises updating the current estimate to be v̂’i and normalizing: v̂i <- v̂’i / ||v̂’i|| {Bhatia [P.4] Alg.1, for-loop last line, performing the same function, described at [P.6-7 Pg.Brk] “Gen-Oja’s update… normalization step”}
Bhatia is directed to eigenvector decomposition and streaming PCA and is thus analogous art. A person having ordinary skill in the art would have considered it obvious, prior to the effective filing date, to update and normalize with Oja’s rule per Bhatia, in combination, as applying a known technique to a known method ready for improvement to yield predictable results, given that it is a “natural extension of the popular Oja’s algorithm used for solving the streaming PCA problem” [P.4 Sect.4 ¶1], such that it “enables us to analyze the improvement” [P.7 ¶1] and provides a simple and efficient generalized computation per the Title.
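The update-then-normalize step of claim 17 may be sketched as follows (hypothetical names; illustrative of an Oja-type normalization step, not Bhatia’s exact algorithm):

```python
import numpy as np

def update_and_normalize(v, delta, lr=0.1):
    # Claim-17-style step: form the updated estimate v' = v + lr * delta,
    # then replace it by v' / ||v'|| so each iterate remains a unit vector.
    v_prime = v + lr * delta
    return v_prime / np.linalg.norm(v_prime)
```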
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Chase P Hinckley whose telephone number is (571)272-7935. The examiner can normally be reached M-F 9:00 - 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Miranda M. Huang can be reached at 571-270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHASE P. HINCKLEY/Examiner, Art Unit 2124