Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-5, 7-15, and 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Claims 1-5 and 7-10 recite a method (a process) and claims 11-15 and 17-20 recite a system (a machine); the claims therefore fall into a statutory category.
Step 2A – Prong 1 (Is a Judicial Exception Recited?):
Referring to claims 1-5, 7-15, and 17-20, the claims are directed to a manner of computing a score for a cloud computing project, which under its broadest reasonable interpretation covers concepts that fall under the Mental Processes grouping of abstract ideas.
The abstract idea portion of the claims is as follows:
(Claim 1) A computer-implemented method [when executed by data processing hardware causes the data processing hardware to perform operations] comprising:
(Claim 11) [A system comprising: data processing hardware; and memory hardware in communication with the data processing hardware, the memory hardware storing instructions that when executed on the data processing hardware cause the data processing hardware to perform operations comprising]: receiving unlabeled training data comprising a plurality of cloud resource metrics from a first plurality of cloud computing projects, each cloud computing project comprising a set of computing resources and characterized by cloud resource usage metrics indicative of resource usage and activity within a cloud computing environment;
generating a tailored, [self-supervised machine learning] model by training the model exclusively on the unlabeled training data, wherein the training configures the model to generate a plurality of project clusters by identifying patterns in the unlabeled training data;
for each respective cloud computing project of a second plurality of cloud computing projects, utilizing the [self-supervised machine learning] model to perform operations comprising: assigning the respective cloud computing project to one of a plurality of project clusters based on cloud usage metrics indicative of resource usage and activity within the cloud computing environment, wherein a first project cluster corresponds to active projects and a second project cluster corresponds to inactive projects; and determining a project usage score based on a distance between the respective cloud computing project and a centroid of an assigned project cluster; and
communicating, [to a client device of the cloud computing environment], one or more remediation recommendations based on the respective project usage scores generated for the plurality of cloud computing projects.
The portions not bracketed recite the abstract idea.
Here the claims are directed to concepts capable of being performed in the human mind or via pen and paper (including an observation, evaluation, judgment, or opinion) but for the recitation of generic computer components. In the present application, the concepts are directed to determining a score for a cloud computing project. (See paragraphs 43-44.)
If a claim limitation, under its broadest reasonable interpretation, covers concepts capable of being performed in the human mind or via pen and paper, it falls under the Mental Processes grouping of abstract ideas. See MPEP 2106.04.
Step 2A-Prong 2 (Is the Exception Integrated into a Practical Application?):
The examiner views the following as the additional elements:
Data processing hardware. (See paragraph 28)
A system. (See paragraph 27)
Memory hardware. (See paragraph 28)
A client device. (See paragraphs 29 and 43)
Cloud computing environment. (See paragraph 28)
Self-supervised machine learning. (See paragraphs 43-44)
The combination of these additional elements and/or results-oriented steps amounts to no more than mere instructions to apply the exception using generic computing components. (See MPEP 2106.05(f))
Accordingly, even in combination these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.
Step 2B (Does the claim recite additional elements that amount to Significantly More than the Judicial Exception?):
As noted above, the claims as a whole merely describe a method that generally “applies” the concepts discussed in Prong 1 above. (See MPEP 2106.05(f)(II)) In particular, Applicant has recited the computing components at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. As the court stated in TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613 (Fed. Cir. 2016), merely invoking generic computing components or machinery that perform their functions in their ordinary capacity to facilitate the abstract idea amounts to mere instructions to implement the abstract idea within a computing environment and does not add significantly more to the abstract idea. Accordingly, these additional computer components do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, even when viewed as a whole, nothing in the claim adds significantly more (i.e., an inventive concept) to the abstract idea, and as a result the claim is not patent eligible.
Dependent claims 2 and 12 further define the abstract idea as identified. Additionally, the claims recite the generic client device (See paragraphs 29 and 43) for merely implementing the abstract idea using generic computing components, which does not integrate the abstract idea into a practical application or add significantly more. Therefore, claims 2 and 12 are considered to be patent ineligible.
Dependent claims 3 and 13 further define the abstract idea as identified. Additionally, the claims recite the generic client device (See paragraphs 29 and 43) and cloud computing environment (See paragraph 28) for merely implementing the abstract idea using generic computing components, which does not integrate the abstract idea into a practical application or add significantly more. Therefore, claims 3 and 13 are considered to be patent ineligible.
Dependent claims 4-5, 7-10, 14-15, and 17-20 further define the abstract idea as identified. Therefore claims 4-5, 7-10, 14-15, and 17-20 are considered to be patent ineligible.
In conclusion, the claims do not provide an inventive concept because they do not recite additional elements, or a combination of elements, that amount to significantly more than the judicial exception. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology, and the collective functions merely provide conventional computer implementation. Therefore, whether taken individually or as an ordered combination, the claims are nonetheless rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Response to Arguments
Applicant's arguments filed September 19, 2025 have been fully considered.
Applicant’s amendments and arguments on pages 8-11 of the Remarks regarding the 101 rejection are found unpersuasive. Applicant argues that the claims recite an inventive concept that is not an abstract idea but rather a patent-eligible application of a self-supervised machine learning model to solve a specific technical problem in cloud computing environments. Applicant contends that limitations such as (1) receiving unlabeled training data, (2) generating a tailored, self-supervised machine learning model by training the model exclusively on the unlabeled training data, (3) generating a plurality of project clusters, and (4) determining a project usage score based on a distance between the respective cloud computing project and a centroid of an assigned project cluster indicate that the amended claims are not directed to a mental process but rather to a specific improvement in the training and capability of a machine learning model for cloud resource management.
The Examiner respectfully disagrees. The limitations regarding receiving unlabeled training data, generating a model by training the model exclusively on the unlabeled training data, generating a plurality of project clusters, and the determination steps are all recited at a high level of generality and are similar to those in Electric Power Group, where the claims involved the collection and analysis of information. The specific type of machine learning model used amounts to mere instructions to apply the abstract idea and does not integrate the abstract idea into a practical application or add significantly more. A user may perform the step of generating a mental model trained based on the analysis of the relevant metrics to subsequently generate clusters based on identified patterns. The training as claimed is part of the abstract idea and is not viewed as constituting a specific improvement in training, but rather as training using a particular type of data (unlabeled data). Further, the asserted improvement of using unlabeled data rather than difficult-to-obtain labeled information is an improvement to the computation of the metrics, which the Examiner maintains is not a technological improvement but rather an improvement to the abstract idea. The Examiner further notes that the cloud resource management aspect amounts to communicating the score with a recommendation and nothing more, and was considered as part of the recited abstract idea.
Applicant argues the claims are similar to those in Ex parte Desjardins because the claims recite a specific improvement in training: the use of unlabeled training data via self-supervised learning, which addresses the difficulty of classifying cloud projects where labeled data (ground truth) is unavailable or expensive to obtain. According to Applicant, by utilizing a self-supervised approach that derives clusters and centroids from unlabeled data, the claims improve the machine learning model’s ability to function in data-scarce environments, which is a technical improvement to the model itself.
The Examiner respectfully disagrees, viewing the receiving and use of the unlabeled training data as part of the abstract idea as presently claimed under its broadest reasonable interpretation, as discussed above. (See also paragraphs 43-44.) The Examiner reiterates the view that the self-supervised approach amounts to mere instructions to apply the abstract idea and does not integrate the abstract idea into a practical application or add significantly more. Further, the benefits proffered by Applicant are improvements to the abstract idea of calculating the score, not improvements to machine learning technology, for the reasons discussed above.
Applicant argues the reliance on unlabeled data is a technical limitation that distinguishes the invention from standard supervised learning. According to Applicant, a human cannot perform self-supervised learning to generate clusters from unlabeled data in their mind; it is a distinct computational process that leverages the geometric structure of data vectors.
The Examiner respectfully disagrees, viewing the use of the unlabeled data as part of the abstract idea under its broadest reasonable interpretation. (See also paragraphs 43-44.) The Examiner views generating clusters from unlabeled data as capable of being performed in the human mind. Further, the Examiner finds no aspects pertaining to leveraging geometric structures of data vectors, or other technical specificity, that merit consideration as an additional element with respect to the generation of the clusters or the use of the unlabeled data as claimed.
Applicant argues that specifying that the usage score is determined based on the “distance… to a centroid” is a specific implementation that provides a technological solution to the problem of identifying “active” vs. “inactive” projects without rigid, pre-defined rules. Applicant contends this constitutes a specific technical improvement that allows the cloud computing environment to automatically identify and remediate inactive projects, thereby freeing up computing resources and improving security in a manner that was not previously possible without manual, supervised labeling. According to Applicant, the claims provide a specific, technically rooted method of training and utilizing a machine learning model using unlabeled data and self-supervision; this is exactly the type of AI innovation that Desjardins protects, and the claims are not directed to a mental process, nor are they generic applications of ML.
The Examiner respectfully disagrees, viewing the recited abstract idea itself as providing any such improvement to the management of projects in a cloud computing environment. See MPEP 2106.04(a) (It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements. See the discussion of Diamond v. Diehr, 450 U.S. 175, 187 and 191-92 (1981)). In addition, the improvement can be provided by the additional element(s) in combination with the recited judicial exception. See MPEP § 2106.04(d) (discussing Finjan, Inc. v. Blue Coat Sys., Inc., 879 F.3d 1299, 1303-04 (Fed. Cir. 2018)). Further, regarding the automation of the manual inspection, the Examiner views the mere automation of a manual process using generic computing technology as not amounting to an improvement. See Credit Acceptance Corp. v. Westlake Servs., 859 F.3d 1044, 1055 (Fed. Cir. 2017) (“Our prior cases have made clear that mere automation of manual processes using generic computers does not constitute a patentable improvement in computer technology.”). Regarding the aspect of a rule-based method applicable to multiple businesses, the Examiner views this as an improvement provided by the abstract idea rather than by the additional elements, and it therefore does not integrate the abstract idea into a practical application.
The Examiner views the use of the recited type of model (self-supervised machine learning) as mere instructions to apply the abstract idea using generic computing components. The claims are not directed to improving machine learning models, or more particularly self-supervised machine learning models, but rather to using them merely to apply the abstract idea using generic computing components. The Examiner does not view the manner of training the model as an additional element, but rather as part of the abstract idea, for the reasons discussed above.
Therefore, the Examiner has maintained the 101 rejection.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Lan et al. (US 20230111287) – directed to training a machine learning system to perform a classification task by classifying input data into one of a plurality of classes.
Rama (US 20220383187) – directed to detecting non-compliances with machine learning by performing anomaly detection on an input dataset of unlabeled observations.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL J MONAGHAN whose telephone number is (571)270-5523. The examiner can normally be reached on Monday- Friday 8:30 am - 5:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sarah Monfeldt can be reached on (571) 270-1833. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.J.M./Examiner, Art Unit 3629 /SARAH M MONFELDT/Supervisory Patent Examiner, Art Unit 3629