Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claims 1-18 and 20-21 are pending. Claim 19 is canceled, and Claim 21 is newly added by Applicant.
Examiner Notes
The examiner cites particular paragraphs and/or columns and lines in the references as applied to Applicant’s claims for the convenience of the Applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner. The prompt development of a clear issue requires that the Applicant's replies meet the objections to and rejections of the claims. Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP § 2163.06.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Authorization for Internet Communications in a Patent Application
Applicant is encouraged to file an Authorization for Internet Communications in a Patent Application form (http://www.uspto.gov/sites/default/files/documents/sb0439.pdf) along with the response to this Office action to facilitate and expedite future communication between Applicant and the examiner. If the form is submitted, Applicant is requested to provide a contact email address in the signature block at the conclusion of the official reply.
Allowable Subject Matter
Claims 9-10 and 15-16 are objected to as being dependent upon a rejected base claim, but would be allowable over the prior art of record if rewritten to overcome the applicable rejection(s) and/or objection(s) set forth in this Office action and to include all of the limitations of the base claim and any intervening claims, because the examiner found that no cited prior art reference, considered in its entirety, teaches the limitations of these claims, and further found no motivation, based on the prior art, to combine the cited references to arrive at them.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claim 21 is rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
As per claim 21, line 3 recites “deployment,” but line 2 recites “automatic deployment.” What is the difference between deployment and automatic deployment? Lines 3-4 recite “are preauthorized for deployment to a plurality of cloud service providers different from the cloud computing systems on which the one or more computing workloads that are preauthorized.” How can the cloud service providers be different for all workloads that are preauthorized? Line 6 recites “automatic deployment,” while lines 7-9 recite “deployment”; again, what is the difference between deployment and automatic deployment? Lines 7-9 further recite “providers are not preauthorized for deployment to the plurality of cloud service providers different from the computing systems on which the one or more workloads not preauthorized for deployment are currently deployed.” How can the cloud service providers be different for all workloads that are not preauthorized? For purposes of examination, the examiner interprets the workloads preauthorized for automatic deployment and the workloads not preauthorized for automatic deployment as capable of residing on the same and/or different cloud service providers. Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-4, 6, 8, 11, 14, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Borthakur (US 2015/0347183) (as previously cited) in view of Sabin et al. (US 2012/0233625) (hereinafter Sabin as previously cited) in view of Odibat et al. (US 2024/0249220) (hereinafter Odibat as previously cited).
As per claim 1, Borthakur primarily teaches the invention as claimed including a computing system for deploying computing resources, the computing system comprising:
one or more processors ([0055]; [0060]; [0067] processors and memory with computer readable storage medium); and
memory storing computer-readable instructions ([0055]; [0060]; [0067] processors and memory with computer readable storage medium) that, when executed by the one or more processors, cause the computing system to:
retrieve resource data comprising deployment costs of one or more computing workloads that are currently deployed on one or more cloud computing systems ([0014] perform one or more total cost of ownership analyses of the computer systems with workloads as deployed in a customer datacenter environment and as deployed in a computing resource service provider environment);
retrieve service provider data comprising provider costs of the plurality of service providers ([0017] a cost may be provided by the computing resource service provider as a service associated with hosting workloads, and a return on investment may be calculated by comparing the total cost of ownership for operating the workload within the customer datacenter environment to costs for operating the workload within a computing resource service provider environment);
generate cloud deployment data comprising predicted deployment costs of the plurality of service providers for each of the one or more computing workloads ([0026]-[0027] and [0046] customer-estimated costs and other estimated costs associated with providers and workloads);
based on the deployment costs for one or more of the plurality of service providers meeting one or more criteria, migrate the one or more computing workloads with the predicted deployment costs that meet the one or more criteria ([0026]-[0027] migrate workloads based on evaluating estimated costs meeting various criteria); and
generate, for each of the one or more computing workloads, based on the cloud deployment data, indications of the predicted deployment costs resulting from migration/deployment of workloads to the plurality of cloud service providers ([0042] required resources may be estimated as part of determining the cost to migrate a workload and [0044]-[0046] compare costs of operating workloads in different environments to determine which environment to migrate the workloads to based on the costs).
Borthakur does not explicitly teach:
wherein the one or more computing workloads comprise one or more computing workloads that are preauthorized for automatic deployment to a plurality of cloud service providers, and one or more computing workloads that are not preauthorized for automatic deployment to the plurality of cloud service providers;
via a cloud application programming interface (API) connector;
cloud service providers;
based on inputting the resource data and the cloud service provider data into one or more machine learning models.
However, Sabin teaches:
wherein the one or more computing workloads comprise one or more computing workloads that are preauthorized for automatic deployment to a plurality of cloud service providers, and one or more computing workloads that are not preauthorized for automatic deployment to the plurality of cloud service providers ([0077]-[0083] workload deployment manager automatically acquires the workload, selects a cloud, identifies security enforced by a particular cloud or even the security designation of a particular cloud, such as public or private, ensures workloads are authorized to access selected clouds, enforces identity and policy based restrictions on the user when accessing cloud resources in connection with workloads to be deployed in the selected cloud, and finally deploys the workload based on the previous steps);
via a cloud application programming interface (API) connector ([0036] software can contact business intelligence services via web services or some other Application Programming Interface (API));
cloud service providers ([0017] and [0043] cloud services).
Sabin and Borthakur are both concerned with workload placement and execution in computing environments and are therefore combinable/modifiable. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Borthakur in view of Sabin because it would provide a way for workload coordination using an automated discovery service which identifies resources with hardware and software specific dependencies for a workload. The dependencies are made generic, and the workload and its configuration with the generic dependencies are packaged. At a target location, the packaged workload is presented and the generic dependencies automatically resolved with new hardware and software dependencies of the target location. A workload packager can restrict a total number of available resources for the machine to a reduced subset of resources based on access permissions associated with the user providing the machine identifier. The workload packager resolves workload resources used with the workload. That is, once the workload is selected, all of the sub-resources used within or needed by that workload are identified.
Borthakur in view of Sabin do not explicitly teach based on inputting the resource data and the cloud service provider data into one or more machine learning models.
However, Odibat teaches based on inputting the resource data and the cloud service provider data into one or more machine learning models (abstract input resource profile into a neural network and [0073] input cloud service type into the neural network).
Odibat and Borthakur are both concerned with resource allocations and costs in computing environments and are therefore combinable/modifiable. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Borthakur in view of Sabin in view of Odibat because it would provide a way to reduce the unpredictability of cloud resource costs, as well as improve the understanding of previously incurred cloud services costs, to allow stakeholders of a business to better understand the source of costs, as well as plan accordingly for future costs. An operational maturity score can be computed based on a variety of inputs, including a level of utilization of the resource. This can help to identify underutilized or unused resources. The score can also include identifying cost optimization opportunities such as discount plans and/or committed use discounts. If plans such as these exist and are being used, it can affect the operational maturity score. The operational maturity score is a metric that can indicate a level of potential savings that is available. Recommendations to take advantage of these savings may be provided. Anomaly-based alerts can be generated based on customer-driven and/or data-defined criteria, thereby helping users estimate cloud computing costs more quickly and accurately to enable users to better forecast their budgets, and help identify future needs based on their usage pattern, thus improving the technical field of cloud computing.
As per claim 3, Sabin teaches wherein the cloud API connector is configured to perform real-time retrieval of the resource data or the cloud service provider data ([0077] workload deployment manager can be configured to automatically acquire the workload package on behalf of the user via a profile setting associated with the user or via a registration of the workload package previously registered by the user).
As per claim 4, Borthakur further teaches wherein the meeting the one or more criteria comprises the predicted deployment costs being less than the deployment costs of the one or more computing workloads by at least a threshold amount ([0049] analyze resource costs associated with workload migrations based on system policies, thresholds, and metrics).
As per claim 6, Sabin teaches wherein the plurality of cloud service providers comprise a plurality of computing hardware resources or computing software resources on which computing processes of the one or more computing workloads are capable of being performed ([0015] hardware and software resources).
As per claim 8, Sabin teaches wherein the one or more computing workloads comprise computing processes performed on one or more physical devices of the plurality of cloud service providers or one or more virtual devices of the plurality of cloud service providers ([0015] hardware and software resources).
As per claim 11, Borthakur further teaches wherein the indications of the predicted deployment costs comprise indications of a difference between the predicted deployment costs and the deployment costs of the one or more computing workloads that are currently deployed ([0046] compare actual and estimated resource costs).
As per claim 14, it has similar limitations as claim 1 and is therefore rejected using the same rationale.
As per claim 17, it has similar limitations as claim 3 and is therefore rejected using the same rationale.
As per claim 20, it has similar limitations as claim 1 and is therefore rejected using the same rationale.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Borthakur in view of Sabin in view of Odibat in view of Sathaye et al. (US 2024/0168796) (hereinafter Sathaye as previously cited).
As per claim 2, Borthakur in view of Sabin in view of Odibat do not explicitly teach wherein the memory stores additional computer-readable instructions that, when executed by the one or more processors, further cause the computing system to: determine, based on inputting the resource data into the one or more machine learning models, the one or more computing workloads that are preauthorized for automatic migration.
However, Sathaye teaches wherein the memory stores additional computer-readable instructions that, when executed by the one or more processors, further cause the computing system to: determine, based on inputting the resource data into the one or more machine learning models, the one or more computing workloads that are preauthorized for automatic migration ([0110] train artificial intelligence model to automatically migrate workloads).
Sathaye and Borthakur are both concerned with migrating workloads in computing environments and are therefore combinable/modifiable. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Borthakur in view of Sabin in view of Odibat in view of Sathaye because it would provide a way of monitoring a workload metric corresponding to a portion of a workload executing on a computing system, and analyzing, using a first trained learning model, the workload metric with respect to another workload metric to result in a comparative workload metric indicating that migration of the first portion of the first workload to the computing system is likely to result in a defined benefit with respect to continuing to execute the first portion of a first workload on the second computing system. An example benefit may be a reduction in cost or an increase in performance of the workload. The resulting system can also generate, using the trained learning model, a migration determination to migrate the first portion of the first workload from the second computing system of the computing systems to the third computing system of the computing systems based on the comparative workload metric being determined to satisfy a comparative workload metric criterion (e.g., migration would result in a lower cost or an improved performance of the workload).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Borthakur in view of Sabin in view of Odibat in view of Ahlvin et al. (US 2024/0176637) (hereinafter Ahlvin as previously cited).
As per claim 5, Borthakur in view of Sabin in view of Odibat do not explicitly teach wherein the one or more machine learning models comprise a decision tree model configured based on historical costs of deploying the one or more computing workloads to a plurality of historical cloud service providers.
However, Ahlvin teaches wherein the one or more machine learning models comprise a decision tree model configured based on historical costs of deploying the one or more computing workloads to a plurality of historical cloud service providers ([0050] relative costs of the different allocation types offered by the cloud service provider and [0054] implement decision tree based on historical cost data).
Ahlvin and Borthakur are both concerned with resource allocations in computing environments and are therefore combinable/modifiable. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Borthakur in view of Sabin in view of Odibat in view of Ahlvin because it would enable optimal allocation plans to be determined for a customer of a cloud service provider based on multiple allocation types having different pricing models. The resulting system could track the impact of different utilization levels and/or allocation types on user satisfaction. An optimization engine can be used to ingest demand forecasts along with information on allocation types to produce an optimized allocation plan that mixes pay-as-you-go and reserved allocation types with possible overallocation to arrive at an allocation strategy that enables a hosted service to meet expected demand in the most cost-efficient manner.
Claims 7 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Borthakur in view of Sabin in view of Odibat in view of Wouhaybi et al. (US 2022/0012112) (hereinafter Wouhaybi as previously cited).
As per claim 7, Borthakur in view of Sabin in view of Odibat do not explicitly teach wherein the one or more machine learning models are configured to determine the one or more computing workloads that are preauthorized for automatic migration based on evaluation of whether the one or more computing workloads are critical workloads that require authorization for redeployment.
However, Wouhaybi teaches wherein the one or more machine learning models are configured to determine the one or more computing workloads that are preauthorized for automatic migration based on evaluation of whether the one or more computing workloads are critical workloads that require authorization for redeployment ([0120] critical workloads may be redeployed).
Wouhaybi and Borthakur are both concerned with resource allocations in computing environments and are therefore combinable/modifiable. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Borthakur in view of Sabin in view of Odibat in view of Wouhaybi because it would provide for a robustness score using the one or more reliability thresholds and the capability information of the selected portion to determine a health state of a selected portion of a network. If the robustness score of a selected portion meets the high reliability threshold, this could indicate that the selected portion of the computing infrastructure is healthy and can run at maximum capacity with any appropriate workloads, storage, and/or network needs for the selected portion. If the robustness score meets the reduced reliability threshold, this could indicate that the selected portion has an average health state but can still be reliably used for certain workloads and/or storage needs. For example, critical or long-running workloads may be placed on a different portion of the computing infrastructure that has a better (e.g., higher) health state. If the robustness score meets the minimum reliability threshold, this could indicate that the selected portion is unhealthy and needs preventive action before a failure occurs. Mitigation actions may attempt to increase the life of a node when, for example, a robustness score of a node indicates a low reliability and/or a change in reliability. For example, if the robustness score indicates that the health state of the node has been reduced, a determination can be made as to whether the health state declined faster than expected for that type of node, thus shortening the expected overall lifetime for the node. In this scenario, preventive actions may be taken to attempt to extend the life of the node. Such actions can include, for example, reducing the usage of the node.
For example, load balancing can be performed, e.g., by the orchestrator, to shift some of the workload to less utilized nodes. By decreasing the usage of the current node, the power and utilization can be decreased to a desired limit and thus, the operating temperature of the node can also be decreased. Such preventive actions may help extend the life of the node.
As per claim 18, it has similar limitations as claim 7 and is therefore rejected using the same rationale.
Claims 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Borthakur in view of Sabin in view of Odibat in view of Hari (US 2020/0089515) (as previously cited).
As per claim 12, Borthakur in view of Sabin in view of Odibat do not explicitly teach wherein the one or more machine learning models comprise a neural network configured to determine, based on the resource data and the cloud service provider data, migration costs for each of the plurality of cloud service providers.
However, Hari teaches wherein the one or more machine learning models comprise a neural network configured to determine, based on the resource data and the cloud service provider data, migration costs for each of the plurality of cloud service providers ([0095] use machine learning to generate a cost for each of the cloud provider services for migrating applications).
Hari and Borthakur are both concerned with resource allocations in computing environments and are therefore combinable/modifiable. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Borthakur in view of Sabin in view of Odibat in view of Hari because it would provide for a load balancer that improves the distribution of workloads across machine instances and containers. The load balancer can optimize resource use, maximize throughput, minimize response time, and avoid overload of any single resource used by an application. The load balancer can provide performance information to a monitoring service that includes latency measurements, throughput measurements, and traffic measurements for each of the machine instances and for each of the containers.
As per claim 13, Hari teaches wherein the predicted deployment costs comprise the migration costs for each of the plurality of cloud service providers ([0085] machine learning model can generate output vector which identifies the cost percentage difference that is predicted to occur when migrating based on the output vector).
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Borthakur in view of Sabin in view of Odibat in view of Mishra et al. (US 2024/0427644) (hereinafter Mishra) in view of Martinez et al. (US 2014/0280961) (hereinafter Martinez as provided in the Notice of References Cited dated 12/17/2025).
As per claim 21, Borthakur in view of Sabin in view of Odibat do not explicitly teach wherein the one or more computing workloads that are preauthorized for automatic deployment to a plurality of cloud service providers are preauthorized for deployment to a plurality of cloud service providers different from the cloud computing systems on which the one or more computing workloads that are preauthorized for automatic deployment are currently deployed, and wherein the one or more computing workloads that are not preauthorized for automatic deployment to the plurality of cloud service providers are not preauthorized for deployment to the plurality of cloud service providers different from the computing systems on which the one or more workloads not preauthorized for deployment are currently deployed.
However, Mishra teaches wherein the one or more computing workloads that are preauthorized for automatic deployment to a plurality of cloud service providers are preauthorized for deployment to a plurality of cloud service providers, and wherein the one or more computing workloads that are not preauthorized for automatic deployment to the plurality of cloud service providers are not preauthorized for deployment to the plurality of cloud service providers ([0019] a schedule may also be generated that considers timing, cost, and latency requirements while minimizing predicted carbon emissions, and the workload may be transferred (e.g., automatically transferred, or transferred after approval) to the recommended execution environment for processing; [0028]-[0029] previously approved scheduling times and regions are based on historical approval information (e.g., approvals by users or supervising computing processes), and recommendations may be automatically implemented upon receiving approval from a user or a supervising computer process; in additional or alternative implementations, the recommendations may be automatically implemented without receiving such approval; automatic implementation of the recommendations may include workload and scheduling updates (e.g., temporary or permanent updates) and may further include cloud provisioning, such as by provisioning and transferring workloads if a workload is shifted from one region (e.g., one cloud computing facility) to another; automatic implementation of the recommendations may additionally or alternatively include automated workload execution at scheduled times and within one or more recommended execution environments; [0035] calculate overall data transfer energy cost based on both the first and second measures of carbon intensity; [0038] approval of using the first execution environment to execute the workload; [0046] provisioning entities may include the provisioning process and approvals needed (e.g., users who need to approve particular workload assignments)).
Mishra and Borthakur are both concerned with resource allocations in computing environments and are therefore combinable/modifiable. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Borthakur in view of Sabin in view of Odibat in view of Mishra because it would offer several benefits to organizations that process workloads in cloud computing environments. By identifying a set of candidate execution environments and generating recommendation information based on carbon intensity values and other execution information, the system may enable organizations to make informed decisions about where to process their workloads. This may help organizations reduce their carbon footprint and energy costs, while also potentially improving the performance and reliability of their workloads. Additionally, or alternatively, the system may provide scheduling information and other controls to ensure that workloads are executed in compliance with governance policies and other requirements. Overall, the system provides a valuable tool for organizations to optimize their workload processing in cloud computing environments.
Borthakur in view of Sabin in view of Odibat in view of Mishra do not explicitly teach:
different from the cloud computing systems on which the one or more computing workloads that are preauthorized for automatic deployment are currently deployed;
different from the computing systems on which the one or more workloads not preauthorized for deployment are currently deployed.
However, Martinez teaches:
different from the cloud computing systems on which the one or more computing workloads that are preauthorized for automatic deployment are currently deployed;
different from the computing systems on which the one or more workloads not preauthorized for deployment are currently deployed ([0148] resource affinity may specify that all workloads within a given container must be deployed on the same resource (e.g., host, cluster, provider, or network), or conversely may not be deployed on the same resource).
Martinez and Borthakur are both concerned with resource allocations in computing environments and are therefore combinable/modifiable. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Borthakur in view of Sabin in view of Odibat in view of Mishra in view of Martinez because it would provide a way to determine the most suitable deployment of a computer workload to a cloud-computing environment, or determine the value/benefit of deploying a computer workload to a cloud-computing environment. A planning module analyzes a computer workload or workflow that may have previously been on a physical or virtual computing resource and assists in migrating or importing the computer workload or workflow to the cloud-computing environment. The planning module assesses difficulty in migrating or importing the computer workload or workflow, and the efficiency or value of using the cloud-computing environment. Deploying the cloud-computing resource comprises deploying a pre-determined set of cloud-computing resources to optimize the computer workloads' performance.
Response to Arguments
Applicant's arguments have been fully considered but they are not persuasive.
In the Remarks on pg. 10, Applicant argues that Sabin does not disclose automatic deployment. The examiner respectfully traverses. Sabin, in at least [0077]-[0083], teaches that a workload deployment manager automatically acquires the workload, selects a cloud, identifies security enforced by a particular cloud or even the security designation of a particular cloud, such as public or private, ensures workloads are authorized to access selected clouds, enforces identity and policy based restrictions on the user when accessing cloud resources in connection with workloads to be deployed in the selected cloud, and finally deploys the workload based on the previous steps. The examiner cites particular paragraphs and/or columns and lines in the references as applied to Applicant’s claims for the convenience of the Applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner. Thus, for at least the reasons provided above, Applicant’s arguments are unpersuasive and the rejections are sustained.
Citation of Relevant Prior Art
The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure:
Remy et al. (US 12,149,419) disclose benchmarking and prediction of cloud system performance.
Telang et al. (US 2022/0413932) disclose multi-cloud deployment strategy based on activity workload.
Mehrotra et al. (US 11,477,275) disclose deploying workloads in a cloud-computing environment.
Gupta et al. (US 2022/0188172) disclose cluster selection for workload deployment.
Ranjan et al. (US 2020/0304571) disclose application migrations.
Porter et al. (US 2020/0379805) disclose automated cloud-edge streaming workload distribution.
Vaddi (US 2020/0236169) discloses cloud platform or cloud provider selection.
D M et al. (US 2020/0218579) disclose selecting a cloud service provider.
Aydelott et al. (US 2018/0095778) disclose metric driven deployments to cloud service providers.
Conclusion
THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Adam Lee whose telephone number is (571) 270-3369. The examiner can normally be reached on M-TH 8AM-5PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre Vital can be reached on 571-272-4215. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published applications may be obtained from Patent Center; status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/Adam Lee/Primary Examiner, Art Unit 2198 March 11, 2026