DETAILED ACTION
This Office action is responsive to the communication filed on 10/16/2025.
Claims 1, 14 and 21 are currently amended.
Claims 1-21 are pending for this application.
Request for Continued Examination (RCE) under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/16/2025 has been entered.
Response to Arguments
Applicant’s arguments, see Remarks at pages 8-10, filed 10/16/2025, with respect to the rejection of claims 1-21 under 35 U.S.C. 103 have been fully considered and, with regard to the amended feature of “c) generating a second proposed state of the distributed computing network by an autoscaler module according to the first proposed state and the compute operational parameter data to provision and/or deprovision virtual application instances of the application on the compute nodes selected in the first proposed state,” are persuasive. The previous rejection of claims 1-21 has therefore been withdrawn. However, upon further consideration, a new ground of rejection is made in view of SANTOS et al., “gym-hpa: Efficient Auto-Scaling via Reinforcement Learning for Complex Microservice-based Applications in Kubernetes,” NOMS 2023 - 2023 IEEE/IFIP Network Operations and Management Symposium (2023), retrieved from https://ieeexplore.ieee.org/document/10154298, in view of Mishra et al. (US 2023/0297433), and further in view of Laribi et al. (US 2013/0297802). To the extent Applicant’s arguments with respect to claims 1, 14 and 21 address teachings or matters specifically challenged in the prior rejection of record, those arguments are moot because the new ground of rejection does not rely on any reference applied in the prior rejection for any teaching or matter specifically challenged in the argument.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “latency optimizer module” and “autoscaler module” in claims 1, 14 and 21.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. Applicant’s specification describes that the application provisioner 102 comprises a latency optimizer module 106 and an auto-scaler module 108, each of which comprises one or more trained machine learning algorithms (see ¶0154).
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-21 is/are rejected under 35 U.S.C. 103 as being unpatentable over SANTOS et al., “gym-hpa: Efficient Auto-Scaling via Reinforcement Learning for Complex Microservice-based Applications in Kubernetes,” NOMS 2023 - 2023 IEEE/IFIP Network Operations and Management Symposium (2023), retrieved from https://ieeexplore.ieee.org/document/10154298, hereinafter “Santos,” in view of Mishra et al. (US 2023/0297433), hereinafter “Mishra,” and further in view of Laribi et al. (US 2013/0297802), hereinafter “Laribi.” Santos was cited in Applicant’s IDS filed 03/21/2024.
With respect to claim 1, Santos discloses a computer-implemented method of provisioning resources in a distributed compute network comprising one or more routing nodes (page-3, Col-2, Section: III Application deployment in Kubernetes, teaches providing several software components for the automatic life cycle management of containerized applications across a set of cluster nodes (i.e. routing nodes). K8s follows the known master-slave model), and wherein the method is performed by at least one hardware processor and comprising:
a) receiving, by a system manager (page-4, Col-1, ll. 3-4, controller manager as system manager), routing operational parameter data from a plurality of routing nodes (Page-2, Col-2, ll. 36-44, Section: Time series analysis, teaches workload forecasting is applied, and then scaling actions are triggered based on the predicted workload (i.e. routing operational parameter data)… Workload prediction models are typically applied to perform adequate scaling actions, leading to resource efficiency and minimal QoS impact; pages 3-4, Col-2 and Col-1, Section: III Application deployment in Kubernetes, teaches providing several software components for the automatic life cycle management of containerized applications across a set of cluster nodes (i.e. routing nodes). K8s follows the known master-slave model, where at least one master node manages containers across multiple worker nodes (slaves). Master nodes typically have more computing power to operate all software components (e.g., API server, Kubelet, Controller Manager) responsible for handling the complete life cycle workflow of containerized applications; page-4, Col-1, ll. 4-8, teaches the application’s latency (Ψa), researchers can specify which measurement or metric to consider. Section VI-A describes the two evaluated applications and the corresponding measurement considered as the application’s latency (i.e. routing operational parameter data)) and compute operational parameter data from the plurality of compute nodes for a current state of the distributed compute network (Page-2, Col-2, ll. 32-35, Section: Queuing model based, teaches these models depend on known parameters (e.g., the arrival rate of service requests), meaning that model and metrics recalculation is needed for dynamic workloads (i.e. compute operational parameter data); page-4, Col-1, ll. 4-8, teaches Section VI-A describes the two evaluated applications (i.e. compute nodes) and the corresponding measurement considered as the application’s latency; page-6, Col-2, ll. 3-6, teaches the latency for RC (Ψa1) corresponds to the calculation of the average response time of the Redis server by collecting the total query duration and the total query response time during the last five minutes as shown in (7));
b) generating a first proposed state of the distributed compute network by utilizing the routing operational parameter data in a latency optimizer module to simulate selection and/or deselection of one or more compute nodes for provisioning of virtual application instances of the application (Page-2, Col-2, ll. 36-44, Section: Time series analysis, teaches workload forecasting is applied, and then scaling actions are triggered based on the predicted workload (i.e. routing operational parameter data)… Workload prediction models are typically applied to perform adequate scaling actions, leading to resource efficiency and minimal QoS impact; page-4, Col-1, ll. 4-8, teaches the application’s latency (Ψa), researchers can specify which measurement or metric to consider. Section VI-A describes the two evaluated applications and the corresponding measurement considered as the application’s latency (i.e. routing operational parameter data); page-6, Col-1, ll. 8-14, teaches the latency function (i.e. first proposed state) leads the agent (i.e. latency optimizer module) to find proper allocation schemes that reduce the overall application latency. The goal is to reach a null reward since the agent is penalized based on the latency. A threshold (τa) teaches the agent that the latency should be lower than the threshold since the threshold corresponds to the penalty given to the agent in case maximum and minimum replication factors are not respected; page-6, Col-1, Section E. Agents, teaches value-based algorithms learn to select actions based on the predicted value of the input state or action (i.e., critic)).
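For context, the latency-driven reward described in the cited passages of Santos can be summarized roughly as follows. This is an illustrative paraphrase by the examiner using the symbols already referenced above (Ψa for the application’s latency, τa for the threshold, αmin and αmax for the replication bounds), not a verbatim reproduction of Santos’s equations:

    reward ≈ −Ψa when the replication bounds αmin ≤ α ≤ αmax are respected (the agent is penalized in proportion to the application latency, so a null reward is the optimum);
    a penalty is applied instead when a proposed action would violate αmin or αmax (described in the passage cited above as corresponding to the threshold τa, and elsewhere in Santos as a penalty of −1).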
However, Santos remains silent on c) generating a second proposed state of the distributed computing network by an autoscaler module according to the first proposed state and the compute operational parameter data to provision and/or deprovision virtual application instances of the application on the compute nodes selected in the first proposed state.
Mishra discloses c) generating a second proposed state of the distributed computing network by an autoscaler module according to the first proposed state and the compute operational parameter data to provision and/or deprovision virtual application instances of the application on the compute nodes selected in the first proposed state (¶0014, teaches the distributed network may include an autoscaler configured to manage the processing resources of a service (e.g., an application, one or more processes or functions of an application, one or more processes, etc.). The autoscaler may instantiate a set of partitions within a processing node (e.g., such as a processing device, server, virtual machine, etc.) of the distributed network; ¶0050, teaches resource allocator 216 may include an autoscaler that may automatically (or subject to user intervention) allocate additional processing resources to the service upon detecting a particular processing load; ¶0070, teaches the new partitions can be instantiated within a virtual machine that already has instantiated partitions allocated to the application (e.g., such as VM 1 104), within a new virtual machine that will manage one or more partitions (e.g., such as VM n 114), or by provisioning one or more processing devices (e.g., with virtual environments and/or partitions); ¶0075, teaches an autoscaler may receive historical resource allocation data associated with a particular service deployed within a distributed network. The autoscaler may be a component of a processing node (e.g., a processing device, a server such as cloud resource server 212 of FIG. 2, a virtual machine executing within a processing device, or the like that is configured to provide access to applications and/or services) operating within the distributed network. The autoscaler may execute one or more operations configured to manage resources and/or services of the distributed network; see also ¶0044).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Santos’s system, in which dynamic workloads (i.e. compute operational parameter data) are monitored and workload forecasting is applied, with Mishra’s teaching of generating a second proposed state of the distributed computing network by an autoscaler module according to the first proposed state and the compute operational parameter data to provision and/or deprovision virtual application instances of the application on the compute nodes selected in the first proposed state, in order to ensure efficiency, scalability and cost control based on current performance and load conditions (Mishra, ¶0057-¶0059 and ¶0075).
However, Santos in view of Mishra remains silent on a plurality of compute nodes each comprising one or more physical compute servers configured to host one or more virtual application instances of an application thereon, wherein the plurality of compute nodes are geographically-spaced from one another across one or more geographical regions, and on d) implementing the second proposed state on the distributed compute network by provisioning and/or deprovisioning one or more virtual application instances of the application on one or more compute nodes on the distributed computing network to define a new state of the distributed computing network.
Laribi discloses a plurality of compute nodes each comprising one or more physical compute servers configured to host one or more virtual application instances of an application thereon (¶0006, teaches one or more instances of the plurality of instances of the application as a virtual or physical machine executing on one or more servers (i.e. plurality of compute nodes) of the cloud service provider), wherein the plurality of compute nodes are geographically-spaced from one another across one or more geographical regions (¶0048, teaches the system may include multiple, logically-grouped servers 106 (i.e. compute nodes). In these embodiments, the logical group of servers may be referred to as a server farm 38. In some of these embodiments, the servers 106 may be geographically dispersed),
and d) implementing the second proposed state on the distributed compute network by provisioning and/or deprovisioning one or more virtual application instances of the application on one or more compute nodes on the distributed computing network to define a new state of the distributed computing network (¶0169, teaches load (i.e. second proposed state) or network traffic can be distributed among a first core 505A, a second core 505B, a third core 505C, a fourth core 505D, a fifth core 505E, a sixth core 505F, a seventh core 505G, and so on such that distribution is across all or two or more of the n cores 505N (hereinafter referred to collectively as cores 505); ¶0244, teaches the AAPE may monitor status and health of a cloud application or cloud service 710 and may automatically generate requests to provision or de-provision additional virtual machines or start or shutdown additional physical machines 718, as well as providing configuration details to newly provisioned machines).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined system of Santos and Mishra, in which the workload and the deployed pods are updated, with Laribi’s teaching of a plurality of compute nodes that are geographically-spaced from one another across one or more geographical regions and of implementing the second proposed state on the distributed compute network by provisioning and/or deprovisioning one or more virtual application instances of the application on one or more compute nodes on the distributed computing network to define a new state of the distributed computing network, in order to allow the system to balance loads more effectively, enhance the distributed computing environment by deploying new virtual instances of an application across multiple nodes, and achieve specific objectives such as improving performance, increasing capacity or introducing new features into the networked environment (Laribi, ¶0055, ¶0231, ¶0254).
Claim 14 is a system claim corresponding to the method of claim 1. Therefore, claim 14 is rejected on the same grounds as claim 1.
Claim 21 is a non-transitory computer-readable storage medium claim corresponding to the method of claim 1. Therefore, claim 21 is rejected on the same grounds as claim 1.
With respect to claims 2 and 15, Santos in view of Mishra, and further in view of Laribi discloses a computer-implemented method according to claim 1, wherein the first model comprises a trained machine learning model (Santos, page-2, Col-1, ll. 9-12, teaches traditional approaches are mainly focused on threshold-based or Machine Learning (ML)-based methods focused on resource efficiency).
With respect to claim 3, Santos in view of Mishra, and further in view of Laribi discloses a computer-implemented method according to claim 2, wherein the first model is trained using reinforcement-learning (Santos, page-4, Col-1, Section A. System overview, teaches the cluster mode corresponds to reinforcement learning (RL) training on a real cluster environment through the Deployment component).
With respect to claims 4 and 15, Santos in view of Mishra, and further in view of Laribi discloses a computer-implemented method according to claim 1, wherein the second model comprises a trained machine learning model (Santos, page-2, Col-1, ll. 9-12, teaches traditional approaches are mainly focused on threshold-based or Machine Learning (ML)-based methods focused on resource efficiency).
With respect to claim 5, Santos in view of Mishra, and further in view of Laribi discloses a computer-implemented method according to claim 4, wherein the second model is trained using reinforcement-learning (Santos, page-4, Col-1, Section A. System overview, teaches the cluster mode corresponds to reinforcement learning (RL) training on a real cluster environment through the Deployment component).
With respect to claims 6 and 16, Santos in view of Mishra, and further in view of Laribi discloses a computer-implemented method according to claim 1, wherein the routing operational parameter data comprises measured and/or predicted values of a local latency between a respective routing node and any available compute nodes accessible by the respective routing node (Santos, Page-2, Col-2, ll. 36-44, teaches workload forecasting is applied, and then scaling actions are triggered based on the predicted workload (i.e. routing operational parameter data)… Workload prediction models are typically applied to perform adequate scaling actions, leading to resource efficiency and minimal QoS impact; page-6, Col-1, Section E. Agents, teaches value-based algorithms learn to select actions based on the predicted value of the input state or action (i.e., critic)).
With respect to claims 7 and 17, Santos in view of Mishra, and further in view of Laribi discloses a computer-implemented method according to claim 6, wherein step b) further comprises:
e) generating a first proposed state of the distributed compute network having a global latency for the application which meets or exceeds a global latency threshold (Santos, Page-2, Col-2, ll. 36-44, teaches workload forecasting is applied, and then scaling actions are triggered based on the predicted workload (i.e. routing operational parameter data)… Workload prediction models are typically applied to perform adequate scaling actions, leading to resource efficiency and minimal QoS impact; page-6, Col-1, ll. 8-14, teaches the latency function (i.e. first proposed state) leads the agent to find proper allocation schemes that reduce the overall application latency. The goal is to reach a null reward since the agent is penalized based on the latency. A threshold (τa) teaches the agent that the latency should be lower than the threshold since the threshold corresponds to the penalty given to the agent in case maximum and minimum replication factors are not respected; page-6, Col-1, Section E. Agents, teaches value-based algorithms learn to select actions based on the predicted value of the input state or action (i.e., critic); Laribi, ¶0004, teaches responsive to one or more metrics exceeding a threshold, the appliance may automatically provision or start, or deprovision or shut down, one or more virtual machines or physical machines from a service provider such as a cloud service provider, and may provide configuration information to the provisioned or started machines as needed), the global latency of the application comprising a function of the local latencies of any provisioned virtual application instances (Santos, page-5, Col-1, ll. 6-8, teaches the two evaluated applications and the corresponding measurement considered as the application’s latency).
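The recited “function of the local latencies” is not limited to a particular form. For illustration only (examiner’s example, not drawn from Santos, Mishra or Laribi), the global latency Ψ_global of the application could, for example, be computed as the maximum or as a traffic-weighted average of the local latencies Ψ_i of the provisioned virtual application instances:

    Ψ_global = max(Ψ_1, …, Ψ_n)   or   Ψ_global = Σ_i (w_i × Ψ_i) / Σ_i w_i,

where w_i is, for example, the request volume routed to instance i.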
With respect to claim 8, Santos in view of Mishra, and further in view of Laribi discloses a computer-implemented method according to claim 7, wherein step b) further comprises:
g) proposing one or more actions to the current state to generate the first proposed state (Santos, page-2, Col-2, Section: Time series analysis, teaches Workload prediction models are typically applied to perform adequate scaling actions…the proposed approach finds appropriate scaling actions depending on the current status of multiple microservices), the one or more actions comprising selecting and/or deselecting one or more compute nodes for provisioning of virtual application instances of the application (Santos, Page-5, Col-1, Section A. Reinforcement learning (RL) based auto scaling, teaches the reward relates to the new observation, describing the environment state after applying the action selected by the agent. For instance, in auto-scaling, the reward is positive if the agent’s action increases the application performance (e.g., high resource usage, low response time));
h) determining whether the global latency of the first proposed state meets or exceeds the latency threshold and, if so determined, proceeding to step c) (Santos, page-6, Col-1, ll. 8-14, teaches the latency function (i.e. first proposed state) leads the agent to find proper allocation schemes that reduce the overall application latency. The goal is to reach a null reward since the agent is penalized based on the latency. A threshold (τa) teaches the agent that the latency should be lower than the threshold since the threshold corresponds to the penalty given to the agent in case maximum and minimum replication factors are not respected).
With respect to claim 9, Santos in view of Mishra, and further in view of Laribi discloses a computer-implemented method according to claim 8, wherein if, at step h), the global latency of the first proposed state does not meet or exceed the latency threshold (Santos, page-2, Col-2, Section: Queuing model based, teaches all queuing-based methods are applicable for multi-tier applications; however, they mainly rely on stationary systems where the demand does not change over time; page-4, Col-1, ll. 9-12, teaches K8s establishes a connection of identical pods via a Deployment [36], but it does not natively support the aggregation of different pods into a particular application), the method further comprises:
i) iteratively repeating steps g) and h) until the latency threshold is met (Santos, page-2, Col-2, Section: Time series analysis, teaches most methods [21]–[23] rely on predefined thresholds since actions are triggered if the predicted metric goes beyond a certain threshold. Workload prediction models are typically applied to perform adequate scaling actions, leading to resource efficiency and minimal QoS impact; page-5, Col-1, Section: Reinforcement learning (RL) based auto scaling, teaches the agent learns to perform the given task by repeated interactions with the environment and determining the inherent synergies between states, actions, and subsequent rewards).
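For clarity regarding the claimed flow as mapped above, the following is a minimal illustrative sketch of the iterate-until-threshold loop recited in steps g), h) and i). All function and variable names are hypothetical examples supplied by the examiner for illustration only, and this sketch is not asserted to be disclosed verbatim by Santos, Mishra or Laribi:

    def generate_first_proposed_state(current_state, propose_actions,
                                      meets_latency_threshold, max_iterations=100):
        # step g): propose select/deselect actions against the current state
        state = propose_actions(current_state)
        # steps h)-i): check the global latency condition and repeat until satisfied
        iterations = 0
        while not meets_latency_threshold(state) and iterations < max_iterations:
            state = propose_actions(state)  # step g) repeated with further actions
            iterations += 1
        return state  # first proposed state, handed to step c)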
With respect to claims 10 and 19, Santos in view of Mishra, and further in view of Laribi discloses a computer-implemented method according to claim 1, wherein the compute operational parameter data comprises resource load data relating to the computational load on one or more virtual application instances running on one or more compute nodes of the distributed compute network (Santos, Page-2, Col-2, ll. 32-35, teaches these models depend on known parameters (e.g., the arrival rate of service requests), meaning that model and metrics recalculation is needed for dynamic workloads (i.e. compute operational parameter data); Page-4, Col-1, Section: Kubernetes Integration, teaches container abstraction provides less isolation than Virtual Machines (VMs), and if several containers are running on the same cluster node, the sharing of physical resources might lead to a performance degradation known as resource contention; page-6, Col-1, ll. 3-7, teaches if the agent attempts to deploy or terminate pod instances that would violate the maximum and minimum replication factor, the agent receives a penalty of -1 so that the agent learns what actions are possible based on the current number of deployed pods).
With respect to claims 11 and 20, Santos in view of Mishra, and further in view of Laribi discloses a computer-implemented method according to claim 10, wherein step c) further comprises: j) generating a second proposed state of the distributed compute network in which one or more virtual application instances have a computational load within a target range (Santos, Page-5, Col-1, ll. 1-6, teaches 75% is the target resource usage since optimal usage (i.e., 100% resource utilization) might lead to performance degradation if the demand suddenly increases or containers request further resources. Concerning the application’s latency (Ψa), researchers can specify which measurement or metric to consider).
With respect to claim 12, Santos in view of Mishra, and further in view of Laribi discloses a computer-implemented method according to claim 11, wherein the target range has an upper threshold and a lower threshold (Santos, page-4, Col-2, Section: Kubernetes integration, teaches KHPA currently scales the number of pods in a deployment based on the resource usage of a given metric and on minimum and maximum replication thresholds (αmax and αmin); page-6, Col-1, ll. 8-14, teaches the latency function (i.e. first proposed state) leads the agent to find proper allocation schemes that reduce the overall application latency. The goal is to reach a null reward since the agent is penalized based on the latency. A threshold (τa) teaches the agent that the latency should be lower than the threshold since the threshold corresponds to the penalty given to the agent in case maximum and minimum replication factors are not respected).
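For background on the Kubernetes Horizontal Pod Autoscaler (KHPA) behavior referenced above, the generally documented Kubernetes scaling rule takes roughly the following form; this is provided by the examiner for context only, and the precise formulation relied upon is that of Santos:

    desiredReplicas = ceil( currentReplicas × currentMetricValue / targetMetricValue ),

with the result clamped to the replication bounds [αmin, αmax] noted in the cited passage.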
With respect to claims 13 and 18, Santos in view of Mishra, and further in view of Laribi discloses a computer-implemented method according to claim 11, wherein step c) further comprises:
k) proposing one or more actions to the first proposed state to generate the second proposed state, the one or more actions comprising provisioning and/or deprovisioning virtual application instances of the application on one or more compute nodes (Santos, Page-2, Col-2, ll. 36-44, teaches workload (i.e. second proposed state) forecasting is applied, and then scaling actions are triggered based on the predicted workload … Workload prediction models are typically applied to perform adequate scaling actions, leading to resource efficiency and minimal QoS impact; page-2, Col-2, Section: Time series analysis, teaches Workload prediction models are typically applied to perform adequate scaling actions…the proposed approach finds appropriate scaling actions depending on the current status of multiple microservices; page-6, Col-1, ll. 8-14, teaches the latency function (i.e. first proposed state) leads the agent to find proper allocation schemes that reduce the overall application latency. The goal is to reach a null reward since the agent is penalized based on the latency. A threshold (τa) teaches the agent that the latency should be lower than the threshold since the threshold corresponds to the penalty given to the agent in case maximum and minimum replication factors are not respected); and
l) determining whether one or more virtual application instances have a computational load within the target range and, if so determined, proceeding to step d) (Santos, Page-5, Col-1, ll. 1-6, teaches 75% is the target resource usage since optimal usage (i.e., 100% resource utilization) might lead to performance degradation if the demand suddenly increases or containers request further resources. Concerning the application’s latency (Ψa), researchers can specify which measurement or metric to consider).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 2013/0010610 A1 (abstract) teaches predicting network link failures and creating a change in the network before the failure actually happens by instigating policy-based adjustment of routing parameters. In particular, an embodiment of the invention operates in two phases. In the first phase, the historical operation of a network is observed (B.4.2) to determine observed relationships between link or cluster failures that have occurred and subsequent failures of different links or clusters. From these observed relationships, failure rules can be derived (B.4.4) that are then applied to control routing in the network during a second, control, phase. That is, in the second, control, phase, the derived failure rules are applied such that if a link or cluster failure occurs, then from the rules a prior knowledge of what additional links may fail in the next time period is obtained, and remedial action can then be taken, such as routing data traffic away from the links that are predicted to fail (B.4.6).
US 7,961,638 B2 (abstract) teaches a method for routing data over a telecommunications carrier network including at least two switching devices defining at least one physical link, the method including: defining in advance a plurality of traffic flows, each associated with specific routing through at least one of the physical links in the network; configuring a routing system according to the traffic flows; collecting data of traffic transmitted over at least some of the traffic flows; calculating traffic statistics, from the collected data, for each of the flows in the network; and re-calculating the routing system configuration utilizing the calculated traffic statistics.
US 2022/0114033 A1 (abstract) teaches an apparatus that includes one or more processors to determine dependencies between sets of tasks of a plurality of tasks to be executed by a plurality of cores of a network; determine latency deadlines of respective ones of the plurality of tasks; and determine an allocation of individual ones of the plurality of tasks among the plurality of cores for execution based on the dependencies and based on the latency deadlines.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GOLAM MAHMUD whose telephone number is (571) 270-0385. The examiner can normally be reached Monday-Friday, 8:00 am-5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Umar Cheema, can be reached at (571) 270-3037. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GOLAM MAHMUD/
Examiner, Art Unit 2458

/UMAR CHEEMA/
Supervisory Patent Examiner, Art Unit 2458