DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement filed 5/3/2023 (citing “IPU Based Cloud Infrastructure”) fails to comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document; each non-patent literature publication or that portion which caused it to be listed; and all other information or that portion which caused it to be listed. It has been placed in the application file, but the information referred to therein has not been considered.
Claim Interpretation
Observation 1: Applicant introduces in the claims the term “life cycle management feature” without providing any additional detail in the claims. Furthermore, the specification of the instant application does not provide any details as to what this term means. Under the broadest reasonable interpretation (BRI), the Examiner interprets this term as any service provided by any entity, orchestrator, server, node, or device.
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Regarding Claim 14: it recites “A device, comprising: a networked processing unit connected to a network of at least one orchestrated edge computing environment; and a storage medium including instructions embodied thereon, wherein the instructions, which when executed by the networked processing unit, configure the networked processing unit to deploy remedial actions for failure scenarios occurring in the at least one orchestrated edge computing environment, with operations to: retrieve an orchestration configuration of a controller entity and a worker entity, wherein the controller entity is responsible for orchestration of the worker entity to provide at least one service; determine a failure scenario of the orchestration of the worker entity, based on network data received at the networked processing unit, the networked processing unit located in the network between the controller entity and the worker entity; and cause a remedial action to resolve the failure scenario and modify the orchestration configuration, wherein the remedial action includes replacing functionality of the controller entity or the worker entity with functionality at a replacement entity.”
A claim limitation invokes 112(f) if it meets the three-prong analysis: (1) the claim limitation uses the term “means” or “step” or a term as a substitute for “means” as a generic placeholder; (2) the term “means” or “step” or the generic placeholder is modified by functional language; and (3) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. MPEP 2181(I). Under the first prong, the terms “networked processing unit” and “storage medium including instructions . . . executed by the networked processing unit” are used as generic placeholders. These placeholders are coupled to the functions “configure,” “deploy remedial actions,” “retrieve,” “determine,” and “cause . . . to resolve.” Furthermore, the generic placeholders are not preceded by a structural modifier. The structure of the networked processing unit is specified in the specification as a data processing unit or infrastructure processing unit (DPU, IPU), which may comprise an FPGA or ASIC, with corresponding description in the specification; the algorithm for performing these functions is shown in the flowchart of Fig. 12 and its corresponding description. Therefore, the claim limitation invokes 112(f). However, because the specification discloses sufficient structure along with algorithms for these functions, no rejections under 112(a) or 112(b) are made.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-6, 8-19, and 21-25 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Rajagopalan et al. (US 20240031221 A1), hereinafter “Rajagopalan”.
Regarding claim 1, Rajagopalan teaches a method performed by a networked processing unit ([0048-0050] Fig. 6, networking processing unit) for deploying remedial actions of failure scenarios occurring in at least one orchestrated edge computing environment ([0034-0035] switches and nodes are failing)([0027-0028] Fig. 1, multistage switching network)(Fig. 2, CDC in the information handling system with leader and follower nodes), comprising:
identifying an orchestration configuration of a controller entity ([0003-0007] Centralized Controller or Service CDC, A singular endpoint, which requires a leader election algorithm; 2. An algorithm to dynamically place the distributed and centralized applications on appropriate switch resource based on CPU, memory, role of the switch and its proximity to the non-volatile memory express (NVMe) over Fabrics (NVMe-oF™) endpoints)([0030-0031] CDC may provide zoning services to enforce connectivity between the host and subsystem, and/or vice-versa, based on the zoning policies.)([0027-0031] The multistage switching network 110 comprises a leader switch or node 112 and a backup leader switch or node 114, both of which communicatively couple to a plurality of follower switches or nodes 116 a-116 d.) and a worker entity ([0027-0031] Fig. 1, servers 120, storage entities 130), wherein the controller entity is responsible for orchestration of the worker entity to provide at least one service ([0027] The leader nodes and the follower nodes provide configurable and dedicated communication paths for connections between endpoints, such as one or more servers 120 and one or more storages 130);
determining a failure scenario of the orchestration of the worker entity, based on network data received at the networked processing unit in a network established between the controller entity and the worker entity ([0034-0035] failing node, transferring or moving the service from the failed node to another node)([0038] Fig. 4, step 425, responsive to a node/switch failure being identified, the service(s) deployed on the failed node/switch are moved into one or more available nodes/switches that have the same role as the failed node/switch. For example, when a leader node failure is identified, the centralized service(s) deployed on the failed leader switch are moved into a backup leader node; while when a follower node failure is identified, the distributed service(s) deployed on the failed follower switch are moved into one or more other follower nodes. When one follower node, e.g., a leaf switch, deployed with multiple distributed services is failed, the multiple distributed services may be moved together into one available follower node, or distributed among multiple available nodes depending on statuses of the multiple available nodes, requirements of the multiple distributed services, proximity of the multiple available nodes to endpoint(s) to which the one leaf switch connects); and
causing a remedial action to resolve the failure scenario and modify the orchestration configuration, wherein the remedial action includes replacing functionality of the controller entity or the worker entity with functionality at a replacement entity ([0038-0042] Fig. 4, steps 425 and 430, responsive to one or more new endpoints to be added for accessing the network fabrics, one or more follower nodes (e.g., leaf switches) are added to the network fabrics for direct connections to the one or more new endpoints with new distributed services instantiated on the one or more added follower nodes. With this approach, horizontal scaling of endpoints is seamless as such a scaling only needs to add more follower nodes (e.g., leaf switches) to the network fabric.).
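The role-based failover that Rajagopalan describes (Fig. 4, step 425: services on a failed node move to a healthy node of the same role, with a backup leader standing in for a failed leader) can be sketched as a short illustrative model. All names, classes, and the distribution policy below are hypothetical illustrations, not code from the reference or the instant application.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    role: str                      # "leader", "backup-leader", or "follower"
    healthy: bool = True
    services: list = field(default_factory=list)

def fail_over(nodes, failed_name):
    """Move every service from the failed node to healthy nodes with the
    same role; a failed leader's centralized services go to a backup leader."""
    failed = next(n for n in nodes if n.name == failed_name)
    failed.healthy = False
    # Match the failed node's role, except that backup leaders cover leaders.
    wanted = "backup-leader" if failed.role == "leader" else failed.role
    targets = [n for n in nodes if n.healthy and n.role == wanted]
    if not targets:
        raise RuntimeError("no healthy node with a matching role")
    # Distribute the orphaned services round-robin across the candidates.
    for i, svc in enumerate(failed.services):
        targets[i % len(targets)].services.append(svc)
    failed.services.clear()
    return nodes
```

For example, failing a leader node that hosts a centralized "cdc" service would move that service onto the backup-leader node, mirroring the leader/backup-leader transition described in the cited paragraphs.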
Regarding claim 2, Rajagopalan teaches the method of claim 1,
Rajagopalan teaches wherein the failure scenario includes an event where at least one life cycle management feature of the at least one service provided by the worker entity is not responsive, and wherein the remedial action causes the at least one life cycle management feature to be performed at the replacement entity ([0034-0035] failing node, transferring or moving the service from the failed node to another node)([0038] Fig. 4, step 425, responsive to a node/switch failure being identified, the service(s) deployed on the failed node/switch are moved into one or more available nodes/switches that have the same role as the failed node/switch. For example, when a leader node failure is identified, the centralized service(s) deployed on the failed leader switch are moved into a backup leader node; while when a follower node failure is identified, the distributed service(s) deployed on the failed follower switch are moved into one or more other follower nodes. When one follower node, e.g., a leaf switch, deployed with multiple distributed services is failed, the multiple distributed services may be moved together into one available follower node, or distributed among multiple available nodes depending on statuses of the multiple available nodes, requirements of the multiple distributed services, proximity of the multiple available nodes to endpoint(s) to which the one leaf switch connects) ([0038-0042] Fig. 4, steps 425 and 430, responsive to one or more new endpoints to be added for accessing the network fabrics, one or more follower nodes (e.g., leaf switches) are added to the network fabrics for direct connections to the one or more new endpoints with new distributed services instantiated on the one or more added follower nodes. With this approach, horizontal scaling of endpoints is seamless as such a scaling only needs to add more follower nodes (e.g., leaf switches) to the network fabric.).
Regarding claim 3, Rajagopalan teaches the method of claim 1,
Rajagopalan teaches wherein the failure scenario includes an event where the at least one service provided by the worker entity is not responsive, and wherein the remedial action causes the at least one service to be migrated to the replacement entity ([0034-0035] failing node, transferring or moving the service from the failed node to another node)([0038] Fig. 4, step 425, responsive to a node/switch failure being identified, the service(s) deployed on the failed node/switch are moved into one or more available nodes/switches that have the same role as the failed node/switch. For example, when a leader node failure is identified, the centralized service(s) deployed on the failed leader switch are moved into a backup leader node; while when a follower node failure is identified, the distributed service(s) deployed on the failed follower switch are moved into one or more other follower nodes. When one follower node, e.g., a leaf switch, deployed with multiple distributed services is failed, the multiple distributed services may be moved together into one available follower node, or distributed among multiple available nodes depending on statuses of the multiple available nodes, requirements of the multiple distributed services, proximity of the multiple available nodes to endpoint(s) to which the one leaf switch connects) ([0038-0042] Fig. 4, steps 425 and 430, responsive to one or more new endpoints to be added for accessing the network fabrics, one or more follower nodes (e.g., leaf switches) are added to the network fabrics for direct connections to the one or more new endpoints with new distributed services instantiated on the one or more added follower nodes. With this approach, horizontal scaling of endpoints is seamless as such a scaling only needs to add more follower nodes (e.g., leaf switches) to the network fabric.).
Regarding claim 4, Rajagopalan teaches the method of claim 3,
Rajagopalan teaches wherein the remedial action further causes tracking of service requests associated with the failure scenario, and coordination of the tracked service requests among the worker entity and the replacement entity ([0038-0040] Fig. 4, steps 425, 430, monitoring and complete transitioning services to the new node).
Regarding claim 5, Rajagopalan teaches the method of claim 1,
Rajagopalan teaches wherein the failure scenario includes an event where the controller entity is not responsive, and wherein the remedial action causes the replacement entity to assume control of the orchestration of the worker entity ([0038-0040] Fig. 4, when follower or leader nodes hosting centralized services fail, those services are moved to replacement nodes).
Regarding claim 6, Rajagopalan teaches the method of claim 1,
Rajagopalan teaches wherein the failure scenario includes an event where the controller entity is not responsive ([0038-0040] Fig. 4, when follower or leader nodes hosting centralized services fail, those services are moved to replacement nodes), wherein the controller entity is additionally responsible for orchestration of entities in multiple clusters (Fig. 2, centralized services responsible for hosts 215-1 through 215-2, and for storages 220-1 through 220-m), and wherein the remedial action includes providing a notification to at least one user based on the failure scenario ([0030] CDC notification services, CDC service management for administrator).
Regarding claim 8, Rajagopalan teaches the method of claim 1,
Rajagopalan teaches wherein the at least one orchestrated edge computing environment is arranged in a single site implementation, and wherein the controller entity operates as an orchestrator for a plurality of workers including the worker entity ([0029-0034] Fig. 2, centralized services responsible for hosts 215-1 through 215-2, and for storages 220-1 through 220-m).
Regarding claim 9, Rajagopalan teaches the method of claim 1,
Rajagopalan teaches wherein the at least one orchestrated edge computing environment is arranged in a multiple site implementation, and wherein the controller entity operates as an orchestrator for multiple points of presence including the worker entity ([0029-0034] Fig. 2, centralized services responsible for hosts 215-1 through 215-2 for first site with multiple nodes, and for storages 220-1 through 220-m another site with multiple nodes).
Regarding claim 10, Rajagopalan teaches the method of claim 1,
Rajagopalan teaches wherein the at least one orchestrated edge computing environment is arranged in a hub and spoke hierarchy ([0028-0034] Fig. 1, Fig. 2, CDC services reside in the follower and leader nodes, and storages and hosts are connected to the centralized nodes {follower or leader nodes}), and wherein the controller entity operates as an orchestrator for multiple worker entities including the worker entity in the hierarchy ([0028-0034] Fig. 1, Fig. 2, CDC services reside in the follower and leader nodes, and storages and hosts are connected to the centralized nodes {follower or leader nodes}).
Regarding claim 11, Rajagopalan teaches the method of claim 1,
Rajagopalan teaches wherein the worker entity provides at least one microservice using at least one container ([0040] containerized applications, services may be deployed dynamically on network switches such that the services may be closer to endpoints).
Regarding claim 12, Rajagopalan teaches the method of claim 1,
Rajagopalan teaches wherein the networked processing unit is implemented at a network interface in a gateway or switch ([0027-0028] Fig. 1, The multistage switching network 110 comprises a leader switch or node 112 and a backup leader switch or node 114, both of which communicatively couple to a plurality of follower switches or nodes 116 a-116 d.).
Regarding claim 13, Rajagopalan teaches the method of claim 12,
Rajagopalan teaches wherein the controller entity and the worker entity each include respective processing circuitry and respective network processing units, and wherein the remedial action is performed based on operations invoked by the method at one or more of the respective network processing units ([0048-0050] Fig. 6, networking processing unit, CPU).
Regarding claim 14, claim 14 is rejected with the same reasoning as claim 1.
Regarding claim 15, claim 15 is rejected with the same reasoning as claim 2.
Regarding claim 16, claim 16 is rejected with the same reasoning as claim 3.
Regarding claim 17, claim 17 is rejected with the same reasoning as claim 4.
Regarding claim 18, claim 18 is rejected with the same reasoning as claim 5.
Regarding claim 19, claim 19 is rejected with the same reasoning as claim 6.
Regarding claim 21, Rajagopalan teaches the device of claim 14,
Rajagopalan teaches wherein the at least one orchestrated edge computing environment is arranged in one of:
a single site implementation where the controller entity operates as an orchestrator for a plurality of workers including the worker entity ([0028-0034] Fig. 1, Fig. 2, CDC services reside in the follower and leader nodes, and storages and hosts are connected to the centralized nodes {follower or leader nodes});
a multiple site implementation where the controller entity operates as an orchestrator for multiple points of presence including the worker entity ([0029-0034] Fig. 2, centralized services responsible for hosts 215-1 through 215-2 for first site with multiple nodes, and for storages 220-1 through 220-m another site with multiple nodes); or
a hub and spoke hierarchy, and wherein the controller entity operates as an orchestrator for multiple worker entities including the worker entity in the hierarchy ([0028-0034] Fig. 1, Fig. 2, CDC services reside in the follower and leader nodes, and storages and hosts are connected to the centralized nodes {follower or leader nodes}).
Regarding claim 22, claim 22 is rejected with the same reasoning as claim 13.
Regarding claim 23, claim 23 is rejected with the same reasoning as claim 1.
Regarding claim 24, Rajagopalan teaches the non-transitory machine-readable storage medium of claim 23,
Rajagopalan teaches wherein the failure scenario includes an event where:
at least one life cycle management feature of the at least one service provided by the worker entity is not responsive, and the remedial action causes the at least one life cycle management feature to be performed at the replacement entity ([0034-0035] failing node, transferring or moving the service from the failed node to another node)([0038] Fig. 4, step 425, responsive to a node/switch failure being identified, the service(s) deployed on the failed node/switch are moved into one or more available nodes/switches that have the same role as the failed node/switch. For example, when a leader node failure is identified, the centralized service(s) deployed on the failed leader switch are moved into a backup leader node; while when a follower node failure is identified, the distributed service(s) deployed on the failed follower switch are moved into one or more other follower nodes. When one follower node, e.g., a leaf switch, deployed with multiple distributed services is failed, the multiple distributed services may be moved together into one available follower node, or distributed among multiple available nodes depending on statuses of the multiple available nodes, requirements of the multiple distributed services, proximity of the multiple available nodes to endpoint(s) to which the one leaf switch connects) ([0038-0042] Fig. 4, steps 425 and 430, responsive to one or more new endpoints to be added for accessing the network fabrics, one or more follower nodes (e.g., leaf switches) are added to the network fabrics for direct connections to the one or more new endpoints with new distributed services instantiated on the one or more added follower nodes. With this approach, horizontal scaling of endpoints is seamless as such a scaling only needs to add more follower nodes (e.g., leaf switches) to the network fabric.);
the at least one service provided by the worker entity is not responsive, and the remedial action causes the at least one life cycle management feature to be performed at the replacement entity ([0034-0035] failing node, transferring or moving the service from the failed node to another node)([0038] Fig. 4, step 425, responsive to a node/switch failure being identified, the service(s) deployed on the failed node/switch are moved into one or more available nodes/switches that have the same role as the failed node/switch. For example, when a leader node failure is identified, the centralized service(s) deployed on the failed leader switch are moved into a backup leader node; while when a follower node failure is identified, the distributed service(s) deployed on the failed follower switch are moved into one or more other follower nodes. When one follower node, e.g., a leaf switch, deployed with multiple distributed services is failed, the multiple distributed services may be moved together into one available follower node, or distributed among multiple available nodes depending on statuses of the multiple available nodes, requirements of the multiple distributed services, proximity of the multiple available nodes to endpoint(s) to which the one leaf switch connects) ([0038-0042] Fig. 4, steps 425 and 430, responsive to one or more new endpoints to be added for accessing the network fabrics, one or more follower nodes (e.g., leaf switches) are added to the network fabrics for direct connections to the one or more new endpoints with new distributed services instantiated on the one or more added follower nodes. With this approach, horizontal scaling of endpoints is seamless as such a scaling only needs to add more follower nodes (e.g., leaf switches) to the network fabric.);
the controller entity is not responsive, and the remedial action causes the replacement entity to assume control of the orchestration of the worker entity ([0038-0040] Fig. 4, when follower or leader nodes hosting centralized services fail, those services are moved to replacement nodes); or
the controller entity is not responsive, and wherein the remedial action includes providing a notification to at least one user based on the failure scenario ([0038-0040] Fig. 4, when follower or leader nodes hosting centralized services fail, those services are moved to replacement nodes)([0030] CDC notification services, CDC service management for administrator).
Regarding claim 25, claim 25 is rejected with the same reasoning as claim 21.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 7 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Rajagopalan et al. (US 20240031221 A1), hereinafter “Rajagopalan”, in view of Shaw et al. (US 20220286481 A1), hereinafter “Shaw”.
Regarding claim 7, Rajagopalan teaches the method of claim 1,
Rajagopalan does not explicitly teach, but Shaw teaches
wherein the failure scenario is determined in response to interruption of a heartbeat at the controller entity or the worker entity ([0087-0088] Fig. 3, Fig. 4, The heartbeat system 314 may be used to provide periodic or aperiodic information from the endpoint 302 or other system components about system health, security, status, and so forth. A heartbeat may be encrypted or plaintext, or some combination of these, and may be communicated unidirectionally (e.g., from the endpoint 302 to the threat management facility 308) or bidirectionally (e.g., between the endpoint 302 and the server 306))([0120-0124] Fig. 6, the exemplary method 600 may include determining whether the device is one of a set of managed devices for the enterprise network. In certain implementations, determining whether the device is one of the managed devices of the set of managed devices for the enterprise network may be based on whether the device provides a heartbeat to the threat management facility, with the presence of the heartbeat generally identifying the device as one of the managed devices of the set of managed devices and, similarly, the absence of the heartbeat generally identifying the device as an unmanaged device.).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to modify Rajagopalan in view of Shaw to utilize a heartbeat protocol to detect failures or identify device status within the network, because doing so provides a way to add unmanaged devices to an enterprise network alongside managed devices, allows those unmanaged devices to be brought into compliance and monitored within the enterprise, and makes efficient use of administrator resources without introducing threats to the network (Shaw [0002]).
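The heartbeat-interruption detection discussed for claims 7 and 20 (an entity is deemed failed when its periodic heartbeat stops arriving) can be sketched as a minimal monitor. This is an illustrative model only; the class, names, and timeout value are assumptions, not Shaw's or the applicant's implementation.

```python
import time

class HeartbeatMonitor:
    """Track the last heartbeat time per entity; an entity whose heartbeat
    has not been seen within the timeout window is treated as failed."""

    def __init__(self, timeout_s=3.0):
        self.timeout_s = timeout_s
        self.last_seen = {}

    def beat(self, entity, now=None):
        # Record a heartbeat; `now` may be injected for deterministic testing.
        self.last_seen[entity] = time.monotonic() if now is None else now

    def failed(self, entity, now=None):
        # An entity never seen, or silent longer than the timeout, is failed.
        now = time.monotonic() if now is None else now
        last = self.last_seen.get(entity)
        return last is None or (now - last) > self.timeout_s
```

A detected failure (`failed(...)` returning True for a controller or worker entity) would then trigger the kind of remedial action mapped for claim 1, such as promoting a replacement entity.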
Regarding claim 20, claim 20 is rejected with the same reasoning as claim 7.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FADI HAJ SAID, whose telephone number is (571) 272-2833. The examiner can normally be reached 8:00 AM - 5:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Follansbee, can be reached at 571-272-3964. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FADI HAJ SAID/Primary Examiner, Art Unit 2444