DETAILED ACTION
This Office Action is in response to claims filed on 08/31/2023.
Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1 and 4-8 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2 and 4-6 of co-pending Application No. 18/157,196 (hereinafter ‘196) in view of Cohen et al., Pub. No. US 2023/0185580 A1 (hereinafter Cohen), and further in view of Subramanian et al., Pub. No. US 2021/0311765 A1 (hereinafter Subramanian). Although the claims at issue are not identical, they are not patentably distinct from each other because: First, the claims of ‘196 are narrower in scope and, in view of Cohen, would be recognized by a person of ordinary skill in the art as an obvious variant.
Further, the instant application’s recitation of “containers” and “containers running the containerized application” is substantially equivalent to the ‘196 recitation of “worker nodes” as evidenced by the instant application’s claim language. In particular, the instant application’s claim 3 explicitly teaches that “the device running the containerized application via the one or more containers deployed in the operating system of the device comprises a worker node.” Accordingly, the substitution of “worker nodes” for “containers” between the ‘196 reference and the instant application is not patentably distinct.
Additionally, the instant application’s recitation of “containerized application” and “application configuration parameters” is substantially equivalent to the ‘196 recitation of “containerized workload” and “workload configuration parameters.” The terms “application” and “workload” are used interchangeably in the art to refer to software executing in a computing environment. Thus, the substitution of “workload” for “application” between the ‘196 reference and the instant application is not patentably distinct.
Further, claims 1 and 5 of the instant application, when read together, collectively teach every limitation set forth in claim 1 of the ‘196 reference. In particular, the infrastructure controller, the runtime controller, and the logical separation of the control and extension planes recited in the ‘196 reference are identically taught by the instant application’s claim 5. The remaining limitations of the ‘196 reference’s claim 1 are taught by the instant application’s claim 1. Thus, the scopes are commensurate and the claims are not patentably distinct.
Instant Application
Application 18/157,196
1. A method of automatically deploying a containerized application on an operating system of a device, the method comprising:
booting the device with the corresponding operating system;
powering on a hypervisor as a first user process running on the operating system;
powering on a container engine as a second user process running on the operating system;
booting a virtual machine (VM) running an embedded hypervisor, wherein the VM is running on the hypervisor; and
in response to booting the VM:
automatically obtaining, by the VM, one or more intended state configuration files defining a control plane configuration for providing services for at least deploying and managing the containerized application and application configuration parameters for the containerized application;
deploying, on the VM, a control plane pod configured according to the control plane configuration;
deploying one or more containers based on the control plane configuration, wherein the one or more containers are deployed on the operating system via the container engine; and
deploying the containerized application identified by the application configuration parameters on the one or more containers.
5. The method of claim 4, further comprising:
deploying, on the VM, an infrastructure controller configured to manage a state of the control plane; and
deploying, on the VM, a runtime controller for the one or more containers configured to manage a state of the one or more containers running the containerized application, wherein:
the infrastructure controller is deployed in the control plane,
the runtime controller is deployed in an extension plane, and
the control plane and the extension plane are logically separated planes.
1. A method of automatically deploying a containerized workload on a hypervisor based device, the method comprising:
booting the device running a hypervisor;
in response to booting the device:
automatically obtaining, by the device, one or more intended state configuration files from a server external to the device, the one or more intended state configuration files defining a control plane configuration for providing services for at least deploying and managing the containerized workload and workload configuration parameters for the containerized workload;
deploying, in a container plane, a control plane pod configured according to the control plane configuration;
deploying one or more nodes based on the control plane configuration;
deploying an infrastructure controller configured to manage a state of the control plane; and
deploying a runtime controller for the one or more worker nodes configured to manage a state of the one or more worker nodes, wherein:
the infrastructure controller is deployed in the control plane;
the runtime controller is deployed in an extension plane; and
the control plane and the extension plane are logically separated planes; and
deploying one or more workloads identified by the workload configuration parameters on the one or more worker nodes.
4. The method of claim 1, wherein:
the control plane pod is deployed in a control plane;
the one or more containers are deployed in a worker plane; and
the control plane and the worker plane are logically separated planes.
2. The method of claim 1, wherein:
the one or more worker nodes are deployed in a worker plane; and
the control plane and the worker plane are logically separated planes.
6. The method of claim 5, wherein the infrastructure controller is configured to manage the state of the control plane based on:
monitoring for changes to the control plane configuration; and
updating the state of the control plane based on detecting a change to the control plane configuration when monitoring for the changes to the control plane configuration.
4. The method of claim 2, wherein the infrastructure controller is configured to manage the state of the control plane based on:
monitoring changes to the control plane configuration; and
updating the state of the control plane based on detecting a change to the control plane configuration when monitoring for the changes to the control plane configuration.
7. The method of claim 5, wherein the runtime controller is configured to manage the state of the one or more containers running the containerized application based on:
monitoring for changes to the application configuration parameters; and
updating the state of the one or more containers based on detecting a change to the application configuration parameters when monitoring for the changes to the application configuration parameters.
5. The method of claim 2, wherein the runtime controller is configured to manage the state of the one or more worker nodes based on:
monitoring changes to the workload configuration parameters; and
updating the state of the one or more worker nodes based on detecting a change to the workload configuration parameters when monitoring for the changes to the workload configuration parameters.
8. The method of claim 1, wherein:
the device obtains two intended configuration files,
a first intended state configuration file of the two intended state configuration files defining the control plane configuration, and
a second intended state configuration file of the two intended state configuration files defining the application configuration parameters.
6. The method of claim 1, wherein:
the device obtains two intended configuration files:
a first intended state configuration file of the two intended state configuration files defining control plane configuration, and
a second intended state configuration file of the two intended state configuration files defining the workload configuration parameters.
‘196 does not explicitly disclose “booting the device with the corresponding operating system;
powering on a hypervisor as a first user processing running on the operating system;
powering on a container engine as a second user process running on the operating system;
booting a virtual machine (VM) running an embedded hypervisor, wherein the VM is running on the hypervisor.”
However, Cohen teaches booting the device with the corresponding operating system ([0018], The host systems 110 may be booted from external storage device 140 using the live OS image 120);
powering on a hypervisor as a first user process running on the operating system ([0017], Live OS 120 may, optionally, include a hypervisor 125, which may also be known as a virtual machine monitor (VMM), which provides a virtual operating platform for VMs 130 and manages their execution);
powering on a container engine as a second user process running on the operating system ([0016], Host system 110 may additionally include one or more virtual machines (VMs) 130, serverless functions 134, containers 136, container orchestration platform 138 and live operating system (OS) 120.);
… wherein the one or more containers are deployed on the operating system via the container engine ([0017], Host system 110 may also include a container orchestration platform 138, e.g., to manage containers 136. For example, container orchestration platform 138 may manage instantiating, scaling, networking, etc. of containers 136. In some examples, container orchestration platform 138 and container 136 may execute on the live OS 120).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Cohen with the teachings of ‘196 in order to provide a method that teaches booting a device with a corresponding operating system and powering on a hypervisor and a container engine on such operating system, where virtual machines and containers are deployed on the hypervisor and container engine, respectively. The modification would have been motivated by the desire to apply the trivial and routine operations of “powering on,” “booting,” or activating virtualized components in a computing environment, with a reasonable expectation of success.
However, Cohen does not explicitly teach booting a virtual machine (VM) running an embedded hypervisor, wherein the VM is running on the hypervisor.
Subramanian teaches booting a virtual machine (VM) running an embedded hypervisor, wherein the VM is running on the hypervisor (FIG. 2, Pod VMs 130 running on Hypervisor 150, Pod VMs running a container engine 208; [0035], Each pod VM 130 has one or more containers 206 running therein in an execution space managed by container engine 208. The lifecycle of containers 206 is managed by pod VM agent 212).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Subramanian with the teachings of Cohen and ‘196 in order to provide a method that teaches booting a virtual machine within a hypervisor executing on a corresponding operating system. The rationale above with regard to the trivial operation of “booting” applies equally here.
Allowable Subject Matter
Claims 5-7 and 14-16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The Examiner has identified the features of “deploying, on the VM, an infrastructure controller configured to manage a state of the control plane” and “deploying, on the VM, a runtime controller for one or more containers configured to manage a state of the one or more containers running the containerized application,” deployed in their respective, logically separated “control plane” and “extension plane,” as not being taught by any single reference or combination of references identified in the prior art.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-4, 8-10, 12-13, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Cohen et al. Pub. No. US 2023/0185580 A1 (hereinafter Cohen) in view of Subramanian et al. Pub. No. US 2021/0311765 A1 (hereinafter Subramanian).
With respect to claim 1, Cohen teaches a method of automatically deploying a containerized application on an operating system of a device, the method comprising:
booting the device with the corresponding operating system ([0018], The host systems 110 may be booted from external storage device 140 using the live OS image 120);
powering on a hypervisor as a first user process running on the operating system ([0017], Live OS 120 may, optionally, include a hypervisor 125, which may also be known as a virtual machine monitor (VMM), which provides a virtual operating platform for VMs 130 and manages their execution);
powering on a container engine as a second user process running on the operating system ([0016], Host system 110 may additionally include one or more virtual machines (VMs) 130, serverless functions 134, containers 136, container orchestration platform 138 and live operating system (OS) 120.);
…; and in response to booting the VM:
automatically obtaining, by the VM, one or more intended state configuration files ([0012], Upon boot of the computing node into the live environment, processing logic may use the bootstrap configuration (e.g., Ignition config file) to configure the control plane of a container platform within the live environment) defining a control plane configuration ([0018], The initial ignition in the live OS 120 may generate configuration data for a control plane of a container orchestration platform (e.g., container orchestration platform 138) including secrets, static pods, a machine configuration file, etc.) for providing services for at least deploying and managing the containerized application ([0020], the ignition included in the bootstrap component 115 may generate master configuration 224 which includes information to configure added nodes (e.g., computing devices or additional control planes) of a cluster, secrets 226 including certificates or other sensitive data for management of a cluster, static pods 228 for running services associated with the container platform control plane 222) and
…
deploying, on the VM, a control plane pod ([0026], the ignition file includes and/or instantiates one or more services to generate and operate a container control plane; [0030], the configuration information copied to the master ignition includes …) configured according to the control plane configuration ([0022], The configuration generator 324, for example, may start the control plane and apply configurations included in the ignition file; [0026], the ignition file includes and/or instantiates one or more services to generate and operate container control plane);
deploying one or more containers based on the control plane configuration ([0010], The bootstrap node then generates one or more control plane nodes configuration from the bootstrap binary. Those control plane nodes may further create compute nodes for executing workloads of the cluster), wherein the one or more containers are deployed on the operating system via the container engine ([0017], Host system 110 may also include a container orchestration platform 138, e.g., to manage containers 136. For example, container orchestration platform 138 may manage instantiating, scaling, networking, etc. of containers 136. In some examples, container orchestration platform 138 and container 136 may execute on the live OS 120).
However, Cohen does not explicitly teach obtaining application configuration parameters or deploying a containerized application identified by the application configuration parameters on one or more containers.
Subramanian teaches booting a virtual machine (VM) running an embedded hypervisor, wherein the VM is running on the hypervisor (FIG. 2, Pod VMs 130 running on Hypervisor 150, Pod VMs running a container engine 208; [0035], Each pod VM 130 has one or more containers 206 running therein in an execution space managed by container engine 208. The lifecycle of containers 206 is managed by pod VM agent 212).
application configuration parameters for the containerized application ([0030], Each supervisor namespace provides resource-constrained and authorization-constrained units of multi-tenancy. A supervisor namespace provides resource constraints, user-access constraints, and policies (e.g., storage policies, network policies, etc.) … Each supervisor namespace is expressed within orchestration control plane 115 using a namespace native to orchestration control plane 115 (e.g., a Kubernetes namespace or generally a “native namespace”), which allows users to deploy applications in supervisor cluster 101 within the scope of supervisor namespaces);
deploying the containerized application identified by the application configuration parameters on the one or more containers ([0030], the user interacts with supervisor Kubernetes master 104 to deploy applications in supervisor cluster 101 within defined supervisor namespaces.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Subramanian with the teachings of Cohen in order to provide a method that teaches nested virtualization and deployment of containerized applications associated with application configuration parameters. The motivation for combining Subramanian with Cohen is to provide a method that allows coordinated control over containerized application deployments incorporated within a control plane, provides information regarding the operational health of a software-defined data center, and enables comparison of the observed state of the system with the desired state of separate container deployments ([0017], Subramanian). Cohen and Subramanian are analogous art directed towards hypervisor-specific management and integration arrangements. Therefore, it would have been obvious for one of ordinary skill in the art to combine Subramanian with Cohen to teach the claimed invention in order to provide a unified system capable of coordinating control plane and containerized application deployments according to desired configuration parameters.
With regard to claim 3, Cohen teaches wherein the device running the containerized application via the one or more containers deployed on the operating system of the device comprises a worker node ([0002], A container orchestration engine (such as the RedHat™ OpenShift™ Container Platform) may be a platform for developing and running containerized applications and may allow applications to scale as needed. Container orchestration engines may comprise … one or more worker nodes on which pods may be scheduled; [0015], The host cluster is the data plane, which supports execution of workloads in VMs to implement various applications).
With regard to claim 4, Subramanian teaches the control plane pod ([0023], Virtualization management server 116 provisions one or more virtual servers as “master servers,” which function as management entities and execute on control nodes of the Kubernetes system…supervisor Kubernetes master 104 can be implemented as VM(s) 130/140 in hosted cluster 118; [0046], components executing in VMs 130/140 (e.g., supervisor Kubernetes masters 104 having custom components integrated with standard Kubernetes components)) is deployed in a control plane ([0027], Virtualization management server 116, network manager 112, and storage manager 110 comprise a virtual infrastructure (VI) control plane 113 for host cluster 118);
the one or more containers are deployed in a worker plane ([0015], The host cluster is the data plane, which supports execution of workloads in VMs to implement various applications); and
the control plane and the worker plane are logically separated planes ([0015], The virtualization management server, together with storage and network management systems, forms a virtual infrastructure (VI) control plane of the virtualized computing system. The host cluster is the data plane, which supports execution of workloads in VMs to implement various applications).
The rationale applied to claim 1 applies here.
With regard to claim 8, Cohen teaches the device obtains two intended state configuration files ([0012], Upon boot of the computing node into the live environment, processing logic may use the bootstrap configuration (e.g., Ignition config file) to configure the control plane of a container platform within the live environment. For example, the bootstrap configuration may include information to generate at least a master configuration and a machine configuration service based on information of the computing node; [0013], After configuring the control plane, the processing may then extract the resulting configuration from the control plane to generate a master bootstrap configuration (e.g., a master Ignition config file)),
a first intended state configuration file of the two intended state configuration files defining the control plane configuration ([0012], Upon boot of the computing node into the live environment, processing logic may use the bootstrap configuration (e.g., Ignition config file) to configure the control plane of a container platform within the live environment. For example, the bootstrap configuration may include information to generate at least a master configuration and a machine configuration service based on information of the computing node), and
However, Cohen does not explicitly teach the second intended state configuration file associated with application configuration parameters.
Subramanian teaches a second intended state configuration file of the two intended state configuration files defining the application configuration parameters ([0030], Each supervisor namespace provides resource-constrained and authorization-constrained units of multi-tenancy. A supervisor namespace provides resource constraints, user-access constraints, and policies (e.g., storage policies, network policies, etc.) … Each supervisor namespace is expressed within orchestration control plane 115 using a namespace native to orchestration control plane 115 (e.g., a Kubernetes namespace or generally a “native namespace”), which allows users to deploy applications in supervisor cluster 101 within the scope of supervisor namespaces).
The rationale applied to claim 1 applies here.
With regard to claim 9, Cohen teaches wherein the one or more intended state configuration files are automatically obtained, by the VM, from a server external to the device or a universal serial bus (USB) thumb drive ([0018], the external storage device 140 may store a live OS image 120 that includes a bootstrap component 115. The host system 110 may be booted from the external storage device 140 using the live OS image 120; [0019], The host system 110 and external storage device 140 may be coupled (e.g., may be operatively coupled, communicatively coupled, may communicate data/messages with each other) via a communication bus, a CD-ROM drive, a universal serial bus (USB), or other direct connection.).
With regard to claim 10, Cohen teaches a system comprising:
one or more processors ([0034], The example computing device 600 may include a processing device (e.g., a general purpose processor, a PLD, etc.) 602); and
at least one memory ([0034], a main memory 604, (e.g., synchronous dynamic random-access memory (DRAM), read-only memory (ROM)), a static memory 606 (e.g., flash memory and a data storage device 618), which may communicate with each other via a bus 630), the one or more processors and the at least one memory configured to:
Claim 10 is a system claim having similar limitations as claim 1. Thus, claim 10 is rejected for the same rationale as applied to claim 1.
With regard to claim 12, it is a system having similar limitations as claim 3. Thus, claim 12 is rejected for the same rationale as applied to claim 3.
With regard to claim 13, it is a system having similar limitations as claim 4. Thus, claim 13 is rejected for the same rationale as applied to claim 4.
With regard to claim 17, it is a system having similar limitations as claim 8. Thus, claim 17 is rejected for the same rationale as applied to claim 8.
With regard to claim 18, it is a system having similar limitations as claim 9. Thus, claim 18 is rejected for the same rationale as applied to claim 9.
With regard to claim 19, Cohen teaches a non-transitory computer-readable medium comprising instructions that, when executed by one or more processors of a computing system, cause the computing system to perform operations for automatically deploying a containerized application on an operating system of a device, the operations comprising ([0040], Examples described herein also relate to an apparatus for performing the operations described herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computing device selectively programmed by a computer program stored in the computing device. Such a computer program may be stored in a computer-readable non-transitory storage medium).
Claim 19 is a non-transitory computer-readable medium claim having similar limitations as claim 1. Thus, claim 19 is rejected for the same rationale as applied to claim 1.
Claims 2, 11, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Cohen in view of Subramanian as applied to claims 1, 10, and 19 above, and further in view of Shepherd et al. Pub. No. US 2021/0311762 A1 (hereinafter Shepherd).
With regard to claim 2, Cohen teaches the hypervisor runs in conjunction with the operating system of the device (FIG. 1, Live OS 120 comprising Hypervisor 125; [0017], Live OS 120 may, optionally, include a hypervisor 125).
However, Cohen and Subramanian do not explicitly teach a virtual machine embedded hypervisor running in conjunction with a guest operating system.
Shepherd teaches the embedded hypervisor runs in conjunction with a guest operating system of the VM (FIG. 6, Guest OS 602 supports the execution of Container Engine 604; [0065], FIG. 6 is a block diagram depicting a VM image 600 for implementing control plane nodes and worker nodes in a guest cluster according to an embodiment … VM image 600 comprises a guest OS 602, a container engine 604).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Shepherd with the teachings of Cohen and Subramanian in order to provide a method that teaches nested virtualization of a virtual machine executing an embedded hypervisor in conjunction with a guest operating system. The motivation for combining Shepherd with Cohen and Subramanian is to provide a method that allows hierarchical policy and configuration setting that enables and supports delegation of authority within a guest cluster ([0071], Shepherd). Cohen, Subramanian, and Shepherd are analogous art directed towards hypervisor-specific management and integration arrangements. Therefore, it would have been obvious for one of ordinary skill in the art to combine Shepherd with Cohen and Subramanian to teach the claimed invention in order to provide a nested virtualization architecture with deployment of virtual machines and containerized applications ensuring isolated configurations of virtualized execution environments.
With regard to claim 11, it is a system having similar limitations as claim 2. Thus, claim 11 is rejected for the same rationale as applied to claim 2.
With regard to claim 20, it is a non-transitory computer-readable medium claim having similar limitations as claim 2. Thus, claim 20 is rejected for the same rationale as applied to claim 2.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 9,983,891 B1 teaches:
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IVAN A CASTANEDA whose telephone number is (571)272-0465. The examiner can normally be reached Monday-Friday 9:30AM-5:30PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/I.A.C./Examiner, Art Unit 2195
/Aimee Li/Supervisory Patent Examiner, Art Unit 2195