Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This non-final action is in response to the RCE filed on 09/18/2025. In this RCE, claims 1, 8, and 15 are amended. Claims 1-20 are pending, with claims 1, 8, and 15 being independent.
Priority
This application is a continuation of and claims priority to U.S. Patent Application No. 17/107,394 filed November 30, 2020.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/18/2025 has been entered.
Response to Arguments
Claim Rejections Under 35 U.S.C. §103
Applicant’s arguments with respect to claim 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4, 8-9, 11, 15-16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Iwashina et al. (US 2016/0328258, Pub. Date: Nov. 10, 2016), in view of Toeroe et al. (US 2021/0266235, Provisional filed on Aug. 8, 2018), in view of Krishnan et al. (US 2018/0157511, Pub. Date: Jun. 7, 2018), in view of Lin et al. (US 2021/0208983, PCT Filed: Jun. 29, 2018), in view of Pohlack (US 11,017,417, Filed: Jun. 25, 2014).
As per claim 1, Iwashina discloses a cloud computing orchestrator (Iwashina fig. 15 & 28, orchestrator 20 and para. [0063], The orchestrator 20, the VNFM 30, and the VIM 40 may be implemented by separate physical server apparatuses), comprising:
a processor (Iwashina fig. 3, CPU 101);
a memory (Iwashina fig. 3, auxiliary storage apparatus 105) comprising instructions that, when executed by the processor, cause the processor to perform operations comprising
generating a virtual network function homing request associated with a virtual network function (Iwashina fig. 15&28, generating and sending NECESSARY RESOURCE INQUIRY at S005 and para. [0082], the orchestrator 20 dispatches a VNF identification number "VNF 001" as a number for uniquely identifying the generated VNF 70 to obtain necessary resource information of the specified "VNF 10," sets the performance condition "100" and the service ID received in the signal (1) and "VNF 10" derived from "Service 1" in the VNF type),
sending the virtual network function homing request to a conductor service provided by a central placement decision system (Iwashina fig. 15&28, the orchestrator sends NECESSARY RESOURCE INQUIRY to [a service of] "VNFM1" [a central placement decision system] through a signal (2) (S005)), and
receiving, from the conductor service, a response identifying a target site selected by the conductor service (Iwashina fig. 28, a second pattern: [the service of] VNFM returns the selection of the VIM (data center) [a target site] to orchestrator and para. [0189], a pattern in which the selection of the VIM (data center) 40 [a target site] is performed by the VNFM 30 in place of the orchestrator 20 (a second pattern from the left among the patterns of FIG. 28) may be provided; Iwashina para. [0188], the term "selection" refers to the extraction of a resource area (VIM 40/data center) serving as a candidate) from a plurality of sites (Iwashina para. [0085], Candidates "DC 2," "DC 3," "DC 4," and "DC 5" can be obtained as the data center in which "VNF 10" can be arranged from Table T4 of FIG. 5(c) pre-registered in the orchestrator 20), wherein the plurality of sites are configured for virtual network function placement within a cloud computing environment (Iwashina fig. 5(c) and para. [0085], Candidates "DC 2," "DC 3," "DC 4," and "DC 5" can be obtained as the data center in which "VNF 10" can be arranged from Table T4 of FIG. 5(c) pre-registered in the orchestrator 20), wherein the plurality of sites configured for virtual network function placement within the cloud computing environment comprises at least a first site, a second site, and a third site (Iwashina fig. 5(c), DC1, DC2 and DC3), and wherein the target site is selected by the conductor service (Iwashina para. [0189], a pattern in which the selection of the VIM (data center) 40 [a target site] is performed by [the service of] the VNFM 30 in place of the orchestrator 20 (a second pattern from the left among the patterns of FIG. 28) may be provided) based upon the target site having a capacity sufficient to accommodate the virtual network function homing request (Iwashina para. [0087], it is determined that "VNF 10" requires "function 1" and "function 2" from Table T20 and data centers capable of providing "function 1" and "function 2" are "DC 2" and "DC 3" from Table T5 of FIG. 4(b); Iwashina para. [0089], It can be seen that DCs serving as the candidates from Table T10 of FIG. 7(a) are "DC 2" and "DC 3" and an NW bandwidth between "DC 1" and "DC 2" is greater than an NW bandwidth between "DC 1" and "DC 3" from Table T5 of FIG. 5(e), and "DC 2" has high priority; Iwashina para. [0188], the term "selection" refers to the extraction of a resource area (VIM 40/data center) serving as a candidate).
Iwashina does not explicitly disclose:
wherein the plurality of sites are configured in accordance with a local and geo-redundancy model, wherein the local and geo-redundancy model for virtual network function placement within the cloud computing environment comprises a cloud utilization value of at least a minimum percentage with a site availability value of at least 99.999% and a virtual machine availability of at least 99.999%, wherein the plurality of sites configured in accordance with the local and geo-redundancy model, wherein the first site comprises a configuration including a first availability zone comprising a first availability region and a second availability region, wherein the first availability region comprises a first server and the second availability region comprises a second server, wherein the first server comprises a first virtual machine and the second server comprises a second virtual machine, wherein the second site and the third site duplicate the configuration of the first site.
Toeroe teaches:
the plurality of sites are configured in accordance with a local and geo-redundancy model (Toeroe para. [0046], In table 1 three different configurations were proposed to achieve different resiliency levels represented by NS resiliency classes. Each row proposed a selection of NFVI components and redundancy models of the VNFs; See Toeroe table 1, Network service NFNs redundancy technique using Local + Geo redundancy), wherein the local and geo-redundancy model comprises a site availability value of at least 99.999% (Toeroe para. [0047], With respect to table 1, let consider only two different resiliency classes and define resiliency class "high" with an availability of equal to or greater than 99.999%).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iwashina in view of Toeroe such that the plurality of sites are configured in accordance with a local and geo-redundancy model, wherein the local and geo-redundancy model for virtual network function placement within the cloud computing environment comprises a site availability value of at least 99.999%.
One of ordinary skill in the art would have been motivated because doing so offers the advantage of providing high network service resiliency.
Iwashina-Toeroe does not explicitly disclose:
a cloud utilization value of at least a minimum percentage with a virtual machine availability of at least 99.999%, wherein the first site comprises a configuration including a first availability zone comprising a first availability region and a second availability region, wherein the first availability region comprises a first server and the second availability region comprises a second server, wherein the first server comprises a first virtual machine and the second server comprises a second virtual machine, wherein the second site and the third site duplicate the configuration of the first site.
Krishnan teaches:
a cloud utilization value of at least a minimum percentage (Krishnan para. [0021], DRS module 142 is configured to maintain the utilization of hosts 121 in each cluster 120 between a minimum utilization threshold value and a maximum utilization threshold; Krishnan para. [0042], the utilization is typically measured or quantified via performance monitoring functions included in VM management server 140, and may be quantified in terms of computing resources in use by a host or hosts in the particular cluster, such as percentage utilization of CPU, RAM, and the like; Krishnan para. [0001], VMs are frequently employed in data centers, cloud computing platforms) with a virtual machine availability (Krishnan para. [0022], Host provisioning module 143 is configured to logically add one or more available hosts 131 to a cluster 120 or remove hosts 121 from the cluster 120 in response to a triggering event, so that utilization of computing resources of hosts 121 and availability of VMs 122 executing on hosts 121 are maintained within an optimal range).
Iwashina-Toeroe-Krishnan teaches a virtual machine availability within an optimal range (Krishnan para. [0022]). However, Iwashina-Toeroe-Krishnan does not explicitly disclose a virtual machine availability of at least 99.999%, wherein the first site comprises a configuration including a first availability zone comprising a first availability region and a second availability region, wherein the first availability region comprises a first server and the second availability region comprises a second server, wherein the first server comprises a first virtual machine and the second server comprises a second virtual machine, wherein the second site and the third site duplicate the configuration of the first site.
Lin teaches:
a virtual machine availability of at least 99.999% (Lin para. [0023], service providers have made every effort to maintain a high service availability, such as "five nines" (e.g., 99.999%), meaning less than 26 seconds down time per month per VM 115 is allowed).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Iwashina in view of Lin to provide a virtual machine availability of at least 99.999%.
One of ordinary skill in the art would have been motivated because doing so offers the advantage of maintaining a high service availability in accordance with today's practice.
Iwashina-Toeroe-Krishnan-Lin does not explicitly disclose:
wherein the first site comprises a configuration including a first availability zone comprising a first availability region and a second availability region, wherein the first availability region comprises a first server and the second availability region comprises a second server, wherein the first server comprises a first virtual machine and the second server comprises a second virtual machine, wherein the second site and the third site duplicate the configuration of the first site.
Pohlack teaches:
the first site (Pohlack fig. 6 and col. 14 lines 57-59, such as in FIG. 6, a data center 600 may be viewed as a collection of shared computing resources and/or shared infrastructure) comprises a configuration including a first availability zone (Pohlack fig. 6 and col. 14 lines 60-63, as shown in FIG. 6, a data center 600 may include … isolation zone 610) comprising a first availability region and a second availability region (from Fig. 5 of examined application, availability zone 104A comprising 3 regions 110, each availability region comprising a plurality of Host 502 and VM 130; see Pohlack fig. 7, availability zone 708A including 3 set of Host Computer(s) 706 and Resources 704 [each set corresponding to availability region]), wherein the first availability region comprises a first server and the second availability region comprises a second server (Pohlack fig. 6, a plurality of host 602), wherein the first server comprises a first virtual machine (Pohlack fig. 6, each of host 602 comprises virtual machine slots 604 and col. 14 lines 60-61, in FIG. 6, a data center 600 may include virtual machine slots 604) and the second server comprises a second virtual machine (Pohlack fig. 6, each of host 602 comprises virtual machine slots 604 and col. 14 lines 60-61, in FIG. 6, a data center 600 may include virtual machine slots 604), wherein the second site and the third site duplicate the configuration of the first site (Pohlack col. 2 lines 1-5, data center is one example of a computing environment in which the described embodiments can be implemented. However, the described concepts can apply generally to other computing environments, for example across multiple data centers or locations).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Iwashina in view of Pohlack such that the first site comprises a configuration including a first availability zone comprising a first availability region and a second availability region, wherein the first availability region comprises a first server and the second availability region comprises a second server, wherein the first server comprises a first virtual machine and the second server comprises a second virtual machine, and wherein the second site and the third site duplicate the configuration of the first site.
One of ordinary skill in the art would have been motivated because doing so offers the advantage of allowing a deployed resource to be geographically dispersed and insulating the resource from failures in one particular location or zone (see Pohlack col. 16 lines 41-44).
As per claim 2, Iwashina-Toeroe-Krishnan-Lin-Pohlack discloses the cloud computing orchestrator according to claim 1, as set forth above. Iwashina also discloses wherein the operations further comprise:
generating a virtual network function placement request (Iwashina fig. 15, generating and sending VM ARRANGEMENT REQUEST at S015 and para. [0093], the orchestrator 20 sets the service ID dispatched in the process of S005 and resource information of Table T14 of FIG. 8(a) serving as reservation resource information (VM and NW) and requests "VNFM 1" to provide the arrangement/startup information and the NW configuration information necessary for the generation of "VNF 10" (signal (6), S015));
sending the virtual network function placement request to a valet service provided by the central placement decision system (Iwashina fig. 15 and para. [0093], requests "VNFM 1" to provide the arrangement/startup information and the NW configuration information necessary for the generation of "VNF 10" (signal (6), S015); Iwashina para. [0094], In "VNFM 1," [a service of] the detailed information output section 32 receives the signal (6));
receiving, from the valet service, a template (fig. 15, receiving VM ARRANGEMENT RESPONSE [ARRANGEMENT/ STARTUP INFORMATION, NW CONFIGURATION INFORMATION] at S017) comprising a placement for the virtual network function ([Per specification of examined application, fig. 5 element 500 and paragraph 80 indicates that template comprises information for configuring a target site]; Iwashina fig. 15 and para. [0094-0095], the detailed information output section 32 … determines the arrangement/ startup information and the NW (network) configuration information corresponding to the model numbers "001" and "005" of Table T14 of FIG. 8(a) of the reserved resource information (VM and NW) received in the signal (6). Next, the detailed information output section 32 dispatches and assigns a VM identification number to the generated virtual machine (VM) (G, S016) … [the service of] the detailed information output section 32 sends a response of Table T15 of FIG. 8(b) associated with the reserved resource information as the signal (7) to the orchestrator 20 (S017, detailed information output step)); and
instantiating the virtual network function on the target site in accordance with the template (Iwashina fig. 15 and para. [0097], the virtual server generation request section 23 requests "VIM 1" to generate "VNF 10" through a signal (8) in which the arrangement/startup information and the NW configuration information for which the format is changed for "VIM 1" and the read reservation number 1 is set (S109, virtual server generation request step)).
As per claim 4, Iwashina-Toeroe-Krishnan-Lin-Pohlack discloses the cloud computing orchestrator according to claim 2, as set forth above. Iwashina also discloses wherein the operations further comprise receiving an indication from the target site that the virtual network function has succeeded (Iwashina fig. 16, orchestrator 20 receives VNF GENERATION RESPONSE from VIM 40 at S021 and para. [0101], In the orchestrator 20, the signal (9) is received. In the orchestrator 20 receiving the signal (9), the VM identification number and the NW information set in the signal (9) are added to the information of the VNF 70 stored in the process of S015 and stored. Content of the information becomes Table T19 of FIG. 9(d); Iwashina para. [0103], The orchestrator 20 confirming that the generation of "VNF 10" and "VNF 21" determined to be generation targets in A (S004) is completed specifies "VIM-NW" which is the VIM 40 for managing a network between the data centers by Table T6 of FIG. 5(d) and a notification of IP addresses of NW information of "VNF 10" and "VNF 21" from Table T28 of FIG. 9(e) is provided to request a connection between VNFs 70 (signal (10), S024)).
Per claims 8-9 and 11, they do not teach or further define over the limitations in claims 1-2 and 4 respectively. As such, claims 8-9 and 11 are rejected for the same reasons as set forth in claims 1-2 and 4 respectively.
Per claims 15-16 and 18, they do not teach or further define over the limitations in claims 1-2 and 4 respectively. As such, claims 15-16 and 18 are rejected for the same reasons as set forth in claims 1-2 and 4 respectively.
Claims 3, 10 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Iwashina et al. (US 2016/0328258, Pub. Date: Nov. 10, 2016), in view of Toeroe et al. (US 2021/0266235, Provisional filed on Aug. 8, 2018), in view of Krishnan et al. (US 2018/0157511, Pub. Date: Jun. 7, 2018), in view of Lin et al. (US 2021/0208983, PCT Filed: Jun. 29, 2018), in view of Pohlack (US 11,017,417, Filed: Jun. 25, 2014), in view of Li (US 2021/0289435, Priority Date Nov. 29, 2018).
As per claim 3, Iwashina-Toeroe-Krishnan-Lin-Pohlack discloses the cloud computing orchestrator according to claim 2, as set forth above. Iwashina also discloses wherein instantiating the virtual network function (Iwashina fig. 15 & 28, Orchestrator sends Resource Reservation Request [DC, Necessary Resource Information (VM, NW)] at S011 and VNF Generation Request [Reservation Number, Arrangement/Startup Information, NW Configuration Information] at S019; Iwashina para. [0097], the virtual server generation request section 23 requests "VIM 1" to generate "VNF 10" through a signal (8) in which the arrangement/startup information and the NW configuration information for which the format is changed for "VIM 1" and the read reservation number 1 is set (S109, virtual server generation request step)) on the target site (Iwashina para. [0098], the virtual server generation section 43 receives the signal (8). Next, the virtual server generation section 43 specifies resources of "DC 2" secured in the process of E (S012) as resources for starting up the VNF 70 from a reservation number received in the signal (8)) comprises communicating with OPENSTACK Heat (Iwashina para. [0062], the VIM 40 performs management for each data center (station building). The management of the virtualization resources can be performed in a scheme according to the data center. A management scheme of the data center (a mounting scheme of management resources) is of a type such as OPENSTACK or vCenter. Generally, the VIM 40 is provided for each management scheme of the data center) to instantiate the virtual network function on a virtual machine hosted by the target site (Iwashina fig. 15 and para. [0097], the virtual server generation request section 23 requests "VIM 1" to generate "VNF 10" through a signal (8) in which the arrangement/startup information and the NW configuration information for which the format is changed for "VIM 1" and the read reservation number 1 is set (S109, virtual server generation request step)).
Iwashina does not explicitly disclose:
wherein the template comprises an OPENSTACK Heat orchestration template.
Li teaches:
the template comprises an OPENSTACK Heat orchestration template (Li para. [0044], when a user needs to deploy an application on a specified VIM platform, a deployment template language supported by the specified VIM platform is usually used to design an application template. For example, an HOT template language is used to perform design for the OpenStack platform).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Iwashina in view of Li such that the template comprises an OPENSTACK Heat orchestration template.
One of ordinary skill in the art would have been motivated because doing so offers the advantage of providing flexibility in deploying virtual network functions.
Per claims 10 and 17, they do not teach or further define over the limitations in claim 3. As such, claims 10 and 17 are rejected for the same reasons as set forth in claim 3.
Claims 5, 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Iwashina et al. (US 2016/0328258, Pub. Date: Nov. 10, 2016), in view of Toeroe et al. (US 2021/0266235, Provisional filed on Aug. 8, 2018), in view of Krishnan et al. (US 2018/0157511, Pub. Date: Jun. 7, 2018), in view of Lin et al. (US 2021/0208983, PCT Filed: Jun. 29, 2018), in view of Pohlack (US 11,017,417, Filed: Jun. 25, 2014), in view of Slim et al. (US 2021/0133004, Priority Date Jun. 18, 2018).
As per claim 5, Iwashina-Toeroe-Krishnan-Lin-Pohlack discloses the cloud computing orchestrator according to claim 2, as set forth above. Iwashina does not explicitly disclose wherein the operations further comprise receiving an indication from the target site that the placement of the virtual network function has failed.
Slim teaches:
receiving an indication from the target site that the placement of the virtual network function has failed (Slim fig. 3 and para. [0075], the administrating entity 5 sends a message M5 indicating non-installation of the virtualized function in the data center indicated in the message M4. This non-installation may be for a number of reasons: the data center cannot be identified, the data center cannot accommodate new virtualized functions, the virtualized function to be installed is not compatible with the requirements of the data center; Slim para. [0062], This entity 5 may for example be a VIM (Virtualized Infrastructure Manager) entity in charge of the administration of the resources of the architecture 1).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Iwashina in view of Slim to receive an indication from the target site that the placement of the virtual network function has failed.
One of ordinary skill in the art would have been motivated because doing so offers the advantage of providing the status of the installation so that the system can try to install the virtualized function on another virtual machine of the data center (see Slim para. [0075-0076]).
Per claims 12 and 19, they do not teach or further define over the limitations in claim 5. As such, claims 12 and 19 are rejected for the same reasons as set forth in claim 5.
Claims 6-7, 13-14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Iwashina et al. (US 2016/0328258, Pub. Date: Nov. 10, 2016), in view of Toeroe et al. (US 2021/0266235, Provisional filed on Aug. 8, 2018), in view of Krishnan et al. (US 2018/0157511, Pub. Date: Jun. 7, 2018), in view of Lin et al. (US 2021/0208983, PCT Filed: Jun. 29, 2018), in view of Pohlack (US 11,017,417, Filed: Jun. 25, 2014), in view of Ganteaume (US 2019/0340033, Filed: May 21, 2018).
As per claim 6, Iwashina-Toeroe-Krishnan-Lin-Pohlack discloses the cloud computing orchestrator according to claim 2, as set forth above. Iwashina does not explicitly disclose wherein the operations further comprise:
creating a set of new valet group declarations; and
updating metadata associated with the set of new valet group declarations.
Ganteaume teaches:
creating a set of new valet group declarations (Ganteaume para. [0036], A deployment plan may assign VMs 104 to particular resources 108 in accordance with one or more rules in order to account for the requirements of application 102 supported by such VMs 104. These rules may be based on abstracting the requirements of application 102 … The deployment plan may be based on one or more affinity rules, diversity (or anti-affinity) rules, exclusivity rules, or pipe rules. The deployment plan may further be based on nesting groupings (e.g., rules or sets of VMs 104). For example, the abstraction may provide for certain VMs 104 to be grouped together, so that rules may be applied to groups of VMs 104 or to individual VMs 104. A group may include one or more VMs 104, or other elements 105, such as ingress points, or the like. For example, FIG. 1A shows two example groups 107); and
updating metadata associated with the set of new valet group declarations (Ganteaume para. [0036], The deployment plan may further be based on nesting groupings (e.g., rules or sets of VMs 104). For example, the abstraction may provide for certain VMs 104 to be grouped together, so that rules may be applied to groups of VMs 104 or to individual VMs 104. A group may include one or more VMs 104, or other elements 105, such as ingress points, or the like. For example, FIG. 1A shows two example groups 107).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Iwashina in view of Ganteaume to create a set of new valet group declarations and to update metadata associated with the set of new valet group declarations.
One of ordinary skill in the art would have been motivated because doing so offers the advantage of providing a deployment plan accordingly (see Ganteaume para. [0036]).
As per claim 7, Iwashina-Toeroe-Krishnan-Lin-Pohlack-Ganteaume discloses the cloud computing orchestrator according to claim 6, as set forth above. Ganteaume also discloses wherein the set of new valet group declarations comprises a valet affinity group, a diversity group, and an exclusivity group (Ganteaume para. [0036], The deployment plan may be based on one or more affinity rules, diversity (or anti-affinity) rules, exclusivity rules).
The rationale set forth in claim 6 similarly applies.
Per claims 13-14, they do not teach or further define over the limitations in claims 6-7 respectively. As such, claims 13-14 are rejected for the same reasons as set forth in claims 6-7 respectively.
Per claim 20, it does not teach or further define over the limitations in claims 6-7. As such, claim 20 is rejected for the same reasons as set forth in claims 6-7.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Gupta et al. (US 9774489) Allocating Computing Resources According To Reserved Capacity;
Wilt et al. (US 20170132746) Placement Optimization For Virtualized Graphics Processing;
Ramarao et al. (US 9497136) Method And System For Providing Usage Metrics To Manage Utilization Of Cloud Computing Resources.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VINH NGUYEN whose telephone number is (571)272-4487. The examiner can normally be reached Monday-Friday: 7:30 AM - 5:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KAMAL B DIVECHA can be reached at (571)272-5863. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VINH NGUYEN/Examiner, Art Unit 2453
/KAMAL B DIVECHA/Supervisory Patent Examiner, Art Unit 2453