DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to the amendment filed on 11/13/2025. This Action is made FINAL.
Claims 1-20 are pending and presented for examination.
Response to Amendment
Applicant's arguments with respect to claims 1-20 have been considered but are moot in view of the new ground(s) of rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kutch et al. (Pub 20210117360) (hereafter Kutch) in view of Abodunrin (Pub 20230029796) (hereafter Abodunrin).
As per claim 1, Kutch teaches:
An apparatus comprising:
a network interface device comprising:
processor circuitry and
circuitry configured to perform operations of a Virtual Device Composition Module (VDCM) offloaded from a host system to the network interface device to: generate at least one virtual device interface to utilize the processor circuitry and provide the at least one virtual device interface to the host system to assign to a process to provide the process with capability to utilize the processor circuitry. ([Paragraph 98], FIG. 9 depicts an example overview of a software architecture. Various embodiments provide a peer-2-peer (P2P) communication between a NIC and WAT. Various embodiments provide permit splitting header and payload so that a packet header and/or payload can be provided to the WAT and the packet header and/or payload can be provided for processing by a CPU… A PF Host Driver can act as device driver for a WAT. A VDCM can compose SIOV virtual devices (VDEVs). A VDEV driver can provide access to WAT to a VNF or application running in VM, container, etc. NIC DP firmware (FW) can act as a data path of device driver for a NIC. More specific examples of providing a guest with access to a WAT and NIC are described herein. [Paragraph 107], For example, applications executed by any device can include a service, a microservice, cloud native microservice, workload, or software. Applications can be executed in a pipelined manner whereby a core executes an application and the application provides data for processing or access by another device. According to some embodiments, an application can execute on one or multiple cores or processors and the application can allocate a block of memory that is subject to cache line demotion as described herein. [Paragraph 301], Some examples of NIC 4100 are part of an Infrastructure Processing Unit (IPU) or data processing unit (DPU). An IPU or DPU can include a network interface with one or more programmable or fixed function processors to perform offload of operations that could have been performed by a CPU. 
The IPU or DPU can include one or more memory devices. In some examples, the IPU or DPU can perform virtual switch operations, manage storage transactions (e.g., compression, cryptography, virtualization), and manage operations performed on other IPUs, DPUs, servers, or devices [Paragraph 317], Network interface 4250 provides system 4200 the ability to communicate with remote devices (e.g., servers or other computing devices) over one or more networks. [Paragraph 302], In some examples, NIC 4100 can include a smartNIC. A smartNIC can include one or more of a NIC, NEXT, DPU, or IPU and include programmable processors (e.g., XPU, CPU, FPGAs, ASICS, system on chip) to perform operations offloaded by a host. In some examples, NIC 4100 includes a network interface, network interface controller or a network interface card. In some examples, a network interface can be part of a switch or a system-on-chip (SoC) with devices such as a processor or memory. [Paragraph 75], FIG. 2 illustrates a high-level block diagram of the workload acceleration technology (WAT) in relation to other server technology. In some examples, offload processors 106-0 or 106-1 can include WAT. In some examples, offload processor, WAT and NEXT can refer to similar technologies. [Paragraph 76], In some examples, CPU and WAT can share caching hierarchy and share system memory. Various embodiments allow use of any vendor's NIC (e.g., smartNICs) or lower power NICs and provide inline packet processing in WAT.)
Although Kutch discloses performing operations of a Virtual Device Composition Module (VDCM) to generate at least one virtual device interface to utilize the processor circuitry and to provide the at least one virtual device interface to the host system to assign to a process, and discloses offloading of a NIC/smartNIC, NEXT, WAT, etc. (offloaded NIC/smartNIC),
Kutch does not explicitly disclose a VDCM offloaded from a host system to the network interface device (i.e., a VDCM within a PCIe NIC).
Abodunrin teaches VDCM offloaded from a host system to the network interface device. ([Paragraph 67], Referring now to FIG. 8, an illustrative embodiment of a PCIe NIC (e.g., with PCIe passthrough, an SR-IOV virtual device composition module (VDCM), etc.) with control plane separation is shown in which the control plane is separated into a hypervisor/container 804 that has ownership of the control plane.)
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Kutch, wherein the VDCM generates at least one virtual device interface to assign to a process with capability to utilize the processor circuitry and the NIC is offloaded, with the teachings of Abodunrin, wherein the NIC comprises the VDCM, because incorporating the VDCM into the offloaded NIC allows various tasks to be assigned to appropriate accelerator(s) (i.e., specialized hardware) for offload, reducing the load on and improving the performance of the host system [Kutch 67-68, 194; Abodunrin 24].
As per claim 2, rejection of claim 1 is incorporated:
Kutch teaches wherein the processor circuitry is to perform one or more of local area network access, cryptographic processing, and/or storage access. ([Paragraph 109], In some examples, VNF 1010 can dynamically program processor 1030 to process a flow for traffic based upon rules (e.g., drop packets of certain flows, decrypt on ingress, encrypt on egress). The system of FIG. 10 can use a network interface (NIC) 1050 to transmit or receive packets using a network medium. A flow can be a sequence of packets being transferred between two endpoints, generally representing a single session using a known protocol. [Paragraph 118], In some examples, if data is to be transmitted after processing received data, the data can be stored in buffer 1033 and not copied to a buffer 1210 of memory 1004. For example, VNF 1010 can initiate transmission of data by NIC 1050 from buffer 1033. Additional processing can occur prior to transmission such as encryption or packet header formation using offload processors of processor 1030.)
As per claim 3, rejection of claim 2 is incorporated:
Kutch teaches wherein the storage access comprises access to one or more Nonvolatile Memory Express (NVMe) devices. ([Paragraph 114], Network interface 1050 can provide communications with other network elements and endpoint devices. Any communications protocol can be used such as one or more of: Ethernet (IEEE 802.3), remote direct memory access (RDMA), InfiniBand, FibreChannel, Internet Wide Area RDMA Protocol (iWARP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP), quick UDP Internet Connections (QUIC), RDMA over Converged Ethernet (RoCE), Peripheral Component Interconnect (PCIe), Intel QuickPath Interconnect (QPI), Intel Ultra Path Interconnect (UPI), Intel On-Chip System Fabric (IOSF), Omnipath, Compute Express Link (CXL), HyperTransport, Infinity Fabric (IF), NVLink, Advanced Microcontroller Bus Architecture (AMBA) interconnect, OpenCAPI, Gen-Z, Cache Coherent Interconnect for Accelerators (CCIX), 3GPP Long Term Evolution (LTE) (4G), 3GPP 5G, fabric interface, and variations thereof. Data can be copied or stored to virtualized storage nodes using a protocol such as NVMe over Fabrics (NVMe-oF) or NVMe.)
As per claim 4, rejection of claim 1 is incorporated:
Kutch teaches wherein the VDCM is consistent with Open Compute Project Scalable IOV (SIOV). ([Paragraph 98], A VDCM can compose SIOV virtual devices (VDEVs). A VDEV driver can provide access to WAT to a VNF or application running in VM, container, etc. NIC DP firmware (FW) can act as a data path of device driver for a NIC. More specific examples of providing a guest with access to a WAT and NIC are described herein.)
As per claim 5, rejection of claim 1 is incorporated:
Kutch teaches wherein the at least one virtual device interface comprises at least one assignable device interface (ADI), wherein the at least one ADI is consistent with Open Compute Project Scalable IOV (SIOV). ([Paragraph 170], As described herein, in some examples, an assignable device interface (ADI) can be allocated for each different ring. At (2), the NIC can provide a received packet to an available buffer and descriptor to a queue manager of NEXT. At (3), after arbitration as to which descriptor is advanced to processing by the NEXT, the NIC packet descriptor and data buffer can be identified to NIM. At (4), the NIM can translate a descriptor format to a packet processing pipeline format and at (5), the NIM can provide the translated descriptor and associated metadata and identified buffer for streaming to the packet processing pipeline (PP) for processing. [Paragraph 98], A VDCM can compose SIOV virtual devices (VDEVs). A VDEV driver can provide access to WAT to a VNF or application running in VM, container, etc. NIC DP firmware (FW) can act as a data path of device driver for a NIC. More specific examples of providing a guest with access to a WAT and NIC are described herein. [Paragraph 217], VDEV can include a virtual device where the PCIe configuration space is emulated in software in the host, while the parts of the device used by the data plane, such as queue pairs used for receipt/transmission of data, can be mapped directly to NEXT. In some examples, NEXT exposes these queue pairs as Assignable Device Interfaces (ADIs). [Paragraph 241], In some examples, NIC VF/ADI/VSI can be available for assignment to a VFIO driver and the P2PB component and a PCIe address is available for the P2PB component. In some examples, NEXT VDEV can be available (e.g., a PCIe device) for assignment to the VNF by the VMM or orchestrator.)
As per claim 6, rejection of claim 1 is incorporated:
Kutch teaches wherein the network interface device comprises circuitry configured to perform intercepted path operations consistent with Open Compute Project Scalable IOV (SIOV), wherein the intercepted path operations comprises:
device management operations, device initialization, device control, device configuration, quality of service (QoS) handling, error processing, and device reset. ([Paragraph 90], FIG. 8A depicts an example of a system. At startup, NEXT (e.g., WAT) can be enumerated as PCIe device by an operating system (OS). OS can call a NEXT driver, which can initialize NEXT and create a virtual session to a guest and guest access to NEXT resources through a virtual interface (e.g., VF or SIOV ADI). The guest can access address space where NEXT is registered and can write requests into one or more queues in address space. [Paragraph 98], A VDCM can compose SIOV virtual devices (VDEVs). A VDEV driver can provide access to WAT to a VNF or application running in VM, container, etc. NIC DP firmware (FW) can act as a data path of device driver for a NIC. More specific examples of providing a guest with access to a WAT and NIC are described herein. [Paragraph 79], Traffic Manager provides complex hierarchical quality of service (QoS) operation to determine order of packet processing or packet transmission. Packet processor can provide flexible and programmable hardware packet processing operations. Network security can provide network and application security acceleration functions such as IPsec, Datagram Transport Layer Security (DTLS), QUIC, TLS, kernel Transport Layer Security (kTLS), and so forth. Components within WAT can be interconnected through an internal fabric and memory interconnect that allows to compose the flexible entry, exit and reentry points between the software running on CPU cores and components and hardware pipeline stages in WAT. [Paragraph 137], In some examples, HIM 1612 can be accessed by a core (e.g., any of cores 1632-0 to M) of CPU 1630 using driver 1634 such as an emulated host interfaces exposed as Adaptive Virtual Function (AVF) or virtIO device, or using SR-IOV or SIOV. 
If an application executing on CPU 1630 requests load balancing services of work manager 1600, the application may dedicate a thread for bridging between NEXT 1610 and work manager 1600 on packet ingress and packet egress. Depending on performance requirements, a single thread might be able to perform both tasks. Such arrangement leads to threads that are unavailable for application tasks. In some examples, the application can offload management of load balancing to HIM 1612. HIM 1612 can use or include a work manager interface 1614 that can manage descriptor translation and communicate with work manager 1600 and free a thread to perform other tasks. In some examples, a work manager interface 1614 can aggregate all traffic from an application into one or more queue elements or work queues. [Paragraph 91], Management Controller (Mgmt Cntrl) 800 can be responsible for boot and bring up of the NEXT such as initialization (e.g., memory 806) and configuration of NEXT device, verify the authenticity of the firmware, reliability, availability, and serviceability (RAS) and error handing and recovery.)
Abodunrin also teaches device initialization, device control, QoS, device reset. ([Paragraph 43], The physical function manager 212, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to manage the initialization and configuration of the physical functions of the NIC 120. In some embodiments, information associated with each physical function of the NIC 120 may be stored in the physical function data 206. Similarly, the virtual function manager 214, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to manage the initialization and configuration of the virtual functions of the NIC 120. [Paragraph 53], Such configuration settings may include controlling link speed, link state, one or more quality of service (QoS) settings, etc., performing traffic encapsulation, encryption, device resets, etc., and the like. Depending on the embodiment, additional control may be provided by an agent such as the BMC, a link-partner switch (via, e.g., Data Center Bridging Capability Exchange (DCBx)), or firmware of the NIC 120. The physical function driver 310 is additionally configured to respond to requests for resources received from the virtual function driver 304. While illustratively described as including the VMM 202, in some embodiments, the destination compute device 106 may only include a single untrusted physical function. Accordingly, in such embodiments, it should be appreciated that the destination compute device 106 may not include the VMM 202.)
As per claim 7, rejection of claim 1 is incorporated:
Kutch teaches comprising a host system communicatively coupled to the network interface device, wherein the host system comprises at least one processor configured to assign the at least one virtual device interface to the process. ([Paragraph 217], FIG. 25 depicts a system. Some example operations of the system of FIG. 25 are described with respect to one or more of FIGS. 26-33. Various embodiments provide for configuration of access rights to a memory region using traps and use of a peer-to-peer binder. Some examples can provide address translation for a virtual device (VDEV) 2504 where a VDEV is represented by a Process Address Space ID (PASID) along with the bus/device/function (BDF) of the device. VDEV can include a virtual device where the PCIe configuration space is emulated in software in the host, while the parts of the device used by the data plane, such as queue pairs used for receipt/transmission of data, can be mapped directly to NEXT. In some examples, NEXT exposes these queue pairs as Assignable Device Interfaces (ADIs).)
As per claim 8, rejection of claim 1 is incorporated:
Kutch teaches wherein the assign the at least one virtual device interface to the process is consistent with an Assignable Device Interfaces (ADI) subsystem of Open Compute Project Scalable IOV (SIOV). ([Paragraph 217], FIG. 25 depicts a system. Some example operations of the system of FIG. 25 are described with respect to one or more of FIGS. 26-33. Various embodiments provide for configuration of access rights to a memory region using traps and use of a peer-to-peer binder. Some examples can provide address translation for a virtual device (VDEV) 2504 where a VDEV is represented by a Process Address Space ID (PASID) along with the bus/device/function (BDF) of the device. VDEV can include a virtual device where the PCIe configuration space is emulated in software in the host, while the parts of the device used by the data plane, such as queue pairs used for receipt/transmission of data, can be mapped directly to NEXT. In some examples, NEXT exposes these queue pairs as Assignable Device Interfaces (ADIs). [Paragraph 314], Memory subsystem 4220 represents the main memory of system 4200 and provides storage for code to be executed by processor 4210, or data values to be used in executing a routine. Memory subsystem 4220 can include one or more memory devices 4230 such as read-only memory (ROM), flash memory, one or more varieties of random access memory (RAM) such as DRAM, or other memory devices, or a combination of such devices. [Paragraph 98], A WAT can expose a physical function (PF). A PF Host Driver can act as device driver for a WAT. A VDCM can compose SIOV virtual devices (VDEVs). A VDEV driver can provide access to WAT to a VNF or application running in VM, container, etc. NIC DP firmware (FW) can act as a data path of device driver for a NIC. More specific examples of providing a guest with access to a WAT and NIC are described herein.)
As per claim 9, rejection of claim 1 is incorporated:
Kutch teaches wherein the network interface device comprises one or more of: network interface controller (NIC), SmartNIC, router, forwarding element, infrastructure processing unit (IPU), or data processing unit (DPU). ([Paragraph 76], Various embodiments allow use of any vendor's NIC (e.g., smartNICs) or lower power NICs and provide inline packet processing in WAT. [Paragraph 56], Various embodiments provide a packet processing pipeline that resides within the CPU package and is communicatively coupled to the CPU using an IO die, which in turn is connected to one or more PCIe lanes. The CPU can receive traffic from a physical NIC port directly over the IO die and the NIC port (e.g., VF, MDEV, etc.) can be configured to copy data via DMA directly into a memory within the NEXT die. In some examples, the packet processing pipeline can be a separate device and coupled to a CPU using a device interface. The packet processing pipeline and CPU can be built as Multi-Chip-Packages (MCP), System-On-Chip (SoC) or as combination of MCP/SoC and discrete devices connected over PCIe bus. In some examples, NEXT, WAT or a smartNIC can be integrated in CPU or xPU inside a package. [Paragraph 301], Some examples of NIC 4100 are part of an Infrastructure Processing Unit (IPU) or data processing unit (DPU). An IPU or DPU can include a network interface with one or more programmable or fixed function processors to perform offload of operations that could have been performed by a CPU. The IPU or DPU can include one or more memory devices. In some examples, the IPU or DPU can perform virtual switch operations, manage storage transactions (e.g., compression, cryptography, virtualization), and manage operations performed on other IPUs, DPUs, servers, or devices.)
As per claim 10, Kutch teaches:
At least one non-transitory computer-readable medium comprising instructions stored thereon, that if executed by one or more processors, cause the one or more processors to:
in kernel space: ([Paragraph 87], Various embodiments provide an API for software to use to configure rules that WAT could be vendor-neutral such as Data Plane Development Kit (DPDK), vector packet processing (VPP), Linux kernel, P4, and/or others. Different APIs exist in different ecosystems, including rte_flow (DPDK) and tc flower (Linux). Some embodiments extend the existing APIs (e.g., rte_flow, tc flower, etc.) to support new features (e.g. inline TLS). [Paragraph 216], In some examples, a VNF runs in a VM and utilizes a driver to configure and utilize NEXT. A driver could be implemented as a Poll Mode Driver (PMD) or Kernel driver. In some examples, the VM supports vIOMMU. [Paragraph 220], A device driver can register itself with the kernel. [Paragraph 223], A VDEV driver can be implemented as a DPDK PMD (e.g., a device driver implemented as a user space driver that registers itself with the DPDK Ethernet framework) or as a kernel driver (e.g., a device driver implemented in the Linux kernel, that registers itself with the kernel TCP/IP stack).)
receive at least one virtual device interface to a processor circuitry of a device from the device, wherein the device performs operations of a Virtual Device Composition Module (VDCM) offloaded from a host system to the device and
assign the at least one virtual device interface to a process to provide the process with capability to utilize the processor circuitry of the device. ([Paragraph 98], FIG. 9 depicts an example overview of a software architecture. Various embodiments provide a peer-2-peer (P2P) communication between a NIC and WAT. Various embodiments provide permit splitting header and payload so that a packet header and/or payload can be provided to the WAT and the packet header and/or payload can be provided for processing by a CPU… A PF Host Driver can act as device driver for a WAT. A VDCM can compose SIOV virtual devices (VDEVs). A VDEV driver can provide access to WAT to a VNF or application running in VM, container, etc. NIC DP firmware (FW) can act as a data path of device driver for a NIC. More specific examples of providing a guest with access to a WAT and NIC are described herein. [Paragraph 107], For example, applications executed by any device can include a service, a microservice, cloud native microservice, workload, or software. Applications can be executed in a pipelined manner whereby a core executes an application and the application provides data for processing or access by another device. According to some embodiments, an application can execute on one or multiple cores or processors and the application can allocate a block of memory that is subject to cache line demotion as described herein. [Paragraph 301], Some examples of NIC 4100 are part of an Infrastructure Processing Unit (IPU) or data processing unit (DPU). An IPU or DPU can include a network interface with one or more programmable or fixed function processors to perform offload of operations that could have been performed by a CPU. The IPU or DPU can include one or more memory devices. 
In some examples, the IPU or DPU can perform virtual switch operations, manage storage transactions (e.g., compression, cryptography, virtualization), and manage operations performed on other IPUs, DPUs, servers, or devices [Paragraph 317], Network interface 4250 provides system 4200 the ability to communicate with remote devices (e.g., servers or other computing devices) over one or more networks.)
Although Kutch discloses performing operations of a Virtual Device Composition Module (VDCM) and offloading of a NIC/smartNIC, NEXT, WAT, etc. (offloaded NIC/smartNIC),
Kutch does not explicitly disclose a VDCM offloaded from a host system to the device (i.e., a VDCM within a PCIe NIC).
Abodunrin teaches VDCM offloaded from a host system to the device. ([Paragraph 67], Referring now to FIG. 8, an illustrative embodiment of a PCIe NIC (e.g., with PCIe passthrough, an SR-IOV virtual device composition module (VDCM), etc.) with control plane separation is shown in which the control plane is separated into a hypervisor/container 804 that has ownership of the control plane.)
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Kutch, wherein the VDCM generates at least one virtual device interface to assign to a process with capability to utilize the processor circuitry and the NIC is offloaded, with the teachings of Abodunrin, wherein the NIC comprises the VDCM, because incorporating the VDCM into the offloaded NIC allows various tasks to be assigned to appropriate accelerator(s) (i.e., specialized hardware) for offload, reducing the load on and improving the performance of the host system [Kutch 67-68, 194; Abodunrin 24].
As per claim 11, rejection of claim 10 is incorporated:
Kutch teaches wherein the device comprises one or more of: a network interface device, a storage controller, memory controller, fabric interface, processor, and/or accelerator device. ([Paragraph 76], Various embodiments allow use of any vendor's NIC (e.g., smartNICs) or lower power NICs and provide inline packet processing in WAT. [Paragraph 56], Various embodiments provide a packet processing pipeline that resides within the CPU package and is communicatively coupled to the CPU using an IO die, which in turn is connected to one or more PCIe lanes. The CPU can receive traffic from a physical NIC port directly over the IO die and the NIC port (e.g., VF, MDEV, etc.) can be configured to copy data via DMA directly into a memory within the NEXT die. In some examples, the packet processing pipeline can be a separate device and coupled to a CPU using a device interface. The packet processing pipeline and CPU can be built as Multi-Chip-Packages (MCP), System-On-Chip (SoC) or as combination of MCP/SoC and discrete devices connected over PCIe bus. In some examples, NEXT, WAT or a smartNIC can be integrated in CPU or xPU inside a package. [Paragraph 301], Some examples of NIC 4100 are part of an Infrastructure Processing Unit (IPU) or data processing unit (DPU). An IPU or DPU can include a network interface with one or more programmable or fixed function processors to perform offload of operations that could have been performed by a CPU. The IPU or DPU can include one or more memory devices. In some examples, the IPU or DPU can perform virtual switch operations, manage storage transactions (e.g., compression, cryptography, virtualization), and manage operations performed on other IPUs, DPUs, servers, or devices. [Paragraph 4], FIG. 2 illustrates a high-level block diagram of the workload acceleration technology (WAT) in relation to other server technology. 
[Paragraph 150], In some examples, work manager can be used in a single root input/output virtualization (SR-IOV) or Scalable I/O Virtualization (SIOV) virtual machine (VM)-enabled example usage. SR-IOV is compatible at least with specifications available from Peripheral Component Interconnect Special Interest Group (PCI SIG) including specifications such as Single Root I/O Virtualization and Sharing specification Revision 1.1 (2010) and variations thereof and updates thereto. SIOV provides for scalable sharing of I/O devices, such as network controllers, storage controllers, graphics processing units, and other hardware accelerators across a large number of containers or virtual machines. A technical specification for SIOV is Intel® Scalable I/O Virtualization Technical Specification, revision 1.0, June 2018. SR-IOV is a specification that allows a single physical PCI Express (PCIe) resource to be shared among virtual machines (VMs) using a single PCI Express hardware interface.)
As per claims 12-15, these are computer-readable medium claims corresponding to apparatus claims 2, 4, 5, and 8. Therefore, they are rejected based on a similar rationale.
As per claims 16-20, these are method claims corresponding to apparatus claims 1, 2, and 4-6. Therefore, they are rejected based on a similar rationale.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONG U KIM whose telephone number is (571)270-1313. The examiner can normally be reached from 9:00 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets, can be reached at 571-272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DONG U KIM/Primary Examiner, Art Unit 2197