DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to the amendment filed on 7/10/2025. This action is made FINAL.
Claims 1-4, 6-7, and 22-33 are pending and are presented for examination.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 28-33 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because:
As per claims 28-33, a “computer-readable medium” is recited. Paragraph 14 of the instant application recites: “The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors.” This passage provides only examples of the computer-readable medium. Under its plain meaning, the term “computer-readable medium” covers transitory signals as well as non-transitory media. Because the claims do not exclude a transitory “signal” storing computer-readable code, even for a relatively short time, the broadest reasonable interpretation in light of the specification encompasses a signal per se. Thus, the claims are not directed to eligible subject matter. The examiner recommends amending the claims to recite “computer-readable storage medium” or to explicitly include the term “non-transitory.”
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 3-4, 6-7, 24-25, 27, 30-31, and 33 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 3 (and similarly claims 24 and 30) recites the limitation "the device". There is insufficient antecedent basis for this limitation in the claim, and it is unclear which device "the device" refers to.
Claim 3 (and similarly claims 24 and 30) also recites the limitation "the TEE". There is insufficient antecedent basis for this limitation in the claim, and it is unclear which TEE "the TEE" refers to.
Claim 6 recites: “The apparatus of claim 5.” Claim 6 depends from cancelled claim 5; therefore, it is unclear from which claim claim 6 is intended to depend.
Claim 7 (and similarly claims 27 and 33) recites the limitation "the TEE". There is insufficient antecedent basis for this limitation in the claim, and it is unclear which TEE "the TEE" refers to.
Claims 4, 25, and 31 are rejected based on the rejections of the claims from which they respectively depend.
Response to Arguments
Applicant's arguments filed regarding claim 1 (page 10) state: “Applicant respectfully submits that Hampel’s archaic technique for implementing access control does not teach or reasonably suggest allocating an input/output (I/O) address range comprising a host physical address (HPA) and I/O pages to an I/O control structure, creating an entry in the I/O control structure for a set of the I/O pages, setting a pending bit to a first value which indicates that a remote device is authorized to access the I/O address range, and granting the remote device access to the set of I/O pages in the I/O control structure upon verifying the I/O address range for the remote device as recited by claim 1.”
The limitation cited above was, and is, rejected over Lal in view of Hampel, not over Hampel alone. Therefore, applicant’s argument that Hampel by itself does not teach or reasonably suggest the limitation is unpersuasive; nonobviousness cannot be established by attacking references individually where the rejection is based on a combination of references. Furthermore, applicant’s exact argument is unclear, since the argument above does not explain with specificity why the cited combination fails to teach the claimed limitations.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6-7, and 22-33 are rejected under 35 U.S.C. 103 as being unpatentable over Lal et al. (US Pub. 2021/0117246) (hereinafter Lal) in view of Hampel et al. (US Pub. 2016/0350549) (hereinafter Hampel).
As per claim 1, Lal teaches:
An apparatus, comprising:
processing circuitry to:
allocate an input/output (I/O) address range comprising a host physical address (HPA) and I/O pages to an I/O control structure; ([Paragraph 84], For example, the I/O subsystem 224 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, sensor hubs, host controllers, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.) and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the memory 230 may be directly coupled to the processor 220, for example via an integrated memory controller hub. [Paragraph 20], FIG. 13A illustrates a computing environment to establish a trusted execution environment (TEE) during operation, according to implementations of the disclosure. [Paragraph 81], In use, as described further below, a trusted execution environment (TEE) established by the processor 220 securely communicates data with the accelerator 236. Data may be transferred using memory-mapped I/O (MMIO) transactions or direct memory access (DMA) transactions. For example, the TEE may perform an MMIO write transaction that includes encrypted data, and the accelerator 236 decrypts the data and performs the write. As another example, the TEE may perform an MMIO read request transaction, and the accelerator 236 may read the requested data, encrypt the data, and perform an MMIO read response transaction that includes the encrypted data. [Paragraph 112], Input/output (I/O) circuitry 450 couples the GPU 439 to one or more I/O devices 452 such as digital signal processors (DSPs), network controllers, or user input devices. An on-chip interconnect may be used to couple the I/O devices 454 to the GPU 439 and memory 449. One or more I/O memory management units (IOMMUs) 451 of the I/O circuitry 450 couple the I/O devices 452 directly to the system memory 449. 
In one embodiment, the IOMMU 451 manages multiple sets of page tables to map virtual addresses to physical addresses in system memory 449. In this embodiment, the I/O devices 452, CPU(s) 446, and GPU(s) 439 may share the same virtual address space. [Paragraph 113], In one implementation, the IOMMU 451 supports virtualization. In this case, it may manage a first set of page tables to map guest/graphics virtual addresses to guest/graphics physical addresses and a second set of page tables to map the guest/graphics physical addresses to system/host physical addresses (e.g., within system memory 449). The base addresses of each of the first and second sets of page tables may be stored in control registers and swapped out on a context switch (e.g., so that the new context is provided with access to the relevant set of page tables). While not illustrated in FIG. 4C, each of the cores 443, 444, 445 and/or multi-core groups 440A-440N may include translation lookaside buffers (TLBs) to cache guest virtual to guest physical translations, guest physical to host physical translations, and guest virtual to host physical translations. [Paragraph 249], Protection of address translation is implementation specific to the platform, TEE support and virtualization scheme, etc. Any scheme that protects address translation may be used in conjunction with implementations of the disclosure.)
create an entry in the I/O control structure for a set of the I/O pages; ([Paragraph 302], Manifest 2450 may be a data structure representing the nodes and edges of a graph, such as graph 2400 of FIG. 24A. There is one entry 2455 for each node in the graph. Each node is identified by an ID 2460 and has fields for source address 2465 (client host memory address), size 2470, destination 2475 (remote host memory or GPU local memory/address) and a list of any dependencies 2480 (identifiers of nodes in the graph that it references).
Although Lal discloses an address range ([Paragraph 689], The FSM 5040 checks if the buffer requested falls within the range registers of the tenant.),
Lal does not explicitly disclose: allocate an input/output (I/O) address range; set a pending bit to a first value which indicates that a remote device is authorized to access the I/O address range; and
grant the remote device access to the set of I/O pages in the I/O control structure upon verifying the I/O address range for the remote device.
Hampel teaches allocate an input/output (I/O) address range; and ([Paragraph 21], In certain implementations, a SoC may further comprise an access control unit (e.g., a firewall) that may be configured to control access to various target devices based on pre-defined and/or run-time programmable access control data (e.g., a set of access control rules). The access control unit may be programmed by an on-chip or an external programming agent that may transmit messages comprising access control data items (e.g., access control rules). [Paragraph 36], The access control policy that is implemented by one or more access control units 140 may comprise a plurality of access control rules. In certain implementations, an access control rule may comprise an identifier of the initiator device, an identifier of the target device, a target device address range, access permissions, and/or an access authorization type. An access control rule may further comprise the required security state or level of secure execution required by an initiator to authorize the requested access. The access control policy may further indicate that certain rules are modified when the system is in a debug or higher privilege mode.)
set a pending bit to a first value which indicates that a remote device is authorized to access the I/O address range; and ([Paragraph 37], The target device address range may be represented by a starting address, a block size, and/or a range selector for specifying non-contiguous ranges, as described in more details herein below. The access permissions may be specified by a set of flags designating read, write, required security level and/or execute permissions. The access authorization type may specify whether the rule “allows” or “denies” access by the initiator(s) to the target(s). [Paragraph 25], Target devices may be provided by on-chip or off-chip memory devices, storage devices, various input/output (I/O) devices, etc. [Paragraph 52], In various illustrative examples, the above described validation of the contents of the secure memory 170 storing the access control data may be performed periodically (e.g., at a certain time interval) or responsive to a certain triggering event (e.g., responsive to receiving, from an initiator device, an access request 540 for access to a target device).)
grant the remote device access to the set of I/O pages in the I/O control structure upon verifying the I/O address range for the remote device. ([Paragraph 36], The access control policy that is implemented by one or more access control units 140 may comprise a plurality of access control rules. In certain implementations, an access control rule may comprise an identifier of the initiator device, an identifier of the target device, a target device address range, access permissions, and/or an access authorization type. An access control rule may further comprise the required security state or level of secure execution required by an initiator to authorize the requested access. The access control policy may further indicate that certain rules are modified when the system is in a debug or higher privilege mode. [Paragraph 25], Target devices may be provided by on-chip or off-chip memory devices, storage devices, various input/output (I/O) devices, etc. [Paragraph 52], In various illustrative examples, the above described validation of the contents of the secure memory 170 storing the access control data may be performed periodically (e.g., at a certain time interval) or responsive to a certain triggering event (e.g., responsive to receiving, from an initiator device, an access request 540 for access to a target device).)
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Lal, wherein an input/output control data structure is created for the TEE, input/output addresses and input/output pages are allocated to the input/output control structure, and entries in the structure are created with device identifiers for remote devices, with the teachings of Hampel, wherein an access control data structure with an address range comprising a host physical address for the TEE is created, initialized, and allocated, and a flag (i.e., a pending bit) is set and utilized to authorize access by the remote device. This combination would enhance the teachings of Lal because, by using the access control data structure, the TEE and its corresponding data can be protected by securing memory addresses and granting or denying access to protected contents.
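For illustration only, and not as part of the record, the claim 1 mechanism as construed above (allocating an I/O address range to an I/O control structure, creating an entry for a set of I/O pages, setting a pending bit to a first value, and granting access after verifying the range) might be sketched as follows. All names, sizes, and the structure layout are hypothetical and chosen solely to model the claim language:

```python
from dataclasses import dataclass, field

@dataclass
class IOControlEntry:
    device_id: str
    hpa_base: int          # host physical address (HPA) at the base of the range
    num_pages: int         # the set of I/O pages covered by this entry
    pending_bit: int = 0   # first value (1) = remote device authorized

@dataclass
class IOControlStructure:
    page_size: int = 4096
    entries: dict = field(default_factory=dict)

    def allocate_range(self, device_id: str, hpa_base: int, num_pages: int) -> None:
        # Allocate an I/O address range and create an entry for a set of I/O pages.
        self.entries[device_id] = IOControlEntry(device_id, hpa_base, num_pages)

    def authorize(self, device_id: str) -> None:
        # Set the pending bit to the first value: remote device is authorized.
        self.entries[device_id].pending_bit = 1

    def grant_access(self, device_id: str, hpa: int) -> bool:
        # Grant access only after verifying the address falls within the
        # allocated I/O address range and the pending bit holds the first value.
        entry = self.entries.get(device_id)
        if entry is None or entry.pending_bit != 1:
            return False
        return entry.hpa_base <= hpa < entry.hpa_base + entry.num_pages * self.page_size

iocs = IOControlStructure()
iocs.allocate_range("remote-nic", hpa_base=0x10000, num_pages=4)
iocs.authorize("remote-nic")
print(iocs.grant_access("remote-nic", 0x11000))  # True: in range and authorized
print(iocs.grant_access("remote-nic", 0x20000))  # False: outside the range
```

The sketch mirrors only the claim wording; it does not purport to describe the disclosure of Lal or Hampel.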
As per claim 2, the rejection of claim 1 is incorporated:
Lal teaches processing circuitry is further to: convert an I/O virtual address (IOVA) to one or more of a guest physical address (GPA) or HPA. ([Paragraph 112], Input/output (I/O) circuitry 450 couples the GPU 439 to one or more I/O devices 452 such as digital signal processors (DSPs), network controllers, or user input devices. An on-chip interconnect may be used to couple the I/O devices 454 to the GPU 439 and memory 449. One or more I/O memory management units (IOMMUs) 451 of the I/O circuitry 450 couple the I/O devices 452 directly to the system memory 449. In one embodiment, the IOMMU 451 manages multiple sets of page tables to map virtual addresses to physical addresses in system memory 449. In this embodiment, the I/O devices 452, CPU(s) 446, and GPU(s) 439 may share the same virtual address space. [Paragraph 113], In one implementation, the IOMMU 451 supports virtualization. In this case, it may manage a first set of page tables to map guest/graphics virtual addresses to guest/graphics physical addresses and a second set of page tables to map the guest/graphics physical addresses to system/host physical addresses (e.g., within system memory 449).)
Hampel also teaches ([Paragraph 26], Alternatively, the access control unit may be implemented by a memory management unit (MMU) configured to enforce access control based on the access control data while translating addresses from one address space into another address space (e.g., virtual addresses to physical addresses). [Paragraph 30], In the illustrative example of FIG. 1, the interconnect 110 is represented by a network-on-chip (NoC), and the access control unit 140 is represented by a firewall configured to enforce an access control policy while transporting data frames and/or electric signals between a plurality of initiator devices and a plurality of target devices. In another illustrative example, the access control unit may be implemented by a memory management unit (MMU) configured to enforce access control based on the access control data comprising one or more address translation rules, while translating virtual addresses to physical memory addresses on various target devices.)
As per claim 3, the rejection of claim 1 is incorporated:
Lal teaches wherein the processing circuitry is further to: create a direct memory access (DMA) buffer in the set of input/output pages; and
program a direct memory access (DMA) circuitry with a source address and a destination address for a direct memory access (DMA) transfer between the device and the trusted execution environment (TEE). ([Paragraph 133], Access to memory 471 and 472 may be facilitated via a memory controller 468. In one embodiment the memory controller 468 includes an internal direct memory access (DMA) controller 469 or can include logic to perform operations that would otherwise be performed by a DMA controller. [Paragraph 112], Input/output (I/O) circuitry 450 couples the GPU 439 to one or more I/O devices 452 such as digital signal processors (DSPs), network controllers, or user input devices. An on-chip interconnect may be used to couple the I/O devices 454 to the GPU 439 and memory 449. One or more I/O memory management units (IOMMUs) 451 of the I/O circuitry 450 couple the I/O devices 452 directly to the system memory 449. In one embodiment, the IOMMU 451 manages multiple sets of page tables to map virtual addresses to physical addresses in system memory 449. In this embodiment, the I/O devices 452, CPU(s) 446, and GPU(s) 439 may share the same virtual address space. [Paragraph 113], In one implementation, the IOMMU 451 supports virtualization. In this case, it may manage a first set of page tables to map guest/graphics virtual addresses to guest/graphics physical addresses and a second set of page tables to map the guest/graphics physical addresses to system/host physical addresses (e.g., within system memory 449). [Paragraph 81], A computing device 200 for secure I/O with an accelerator device includes a processor 220 and an accelerator device 236, such as a field-programmable gate array (FPGA). In use, as described further below, a trusted execution environment (TEE) established by the processor 220 securely communicates data with the accelerator 236. Data may be transferred using memory-mapped I/O (MMIO) transactions or direct memory access (DMA) transactions. 
For example, the TEE may perform an MMIO write transaction that includes encrypted data, and the accelerator 236 decrypts the data and performs the write. As another example, the TEE may perform an MMIO read request transaction, and the accelerator 236 may read the requested data, encrypt the data, and perform an MMIO read response transaction that includes the encrypted data. As yet another example, the TEE may configure the accelerator 236 to perform a DMA operation, and the accelerator 236 performs a memory transfer, performs a cryptographic operation (i.e., encryption or decryption), and forwards the result. [Paragraph 229], FIG. 14 illustrates an operation flow 1400 of integrity protection of RDMA SEND in accordance with implementations of the disclosure. Operation flow 1400 depicts operations of an RDMA transaction among a plurality of different components at a source and a sink. In one implementation, the source refers to the component generating outgoing events and the sink refers to the component receiving incoming events. The source components include a source consumer (consumerSource) 1450 (e.g., consumer such as an application, accelerator, orchestrator, OS.VMM, etc.), source memory (sourceMEM) 1420, and a source NIC (sourceNIC) 1430. The sink components include a sink NIC (sinkNIC) 1440, a sink memory (sinkMEM) 1450, and a sink consumer (consumerSink) 1460 (e.g., consumer such as an application, accelerator, orchestrator, OS.VMM, etc.). [Paragraph 287], In one implementation, the data relocation and command buffer pathing for GPU remoting may operate by creating a manifest that contains the source address and other metadata for each command buffer and data structure that should be relocated from the client to remote server platform. The remote host uses the manifest to allocate memory and transfer the data structures from client to server host. 
The remote host then patches the command buffer entries to point to local host memory addresses allocated in the remote host's allocated memory and then submits it to the accelerator. From the accelerator's point of view, the command buffers and data structures are in local host memory of the accelerator and the accelerator is unaware that the command buffer was originally created and submitted from a different physical host machine.)
As per claim 4, the rejection of claim 3 is incorporated:
Lal teaches wherein the processing circuitry is further to: receive a direct memory access (DMA) transfer comprising encrypted data; and
allow the DMA transfer to access secure memory in the I/O address range for the remote device in response to determining the remote device is authorized to access the input/output (I/O) address range. ( [Paragraph 81], A computing device 200 for secure I/O with an accelerator device includes a processor 220 and an accelerator device 236, such as a field-programmable gate array (FPGA). In use, as described further below, a trusted execution environment (TEE) established by the processor 220 securely communicates data with the accelerator 236. Data may be transferred using memory-mapped I/O (MMIO) transactions or direct memory access (DMA) transactions. For example, the TEE may perform an MMIO write transaction that includes encrypted data, and the accelerator 236 decrypts the data and performs the write. As another example, the TEE may perform an MMIO read request transaction, and the accelerator 236 may read the requested data, encrypt the data, and perform an MMIO read response transaction that includes the encrypted data. As yet another example, the TEE may configure the accelerator 236 to perform a DMA operation, and the accelerator 236 performs a memory transfer, performs a cryptographic operation (i.e., encryption or decryption), and forwards the result. As described further below, the TEE and the accelerator 236 generate authentication tags (ATs) for the transferred data and may use those ATs to validate the transactions. [Paragraph 214], The TEE 1310 may be embodied as a trusted execution environment of the computing environment 1300 that is authenticated and protected from unauthorized access using hardware support of the computing environment 1300. [Paragraph 686], For dynamic assignment, the FSM 5040 can be responsible for managing the page tables. In the case of static assignment, there may be a simpler approach, such as use of range registers configured by FSM 5040 to manage isolation of memory available to each tenant. 
[Paragraph 687], The FSM 5040 checks if the buffer requested falls within the range registers of the tenant. The FSM 5040 proceeds with NIC configuration and RDMA configuration when the access is validated.)
Hampel also teaches ([Paragraph 36], The access control policy that is implemented by one or more access control units 140 may comprise a plurality of access control rules. In certain implementations, an access control rule may comprise an identifier of the initiator device, an identifier of the target device, a target device address range, access permissions, and/or an access authorization type. An access control rule may further comprise the required security state or level of secure execution required by an initiator to authorize the requested access. The access control policy may further indicate that certain rules are modified when the system is in a debug or higher privilege mode. [Paragraph 25], Target devices may be provided by on-chip or off-chip memory devices, storage devices, various input/output (I/O) devices, etc. [Paragraph 52], In various illustrative examples, the above described validation of the contents of the secure memory 170 storing the access control data may be performed periodically (e.g., at a certain time interval) or responsive to a certain triggering event (e.g., responsive to receiving, from an initiator device, an access request 540 for access to a target device). [Paragraph 62], The second secure memory location 170B may be run-time programmable by the programming agent represented by a trusted execution environment (TEE) 150.)
As per claim 6, the rejection of claim 5 is incorporated:
Lal teaches wherein the processing circuitry is further to: retrieve an encryption key identifier for a trusted execution environment (TEE); and assert the encryption key identifier in address bits of the DMA transfer. ([Paragraph 191], Regular mutual attestation protocols setup the communication medium (e.g., link, transport, channel, etc.) between processing elements and RNICs. Standard key exchange setup the encrypted tunnel for data transport. [Paragraph 219], In one implementation, the authentication tag, such as a MAC, is calculated using a key known between application and RNIC (authorized parties) to detect modifications by unauthorized parties… [Paragraph 265], In operation flow 2000, the consumerSource 1410 write data for an RDMA transaction to the buffer 2001. The buffer is posted to the send queue 2002 and read 2003. The sourceNIC 1430 does not store the SALT. The SALT is passed to the consumer instead. The consumerSource 1410 includes the SALT in the Work Request when it posts the Work Request in the Q 2002. The RNIC uses the stored transport key and SALT received from the application to encrypt the payload and calculate the MAC 2004. The sourceNIC 1430 then passes the encrypted and integrity-protected data to the sinkNIC 1440 through an RDMA WRITE 2005, 2006. The RNIC may store the SALT and not pass it to the application in an alternative implementation.)
Hampel also teaches ([Paragraph 37], The access permissions may be specified by a set of flags designating read, write, required security level and/or execute permissions. The access authorization type may specify whether the rule “allows” or “denies” access by the initiator(s) to the target(s). [Paragraph 36], The access control policy that is implemented by one or more access control units 140 may comprise a plurality of access control rules. In certain implementations, an access control rule may comprise an identifier of the initiator device, an identifier of the target device, a target device address range, access permissions, and/or an access authorization type. [Paragraph 39], In accordance with one or more aspects of the present disclosure, the access control unit 140 and the programming agent 150 may share a cryptographic key that may be used for authentication of programming sequences transmitted by the programming agent 150. The cryptographic key may be obtained by the access control unit 140 and the programming agent 150 from an on-chip or off-chip key management system (KMS) (not shown in FIG. 1). In certain implementations, the cryptographic key may be valid for a single use, a single session, or a certain period of time, upon expiration of which a new cryptographic key will need to be generated.)
As per claim 7, the rejection of claim 1 is incorporated:
Lal teaches wherein the processing circuitry is further to: receive a request from the TEE to terminate access by the remote device to the I/O address range; and
set the pending bit to a second value which indicates that the remote device is not authorized to access the I/O address range, wherein the processing circuitry is coupled to a memory, the processing circuitry having one or more application processing circuitry or graphics processing circuitry. ([Paragraph 214], The TEE 1310 may be embodied as a trusted execution environment of the computing environment 1300 that is authenticated and protected from unauthorized access using hardware support of the computing environment 1300. [Paragraph 585], In one example, a simple form of time-based use policy specifies the duration of how long customer is allowed to use the FPGA. The time-based use policy includes a start time and a duration. During this period identified by the start time and duration, the customer may load their bitstreams multiple times if they want. But when the duration expires, the PR tenant should be evicted. The policy manager 4342 enforces this with the help of a trusted time service 4350 inside FPGA 4330. [Paragraph 600], (10) The SDM evicts the FPGA bitstream, clears tenant specific state and also clears the tenant related keys. [Paragraph 689], The FSM 5040 checks if the buffer requested falls within the range registers of the tenant. The FSM 5040 proceeds with NIC configuration and RDMA configuration when the access is validated.)
Hampel teaches set the pending bit to a second value ([Paragraph 36], The access control policy that is implemented by one or more access control units 140 may comprise a plurality of access control rules. In certain implementations, an access control rule may comprise an identifier of the initiator device, an identifier of the target device, a target device address range, access permissions, and/or an access authorization type. An access control rule may further comprise the required security state or level of secure execution required by an initiator to authorize the requested access. [Paragraph 37], The access permissions may be specified by a set of flags designating read, write, required security level and/or execute permissions. The access authorization type may specify whether the rule “allows” or “denies” access by the initiator(s) to the target(s).)
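Purely as an illustrative aid (hypothetical names, not part of the record), the claim 7 limitation as construed above, setting the pending bit to a second value upon a TEE request so that the remote device is no longer authorized, might be sketched as:

```python
# Pending bit per remote device: 1 (first value) = authorized to access the
# I/O address range; 0 (second value) = access terminated.
pending_bits = {"remote-nic": 1}

def terminate_access(device_id: str) -> None:
    # Handle a TEE request to terminate the remote device's access by
    # setting its pending bit to the second value (0).
    pending_bits[device_id] = 0

def is_authorized(device_id: str) -> bool:
    return pending_bits.get(device_id, 0) == 1

print(is_authorized("remote-nic"))  # True: bit holds the first value
terminate_access("remote-nic")
print(is_authorized("remote-nic"))  # False: bit now holds the second value
```

Again, this models the claim wording only, not the particular disclosures of Lal or Hampel.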
As per claims 22-27, these are method claims corresponding to apparatus claims 1-4, 6, and 7; therefore, they are rejected based on a similar rationale.
As per claims 28-33, these are computer-readable medium claims corresponding to apparatus claims 1-4, 6, and 7; therefore, they are rejected based on a similar rationale.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONG U KIM whose telephone number is (571)270-1313. The examiner can normally be reached 9:00am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets, can be reached at (571) 272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DONG U KIM/Primary Examiner, Art Unit 2197