DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. Claims 1–20 are presented for examination in a non-provisional application filed on 09/27/2023.

Drawings

3. The drawings were received on 09/27/2023 (in the filings). These drawings are acceptable.

Claim Interpretation Under 35 USC § 112

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

4. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C.
112(f):

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in an Office action.

5. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C.
112(f), because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:

a. “guest system,”
b. “virtual device interface,” and
c. “hardware accelerator …,”

recited in claim 1, each configured to or capable of being configured to perform respective claimed functions.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f), it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

6. Claim limitations:

a. “guest system,”
b. “virtual device interface,” and
c. “a hardware accelerator …,”

recited in claim 1, invoke 35 U.S.C. 112(f). However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function.
In this instance, and as filed, the disclosure either is devoid of any STRUCTURE that performs the function in the claims (here, the disclosure simply does not describe or limit the claimed “guest system,” “virtual device interface,” or “hardware accelerator” to a known structure or class of structures (e.g., a CPU) capable of performing the claimed functions recited in claim 1), or, to the extent that a structure is sufficiently disclosed, the structure described in the specification does not perform the entire function in the claim.

7. Therefore, claims 1–12 are indefinite and rejected under 35 U.S.C. 112(b). Applicant may:

(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f);

(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or

(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:

(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C.
132(a)); or

(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function.

For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Examiner’s Remarks

8. Examiner refers to and explicitly cites particular pages, sections, figures, paragraphs, or columns and lines in the references as applied to Applicant’s claims to the extent practicable to streamline prosecution. Although the cited portions of the references are representative of the best teachings in the art and are applied to meet the specific limitations of the claims, other uncited but related teachings of the references may be equally applicable as well. It is respectfully requested that, in preparing responses to the rejections, Applicant fully consider not only the cited portions of the references, but also the references in their entirety, as potentially teaching, suggesting, or rendering obvious all or one or more aspects of the claimed invention.

Abbreviations

9. Where appropriate, the following abbreviations will be used when referencing Applicant’s submissions and specific teachings of the reference(s):

i. figure / figures: Fig. / Figs.
ii. column / columns: Col. / Cols.
iii. page / pages: p. / pp.

References Cited

10.
(A) Dong et al., US 2020/0174819 A1 (“Dong”).
(B) Gong, US 2019/0114197 A1 (“Gong”).
(C) Jeong et al., US 2018/0285138 A1 (“Jeong”).
(D) Cascaval et al., US 2006/0271827 A1 (“Cascaval”).
(E) Altman et al., US 2004/0054517 A1 (“Altman”).

Notice re prior art available under both pre-AIA and AIA

11. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

A.

12. Claims 1–7, 13–14, and 16–19 are rejected under 35 U.S.C. 103 as being unpatentable over (A) Dong in view of (B) Gong and (C) Jeong. See “References Cited” section, above, for full citations of references.

13. Regarding claim 1, (A) Dong teaches/suggests the invention substantially as claimed, including:

“A platform for emulating hardware [[offloading]], the platform, which when executed on a host system, comprises: a guest system running on the host system, the guest system being configured to generate a data processing command” (Fig. 2 and ¶ 27: use of a single physical NVMe device by multiple guest VMs. The example ZCBV-MPT techniques involve executing native NVMe device drivers in guest VMs and initiating direct memory access (DMA) copy operations for performance-critical I/O commands (e.g., data access requests));

“a virtual device interface communicating between the guest system and an [[accelerator]] emulator” (¶ 29: each guest VM 202a, 202b executes a corresponding guest native NVMe driver 214a, 214b.
Also in the illustrated example, the VMM 208 executes an example guest queue manager 216, an example mediator 218, an example shadow queue manager 220 , and an example host native NVMe driver 222. In the illustrated example, the NVMe drivers 214a, 214b, 222 are identified as native because the I/O function calls programmed therein are structured to interface directly with a physical hardware device such as the NVMe device 206 (e.g., directly with firmware of the NVMe device 206)); “ a hardware … emulated by the … emulator for executing the data processing command received through the virtual device interface, wherein the hardware … including an … hardware component and a controller component ” (¶ 30: In the illustrated example, the guest queue manager 216, the mediator 218, and the shadow queue manager 220 implement a virtual NVMe device 224. The NVMe device 224 is identified as virtual because it appears to and interfaces with the guest native NVMe drivers 214a, 214b as if it were physical hardware . As such, when the guest native NVMe drivers 214a, 214b communicate with the NVMe device 224, the guest native NVMe drivers 214a, 214b behave as if they are communicating with physical hardware . However, the NVMe device 224 operates in the context of "knowing" that it is not physical hardware and that it does not directly access physical hardware ( e.g., the NVMe device 206). In some examples, the virtual NVMe device 224 can be implemented using a quick emulator (QEMU) hosted hypervisor to perform hardware virtualization; ¶ 33: An advantage of emulating one or more physical resources using the virtual NVMe device 224 is that the guest native NVMe drivers 214a, 214b in the guest VMs 202a, 202b do not need to be modified to be or operate different from the host native NVMe driver 222). 
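For illustration only, the mediated flow cited above (guest driver writes commands to a guest queue; the mediator parses and translates them into a shadow queue; the host native driver services the shadow queue) may be sketched as follows. This is not Dong's implementation; the class names, command fields, and the address map are hypothetical stand-ins for the guest queues 226, mediator 218, shadow queues 230, and host native NVMe driver 222:

```python
from collections import deque

# Hypothetical guest-physical -> host-physical address map.
GPA_TO_HPA = {0x1000: 0x9000, 0x2000: 0xA000}

class Mediator:
    """Parses guest I/O commands and translates them into shadow-queue
    entries that a host native driver could service (sketch only)."""
    def __init__(self):
        self.guest_queue = deque()   # commands as written by the guest driver
        self.shadow_queue = deque()  # translated commands for the host driver

    def submit_from_guest(self, cmd):
        self.guest_queue.append(cmd)

    def translate(self):
        # Dispatch each guest command into the shadow queue, rewriting its
        # buffer address from a guest-physical to a host-physical address.
        while self.guest_queue:
            cmd = self.guest_queue.popleft()
            self.shadow_queue.append(dict(cmd, buffer=GPA_TO_HPA[cmd["buffer"]]))

def host_driver_service(mediator):
    # The host native driver drains the shadow queue and writes completions.
    completions = []
    while mediator.shadow_queue:
        cmd = mediator.shadow_queue.popleft()
        completions.append({"opcode": cmd["opcode"], "status": "OK"})
    return completions

m = Mediator()
m.submit_from_guest({"opcode": "READ", "buffer": 0x1000, "lba": 42})
m.translate()
print(host_driver_service(m))  # one completion with status "OK"
```

The guest driver remains unmodified in this model: it only ever sees its own queue, which is the advantage Dong's ¶ 33 attributes to the virtual NVMe device.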
Dong does not teach “hardware offloading” and “a hardware accelerator emulated by the accelerator emulator.”

(B) Gong, however, teaches “hardware offloading” and “a hardware ACCELERATOR emulated by the accelerator emulator” (¶ 69: hypervisor 102 may emulate at least one virtual accelerator for each virtual machine; ¶ 70: The hardware 103 may include but is not limited to a Central Processing Unit (CPU) that can provide a special instruction, a SoC chip (System-on-a-Chip), and other hardware devices that may provide acceleration functions, for example, a Graphics Processing Unit (GPU), and an Field Programmable Gate Array (FPGA). Acceleration is to offload some functions in a program to hardware for execution to achieve an effect of shortening a program execution time).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of (B) Gong with those of (A) Dong to provide virtual (emulated) acceleration offload for data access/storage functions. The motivation or advantage to do so is to enable faster data access and shorten request/command response times.

Dong and Gong do not teach a guest system “to receive a data processing command.”

(C) Jeong, however, teaches or suggests a guest system “to receive a data processing command” (Fig. 1 and ¶ 74: In response to an occurrence of a file input/output (I/O) command or receiving the file I/O command from the application 112, the guest operating system 114 requests the virtual machine monitor 140 to execute the file I/O command; ¶ 78: the virtual machine monitor 140 executes the file I/O command. The virtual machine monitor 140 provides the result to the guest operating system 114).
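Gong's emulated-accelerator model (¶¶ 69–70), in which a guest-facing frontend offloads a function to a host-side acceleration routine, can be sketched as below. The backend table and toy operations are hypothetical; in a real system the backend would be a GPU/FPGA routine reached through the hypervisor rather than a Python callable:

```python
# Hypothetical host-side "acceleration" routines the emulator offloads to.
BACKENDS = {
    "compress": lambda data: data[:1] + str(len(data)),   # toy stand-in
    "checksum": lambda data: sum(map(ord, data)) % 256,   # toy 8-bit sum
}

class VirtualAccelerator:
    """Emulated accelerator: a guest-facing frontend API whose work is
    offloaded to a host-side backend function (sketch of Gong's model)."""
    def offload(self, op, data):
        # A real implementation would pass guest-physical addresses to the
        # backend device; here the mapped routine is invoked directly.
        return BACKENDS[op](data)

acc = VirtualAccelerator()
print(acc.offload("checksum", "abc"))  # prints 38 (97+98+99 mod 256)
```

The point of the sketch is only the division of labor: the guest calls a frontend API, and the emulator decides which hardware (or software stand-in) executes the function.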
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of (C) Jeong with those of (A) Dong and (B) Gong to enable hosted applications to issue I/O commands to the guest/VMM. The motivation or advantage to do so is to provide for application request and access of emulated hardware (e.g., the NVMe for fast data storage and transport).

14. Regarding claim 2, (A) Dong teaches or suggests: “the controller component emulates parsing the data processing command from the guest system” (¶ 35: mediator 218 synchronizes the guest queues 226a, 226b and shadow queues 230a, 230b so that the host native NVMe driver 222 can provide commands from the shadow queues 230a, 230b to the NVMe device 206; ¶ 53: The mediator 218 may also perform translations of one or more additional or alternative parameters; ¶ 58: The example mediator 218 (FIG. 2) parses the I/O command (block 606)).

15. Regarding claim 3, Dong and Gong teach or suggest: “the offloading hardware component emulates executing of the data processing command in the hardware accelerator” (Dong, ¶ 38: The shadow queue manager 220 of the illustrated example also operates as a scheduler to schedule when ones of the translated commands in the shadow queues 230a, 230b are to be serviced by the host native NVMe driver 222; ¶ 50: when the shadow queue manager 220 makes a change to the shadow queues 230a, the host native NVMe driver 222 (FIG. 2) propagates or synchronizes the change to the physical queues 231; ¶ 53: mediator 218 may also perform translations of one or more additional or alternative parameters. In the illustrated example, the mediator 218 and the shadow queue manager 220 work together to create shadow queues 230a to submit new translated commands to the NVMe device 206. In the illustrated example of FIG.
4, translated I/O commands (e.g., translated data requests) in the shadow queues 230a are processed by the host native NVMe driver 222 as described above in connection with FIG. 2 to cause the NVMe device 206 of FIG. 2 to perform the DMA operation 233 (e.g., a zero-copy operation) to copy data between the NVMe device 206 and the guest memory buffer 234a of the requesting guest VM 202a ; Gong , ¶ 70, teaching offloading to hardware ). 16. Regarding claim 4 , Dong, Gong, and Jeong teach or suggest: “the hardware accelerator is emulated in a Quick Emulator (QEMU) as a virtual device in communication with the guest system ” ( Dong , ¶ 30: the virtual NVMe device 224 can be implemented using a quick emulator (QEMU) hosted hypervisor to perform hardware virtualization; Gong , ¶ 72: VirtIO backend accelerator (backend device) 301 is a virtual accelerator emulated by a virtual machine emulator (QEMU); Jeong , Fig. 1 and ¶ 66: emulator (quick emulator (QEMU))). 17. Regarding claim 5 , Dong teaches or suggests: “ the host system includes an emulated storage device for emulating storing data processed by the hardware accelerator ” (¶ 30: In the illustrated example, the guest queue manager 216, the mediator 218, and the shadow queue manager 220 implement a virtual NVMe device 224. The NVMe device 224 is identified as virtual because it appears to and interfaces with the guest native NVMe drivers 214a, 214b as if it were physical hardware . As such, when the guest native NVMe drivers 214a, 214b communicate with the NVMe device 224, the guest native NVMe drivers 214a, 214b behave as if they are communicating with physical hardware . However, the NVMe device 224 operates in the context of "knowing" that it is not physical hardware and that it does not directly access physical hardware ( e.g., the NVMe device 206); ¶ 33: emulating one or more physical resources using the virtual NVMe device). 18. 
Regarding claim 6 , Dong and Gong, in combination, teach or suggest: “ the guest system further comprises an emulator library included in the guest system, the emulator library providing an application programming interface (API) to offload a data operation according to the data processing command to the hardware accelerator emulated by the accelerator emulator ” ( Dong , ¶ 29: example, each guest VM 202a, 202b executes a corresponding guest native NVMe driver 214a, 214b . Also in the illustrated example, the VMM 208 executes an example guest queue manager 216, an example mediator 218, an example shadow queue manager 220, and an example host native NVMe driver 222. In the illustrated example, the NVMe drivers 214a, 214b, 222 are identified as native because the I/O function calls programmed therein are structured to interface directly with a physical hardware device such as the NVMe device 206 (e.g., directly with firmware of the NVMe device 206; Gong , ¶ 71: The acceleration abstraction layer (AAL) 202 is mainly configured to provide a universal Application Programming Interface (API) layer for different virtual accelerators … ¶ 99: VNF of the virtual machine invokes a frontend driver API of the virtual accelerator to transfer the information (including the GPA of the to-be-accelerated data, the GPA for storing the acceleration result, and the like) required for performing the acceleration operation to the frontend driver; ¶ 70, teaching offloading to hardware ). 19. Regarding claim 7 , Dong and Gong, in combination, teach or suggest: “ the hardware accelerator is emulated by the accelerator emulator as a non-volatile memory express (NVMe) device ” ( Dong , ¶ 30: In the illustrated example, the guest queue manager 216, the mediator 218, and the shadow queue manager 220 implement a virtual NVMe device 224. The NVMe device 224 is identified as virtual because it appears to and interfaces with the guest native NVMe drivers 214a, 214b as if it were physical hardware . 
As such, when the guest native NVMe drivers 214a, 214b communicate with the NVMe device 224, the guest native NVMe drivers 214a, 214b behave as if they are communicating with physical hardware . However, the NVMe device 224 operates in the context of "knowing" that it is not physical hardware and that it does not directly access physical hardware ( e.g., the NVMe device 206). In some examples, the virtual NVMe device 224 can be implemented using a quick emulator (QEMU) hosted hypervisor to perform hardware virtualization; ¶ 33: An advantage of emulating one or more physical resources using the virtual NVMe device 224 is that the guest native NVMe drivers 214a, 214b in the guest VMs 202a, 202b do not need to be modified to be or operate different from the host native NVMe driver 222); Gong , ¶ 69: hypervisor 102 may emulate at least one virtual accelerator for each virtual machine; ¶ 70: The hardware 103 may include but is not limited to a Central Processing Unit (CPU) that can provide a special instruction, a SoC chip (System-on-a-Chip), and other hardware devices that may provide acceleration functions , for example, a Graphics Processing Unit (GPU), and an Field Programmable Gate Array (FPGA). Acceleration is to offload some functions in a program to hardware for execution to achieve an effect of shortening a program execution time ). “ an emulator library includes a NVMe driver for controlling the NVMe device ” ( Dong , ¶ 29: example, each guest VM 202a, 202b executes a corresponding guest native NVMe driver 214a, 214b . Also in the illustrated example, the VMM 208 executes an example guest queue manager 216, an example mediator 218, an example shadow queue manager 220, and an example host native NVMe driver 222. 
In the illustrated example, the NVMe drivers 214a, 214b, 222 are identified as native because the I/O function calls programmed therein are structured to interface directly with a physical hardware device such as the NVMe device 206 (e.g., directly with firmware of the NVMe device 206; Gong, ¶ 71: The acceleration abstraction layer (AAL) 202 is mainly configured to provide a universal Application Programming Interface (API) layer for different virtual accelerators … ¶ 99: VNF of the virtual machine invokes a frontend driver API of the virtual accelerator to transfer the information (including the GPA of the to-be-accelerated data, the GPA for storing the acceleration result, and the like) required for performing the acceleration operation to the frontend driver).

20. Regarding claim 13 (independent), it is the corresponding method claim reciting similar limitations of commensurate scope as the system of claim 1. Therefore, it is rejected on the same basis as claim 1 above, including the following rationale: (A) Dong in view of (B) Gong and (C) Jeong teaches the claims as follows:

“a guest system running on a host system providing an emulation platform” (Dong, Fig. 2 and ¶ 27: use of a single physical NVMe device by multiple guest VMs. The example ZCBV-MPT techniques involve executing native NVMe device drivers in guest VMs and initiating direct memory access (DMA) copy operations for performance-critical I/O commands (e.g., data access requests));

“the guest system receiving a data processing command” (Jeong, Fig. 1 and ¶ 74: In response to an occurrence of a file input/output (I/O) command or receiving the file I/O command from the application 112, the guest operating system 114 requests the virtual machine monitor 140 to execute the file I/O command; ¶ 78: the virtual machine monitor 140 executes the file I/O command. The virtual machine monitor 140 provides the result to the guest operating system 114).
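Jeong's command path cited above (an application issues a file I/O command; the guest OS requests the VMM to execute it; the VMM returns the result) can be sketched as follows. The class and method names are hypothetical stand-ins for the application 112, guest operating system 114, and virtual machine monitor 140:

```python
class VirtualMachineMonitor:
    """Executes file I/O commands on behalf of the guest OS and hands the
    result back (sketch of Jeong's flow, ¶¶ 74 and 78)."""
    def __init__(self, backing_store):
        self.store = backing_store  # stand-in for host storage

    def execute(self, command):
        if command["op"] == "write":
            self.store[command["path"]] = command["data"]
            return "ok"
        return self.store.get(command["path"])

class GuestOS:
    def __init__(self, vmm):
        self.vmm = vmm

    def file_io(self, command):
        # On receiving a file I/O command from an application, the guest OS
        # requests the VMM to execute it and returns the VMM's result.
        return self.vmm.execute(command)

vmm = VirtualMachineMonitor({})
guest = GuestOS(vmm)
guest.file_io({"op": "write", "path": "/tmp/f", "data": "hello"})
print(guest.file_io({"op": "read", "path": "/tmp/f"}))  # prints "hello"
```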
“ an accelerator emulator emulating the hardware accelerator ” (Dong, ¶ 30: In the illustrated example, the guest queue manager 216, the mediator 218, and the shadow queue manager 220 implement a virtual NVMe device 224. The NVMe device 224 is identified as virtual because it appears to and interfaces with the guest native NVMe drivers 214a, 214b as if it were physical hardware . As such, when the guest native NVMe drivers 214a, 214b communicate with the NVMe device 224, the guest native NVMe drivers 214a, 214b behave as if they are communicating with physical hardware . However, the NVMe device 224 operates in the context of "knowing" that it is not physical hardware and that it does not directly access physical hardware ( e.g., the NVMe device 206). In some examples, the virtual NVMe device 224 can be implemented using a quick emulator (QEMU) hosted hypervisor to perform hardware virtualization; ¶ 33: An advantage of emulating one or more physical resources using the virtual NVMe device 224 is that the guest native NVMe drivers 214a, 214b in the guest VMs 202a, 202b do not need to be modified to be or operate different from the host native NVMe driver 222). ( Gong , ¶ 69: hypervisor 102 may emulate at least one virtual accelerator for each virtual machine; ¶ 70: The hardware 103 may include but is not limited to a Central Processing Unit (CPU) that can provide a special instruction, a SoC chip (System-on-a-Chip), and other hardware devices that may provide acceleration functions , for example, a Graphics Processing Unit (GPU), and an Field Programmable Gate Array (FPGA). Acceleration is to offload some functions in a program to hardware for execution to achieve an effect of shortening a program execution time ). “ the hardware accelerator receiving the data processing command through a virtual device interface ” ( Dong , ¶ 29: each guest VM 202a, 202b executes a corresponding guest native NVMe driver 214a, 214b . 
Also in the illustrated example, the VMM 208 executes an example guest queue manager 216, an example mediator 218, an example shadow queue manager 220 , and an example host native NVMe driver 222. In the illustrated example, the NVMe drivers 214a, 214b, 222 are identified as native because the I/O function calls programmed therein are structured to interface directly with a physical hardware device such as the NVMe device 206 (e.g., directly with firmware of the NVMe device 206)); “ a controller component of the hardware accelerator controlling an offloading hardware component of the hardware accelerator to emulate executing the data processing command by the hardware accelerator ” ( Dong , ¶ 38: The shadow queue manager 220 of the illustrated example also operates as a scheduler to schedule when ones of the translated commands in the shadow queues 230a, 230b are to be serviced by the host native NVMe driver 222 ; ¶ 50: when the shadow queue manager 220 makes a change to the shadow queues 230a, the host native NVMe driver 222 (FIG. 2) propagates or synchronizes the change to the physical queues 231; ¶ 53: mediator 218 may also perform translations of one or more additional or alternative parameters. In the illustrated example, the mediator 218 and the shadow queue manager 220 work together to create shadow queues 230a to submit new translated commands to the NVMe device 206. In the illustrated example of FIG. 4, translated I/O commands (e.g., translated data requests) in the shadow queues 230a are processed by the host native NVMe driver 222 as described above in connection with FIG. 2 to cause the NVMe device 206 of FIG. 2 to perform the DMA operation 233 (e.g., a zero-copy operation) to copy data between the NVMe device 206 and the guest memory buffer 234a of the requesting guest VM 202a; See also — Fig. 5 and ¶¶ 55–56: the mediator 218 dispatches translated commands from the guest queues 226a to the shadow queues 230a. This is shown in the example of FIG. 
5 as the mediator 218 sending Qops notifications 508 to the shadow queue manager 220 … When the host native NVMe driver 222 completes a command, the host native NVMe driver 222 writes the completion to the shadow queues 230a. In this manner, the shadow queue manager 220 sends a DBL notification 514 to the mediator 218 in response to the completion being written to the shadow queue 230a).

21. Regarding claim 14, Dong, Gong, and Jeong teach or suggest: “the accelerator emulator emulating a hardware accelerator in a Quick Emulator (QEMU) as a virtual device in communication with the guest system” (Dong, ¶ 30: the virtual NVMe device 224 can be implemented using a quick emulator (QEMU) hosted hypervisor to perform hardware virtualization; Gong, ¶ 72: VirtIO backend accelerator (backend device) 301 is a virtual accelerator emulated by a virtual machine emulator (QEMU); Jeong, Fig. 1 and ¶ 66: emulator (quick emulator (QEMU))).

22. Regarding claim 16 (independent), it is the corresponding computer program product claim reciting similar limitations of commensurate scope as the method of claim 13. Therefore, it is rejected on the same basis as claim 13 above, including the following rationale: Dong teaches or suggests: “receiving a data processing command from the guest system” (¶ 34: to access data in the NVMe device 206, a guest VM 202a, 202b uses its guest native NVMe driver 214a, 214b to generate an I/O command that includes a data access request (e.g., a read and/or write request)).

23.
Regarding claim 17 , Dong, Gong, and Jeong teach or suggest: “ emulating with the accelerator emulator a hardware accelerator in a Quick Emulator (QEMU) as a virtual device in communication with the guest system ” ( Dong , ¶ 30: the virtual NVMe device 224 can be implemented using a quick emulator (QEMU) hosted hypervisor to perform hardware virtualization; Gong , ¶ 72: VirtIO backend accelerator (backend device) 301 is a virtual accelerator emulated by a virtual machine emulator (QEMU); Jeong , Fig. 1 and ¶ 66: emulator (quick emulator (QEMU))). 24. Regarding claim 18 , Dong teaches or suggests: “ storing data processed by the hardware accelerator in an emulated storage device emulated by the accelerator emulator ” (¶ 30: In the illustrated example, the guest queue manager 216, the mediator 218, and the shadow queue manager 220 implement a virtual NVMe device 224. The NVMe device 224 is identified as virtual because it appears to and interfaces with the guest native NVMe drivers 214a, 214b as if it were physical hardware . As such, when the guest native NVMe drivers 214a, 214b communicate with the NVMe device 224, the guest native NVMe drivers 214a, 214b behave as if they are communicating with physical hardware . However, the NVMe device 224 operates in the context of "knowing" that it is not physical hardware and that it does not directly access physical hardware ( e.g., the NVMe device 206); ¶ 33: emulating one or more physical resources using the virtual NVMe device). 25. Regarding claim 19 , Dong and Gong, in combination, teach or suggest: “ implementing with the accelerator emulator the hardware accelerator as a non-volatile memory express (NVMe) device ” ( Dong , ¶ 30: In the illustrated example, the guest queue manager 216, the mediator 218, and the shadow queue manager 220 implement a virtual NVMe device 224. 
The NVMe device 224 is identified as virtual because it appears to and interfaces with the guest native NVMe drivers 214a, 214b as if it were physical hardware . As such, when the guest native NVMe drivers 214a, 214b communicate with the NVMe device 224, the guest native NVMe drivers 214a, 214b behave as if they are communicating with physical hardware . However, the NVMe device 224 operates in the context of "knowing" that it is not physical hardware and that it does not directly access physical hardware ( e.g., the NVMe device 206). In some examples, the virtual NVMe device 224 can be implemented using a quick emulator (QEMU) hosted hypervisor to perform hardware virtualization; ¶ 33: An advantage of emulating one or more physical resources using the virtual NVMe device 224 is that the guest native NVMe drivers 214a, 214b in the guest VMs 202a, 202b do not need to be modified to be or operate different from the host native NVMe driver 222); Gong , ¶ 69: hypervisor 102 may emulate at least one virtual accelerator for each virtual machine; ¶ 70: The hardware 103 may include but is not limited to a Central Processing Unit (CPU) that can provide a special instruction, a SoC chip (System-on-a-Chip), and other hardware devices that may provide acceleration functions , for example, a Graphics Processing Unit (GPU), and an Field Programmable Gate Array (FPGA). Acceleration is to offload some functions in a program to hardware for execution to achieve an effect of shortening a program execution time ). “ providing a NVMe driver in an emulator library of the guest system for controlling the NVMe device to execute the data processing command ” ( Dong , ¶ 29: example, each guest VM 202a, 202b executes a corresponding guest native NVMe driver 214a, 214b . Also in the illustrated example, the VMM 208 executes an example guest queue manager 216, an example mediator 218, an example shadow queue manager 220, and an example host native NVMe driver 222. 
In the illustrated example, the NVMe drivers 214a, 214b, 222 are identified as native because the I/O function calls programmed therein are structured to interface directly with a physical hardware device such as the NVMe device 206 (e.g., directly with firmware of the NVMe device 206); Gong, ¶ 71: The acceleration abstraction layer (AAL) 202 is mainly configured to provide a universal Application Programming Interface (API) layer for different virtual accelerators … ¶ 99: VNF of the virtual machine invokes a frontend driver API of the virtual accelerator to transfer the information (including the GPA of the to-be-accelerated data, the GPA for storing the acceleration result, and the like) required for performing the acceleration operation to the frontend driver). B. 26. Claims 8–10, 15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over (A) Dong in view of (B) Gong and (C) Jeong, as applied to claims 1, 13, and 16 above, and further in view of (D) Cascaval. 27. Regarding claim 8, Dong teaches or suggests: “the guest system” (Fig. 2 and ¶ 27: example guest VMs (shown as guest VM-A 202a and guest VM-B 202b); ¶ 18: Virtualization technologies involve a single physical platform hosting multiple guest virtual machines (VMs). To allocate use of hardware resources (e.g., central processing units (CPUs), network interface cards (NICs), storage, memory, graphics processing units (GPUs), etc.), a number of virtualization techniques were developed that enable virtualizing such physical hardware resources to allocatable virtual resources. For example, a single physical CPU could be allocated as multiple virtual CPUs to different VMs. Each VM identifies corresponding virtual CPU(s) as its own CPU(s), but in actuality each VM is using only a portion of the same underlying physical CPU that is also used by other VMs).
Dong, Gong, and Jeong do not teach “a profiler acquiring performance statistics from the guest system.” (D) Cascaval, in the context of Dong, Gong, and Jeong’s teachings, however, teaches or suggests implementing: “a profiler acquiring performance statistics from the guest system” (¶ 25: an API for integrated performance event monitoring across the execution layers of a computer system. The API is an interface implemented by the underlying performance monitoring infrastructure that provides a protocol for the cooperation between two types of monitoring clients: (1) event producers that generate monitoring information, and (2) event consumers that process and regulate the information that is monitored; ¶ 26: An event producer is an execution layer that emits performance events to the monitoring infrastructure through the API; ¶ 62: monitoring consumer can register a set of events in a particular context at the statistic level of detail. Registration returns a statistics handle, allocates the necessary data structures needed to compute a statistic on any event in the set, and informs event notification about this statistic handle; ¶ 70: After a handle has been enabled, the handle's internal data structure is read through this operation). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of (D) Cascaval with those of (A) Dong, (B) Gong, and (C) Jeong to provide for performance monitoring of the guest virtual machine and/or applications. The motivation or advantage to do so is to allow for performance (and resource) tuning of the host machine. 28.
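For illustration only (this sketch is not evidence of record, and all identifiers in it are hypothetical), the producer/consumer monitoring pattern attributed to Cascaval above can be summarized as: a consumer registers a statistic over a set of events and receives a handle, enables the handle, and later reads the handle's accumulated value, while producers emit events that update every enabled statistic registered for them:

```python
# Hypothetical sketch of a Cascaval-style event-monitoring API:
# producers emit events; consumers register statistics over event
# sets, enable a handle, and read the accumulated value.

class Monitor:
    def __init__(self):
        self._handles = {}   # handle id -> {events, fn, value, enabled}
        self._next_id = 0

    def register_statistic(self, events, fn, initial):
        """Consumer side: register a statistic over a set of event
        names. Returns a handle; the handle starts disabled."""
        hid = self._next_id
        self._next_id += 1
        self._handles[hid] = {"events": set(events), "fn": fn,
                              "value": initial, "enabled": False}
        return hid

    def enable(self, hid):
        self._handles[hid]["enabled"] = True

    def notify(self, event, attrs=None):
        """Producer side: an execution layer emits an event; every
        enabled statistic registered for it is updated."""
        for h in self._handles.values():
            if h["enabled"] and event in h["events"]:
                h["value"] = h["fn"](h["value"], attrs)

    def read(self, hid):
        """Read the handle's internal accumulated value."""
        return self._handles[hid]["value"]


# Usage: count page-fault events attributed to a guest.
mon = Monitor()
h = mon.register_statistic({"page_fault"}, lambda v, a: v + 1, 0)
mon.enable(h)
for _ in range(3):
    mon.notify("page_fault")
mon.notify("cache_miss")   # no statistic registered for it; ignored
print(mon.read(h))         # 3
```

The sketch only illustrates the registration/enable/read protocol quoted from ¶¶ 25, 62, and 70; it does not purport to reproduce Cascaval's actual implementation.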
Regarding claim 9 , Cascaval teaches or suggests: “ wherein the profiler runs with a provided user application or an emulator library to collect the performance statistics ” (¶ 25: an API for integrated performance event monitoring across the execution layers of a computer system ; ¶ 26, ¶ 62, and ¶ 70, as applied in rejecting claim 8 above; Fig. 1 and ¶ 20: Each of the layers in an execution stack will generate multiple events during its execution; ¶ 30: when tracking performance events from the operating system (e.g., page faults), the tool may only be interested in those events attributed to the application thread on which the tool is focusing). 29. Regarding claim 10 , Cascaval teaches or suggests: “ wherein the profiler or an emulator library includes a profiler library that provides an API for hooks or callbacks of performance statistics to the profiler ” (¶ 35: An event callback is a routine that, through the API, can be installed to be invoked in response to the occurrence of specific events or event statistics ; ¶ 49: Event notification signals to the monitoring infrastructure that an event has occurred and provides a mechanism to pass specific event attributes to the monitoring infrastructure; ¶ 51: If a consumer has registered an event statistics for this event in the current event context, and if the statistic has been enabled, then the statistics is updated by applying the statistics function to the current event. Finally, if a consumer has registered an event callback for this event in the current event context, and if the callback has been enabled then the callback function will be invoked). 30. Regarding claim 15 , it is the corresponding method claim reciting similar limitations of commensurate scope as the system of claim 8 . Therefore, it is rejected on the same basis as claim 8 above. 31. Regarding claim 20 , it is the corresponding computer program product claim reciting similar limitations of commensurate scope as the system of claim 8 . 
Therefore, it is rejected on the same basis as claim 8 above. C. 32. Claims 11–12 are rejected under 35 U.S.C. 103 as being unpatentable over (A) Dong in view of (B) Gong, (C) Jeong and (D) Cascaval , as applied to claim 8 above, and further in view of (E) Altman . 33. Regarding claim 11 , Dong and Gong teach or suggest: “ wherein the accelerator emulator is configured to emulate the hardware accelerator ” ( Dong , ¶ 30: In the illustrated example, the guest queue manager 216, the mediator 218, and the shadow queue manager 220 implement a virtual NVMe device 224. The NVMe device 224 is identified as virtual because it appears to and interfaces with the guest native NVMe drivers 214a, 214b as if it were physical hardware . As such, when the guest native NVMe drivers 214a, 214b communicate with the NVMe device 224, the guest native NVMe drivers 214a, 214b behave as if they are communicating with physical hardware . However, the NVMe device 224 operates in the context of "knowing" that it is not physical hardware and that it does not directly access physical hardware ( e.g., the NVMe device 206). In some examples, the virtual NVMe device 224 can be implemented using a quick emulator (QEMU) hosted hypervisor to perform hardware virtualization; ¶ 33: An advantage of emulating one or more physical resources using the virtual NVMe device 224 is that the guest native NVMe drivers 214a, 214b in the guest VMs 202a, 202b do not need to be modified to be or operate different from the host native NVMe driver 222); Gong , ¶ 69: hypervisor 102 may emulate at least one virtual accelerator for each virtual machine; ¶ 70: The hardware 103 may include but is not limited to a Central Processing Unit (CPU) that can provide a special instruction, a SoC chip (System-on-a-Chip), and other hardware devices that may provide acceleration functions , for example, a Graphics Processing Unit (GPU), and an Field Programmable Gate Array (FPGA). 
Acceleration is to offload some functions in a program to hardware for execution to achieve an effect of shortening a program execution time); and “threads … provided by one or more CPU cores” (Dong, ¶ 37: mediator 218 uses dedicated CPU cores/threads to poll). Dong, Gong, Jeong, and Cascaval do not teach the following limitation; (E) Altman, however, teaches or suggests: “emulate … on a thread pool” (¶ 16: means for improving performance of emulation by partitioning emulation tasks into larger number of threads; ¶ 17: a thread pool for holding threads, a thread processor for accessing a memory of the host system, and for determining which thread in the thread pool to select for emulation; Fig. 5 and ¶ 53: the thread processor (engine) 510 decides which thread in the thread pool 530 to select for emulation, and thereby processes (schedules) the threads held in the thread pool 530; ¶ 66: Each processor 710A, 710B, 710C, 710D, etc. along with its resources is emulated, respectively, as a thread 720A, 720B, 720C, 720D, etc.). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of (E) Altman with those of (A) Dong, (B) Gong, (C) Jeong, and (D) Cascaval to emulate virtual hardware (devices) using thread pools. The motivation or advantage to do so is to allow for the dynamic creation and emulation (virtualization) of processor resources using freely allocatable processor threads. 34.
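For illustration only (this sketch is not evidence of record, and all identifiers in it are hypothetical), emulating several virtual devices as tasks scheduled onto a shared host thread pool, in the spirit of Altman's thread processor selecting emulation threads from a pool, can be summarized as:

```python
# Hypothetical sketch: emulate several virtual accelerators as tasks
# scheduled onto a shared host thread pool, echoing Altman's thread
# processor selecting emulation threads from a pool (¶¶ 16-17, 53).
from concurrent.futures import ThreadPoolExecutor

def emulate_accelerator(dev_id, commands):
    """Stand-in for one emulated device: drain its command queue and
    return (device id, results)."""
    results = [cmd * 2 for cmd in commands]   # trivial "acceleration"
    return dev_id, results

# Four emulated devices sharing a pool of two host worker threads.
queues = {dev: list(range(dev, dev + 3)) for dev in range(4)}
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(emulate_accelerator, dev, cmds)
               for dev, cmds in queues.items()]
    results = dict(f.result() for f in futures)

print(results[0])   # [0, 2, 4]
```

The point of the pattern is that the number of emulated devices is decoupled from the number of host threads: the pool freely allocates worker threads to whichever emulation task is runnable.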
Regarding claim 12, Dong, Gong, and Altman teach or suggest: “wherein the accelerator emulator is configured to emulate a plurality of hardware accelerators operated in parallel on the thread pool” (Dong, ¶ 17: virtualization of nonvolatile memory express (NVMe) devices … service multiple I/O requests using parallel I/O processing; Gong, ¶ 73: A hypervisor 403 emulates a plurality of virtual accelerators, for example, a virtual accelerator 1-4031, a virtual accelerator 2-4032, a virtual accelerator 3-4033, and a virtual accelerator 4-4034; Altman, ¶ 94: partition the tasks of emulation further into independent parallel threads that can be exploited even better by a host multiprocessing system). Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN C WU whose telephone number is (571)270-5906. The examiner can normally be reached Monday through Friday, 8:30 A.M. to 5:00 P.M. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee J. Li, can be reached on (571)272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /BENJAMIN C WU/ Primary Examiner, Art Unit 2195 March 19, 2026