DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. Claims 1–20 are presented for examination in a non-provisional application filed on 12/30/2022.

Drawings

3. The drawings were received on 12/30/2022 (with the filing). These drawings are acceptable.

Claim Rejections - 35 USC § 101 (Computer Medium)

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

4. Claims 15–20 are rejected under 35 U.S.C. 101 because they are directed to non-statutory subject matter and thus do not fall within at least one of the four categories of patent eligible subject matter.

5. As to claims 15–20, they are directed to a “computer readable storage medium comprising a set of instructions.” Under current Office examination procedure, and absent clear definition or exclusion by the Applicant to the contrary, the broadest reasonable interpretation of a computer readable storage medium can encompass non-statutory, transitory forms of signal transmission, such as a propagating electrical or electromagnetic signal per se. See MPEP § 2106.03, Eligibility Step 1: The Four Categories of Statutory Subject Matter. Accordingly, the claimed “computer readable storage medium” is directed to non-statutory subject matter. Applicant is advised to amend this portion of the claim to recite a “non-transitory computer readable storage medium” to overcome the 101 rejection.

Examiner's Remarks

6. The Examiner refers to and explicitly cites particular pages, sections, figures, paragraphs or columns and lines in the references as applied to Applicant's claims to the extent practicable to streamline prosecution. Although the cited portions of the references are representative of the best teachings in the art and are applied to meet the specific limitations of the claims, other uncited but related teachings of the references may be equally applicable as well. It is respectfully requested that, in preparing responses to the rejections, the Applicant fully consider not only the cited portions of the references, but also the references in their entirety, as potentially teaching, suggesting or rendering obvious all or one or more aspects of the claimed invention.

Abbreviations

7. Where appropriate, the following abbreviations will be used when referencing Applicant's submissions and specific teachings of the reference(s):
i. figure / figures: Fig. / Figs.
ii. column / columns: Col. / Cols.
iii. page / pages: p. / pp.

References Cited

8. (A) Makhervaks et al., US 2021/0349841 A1 (“Makhervaks”).
(B) Naven et al., US 2014/0376548 A1 (“Naven”).
(C) Thomas et al., US 10,360,155 B1 (“Thomas”).

Notice re prior art available under both pre-AIA and AIA

9. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

A.

10. Claims 1–2, 5, 9–10, 12, 15–16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over (A) Makhervaks. See “References Cited” section, above, for full citations of references.

11. Regarding claim 1, (A) Makhervaks teaches/suggests the invention substantially as claimed, including:

“A semiconductor apparatus comprising: one or more substrates; and logic coupled to the one or more substrates, wherein the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, the logic to” (¶ 125: logic processor may include one or more physical processors (hardware) configured to execute software instructions. Additionally or alternatively, the logic processor may include one or more hardware logic circuits or firmware devices configured to execute hardware-implemented logic or firmware instructions);

“route, via a switch, application data in a data transfer message between a physical storage device and a host system, the host system to interface with a virtual function of an infrastructure processing unit (IPU), by remapping a transaction identifier field in the data transfer message between a first transaction identifier associated with the virtual function and a second transaction identifier associated with the physical storage device” (¶ 2: server system may further comprise a plurality of solid-state drive (SSD) devices, a local non-volatile memory express virtualization (LNV) device, and a non-transparent (NT) switch for a peripheral component interconnect express (PCIe) bus that interconnects the plurality of SSD devices and the LNV device to the at least one processor of each compute node. The LNV device may be configured to virtualize hardware resources of the plurality of SSD devices and present a virtual SSD device to the host software of the one or more compute nodes; ¶ 25: The LNV device 108 may be configured to manage read/write requests for the locally attached NVMe devices; ¶ 30: The LNV device 302 has NVMe functions or virtual functions 318 and LNV function 320 that are PCIe functions; Fig. 11 and ¶¶ 76–78: FIG. 11 illustrates an example of RID and tag remapping performed by the NT switch … As illustrated, using a RID and TAG remapping table 1012, the NT switch 1110 may be configured to remap the SSD RID used in the request to the LNV RID. Specifically, the NT switch 1110 will remap the RID to the LNV function RID for the PCIe domain of the target host 1102 of that request. For example, if the first SSD device is making a request to host1, then the NT switch 1110 may be configured to remap the SSD1 RID in the request to the LNV F1 RID to route the request to the host1; Fig. 12 and ¶¶ 79–80: FIG. 12 illustrates an example of mapping virtual functions (VF) to physical functions (PF) for the LNV device … In this example, the NT switch 1202 may be configured to map those VFs 1206 to PFs 1208 of the NT switch 1202, and to present those VFs 1206 as the PFs 1208 … In another example, the NT switch 1202 may be further configured to present a subset of the VFs of the LNV device 1200 as VFs 1210 associated with one of the PFs 1208 represented by the NT switch 1202; ¶ 121: methods and processes may be implemented as a computer-application program or service; ¶ 122: Computing system 2000 may take the form of one or more personal computers, server computers, tablet computers, home-entertainment computers, network computing devices, gaming devices, mobile computing devices, mobile communication devices (e.g., smart phone), and/or other computing devices, and wearable computing devices such as smart wristwatches and head mounted augmented reality devices; the Examiner notes: these are examples of “applications”).
“wherein the physical storage device is to be managed by the IPU” (¶ 25: The LNV device 200 is configured to generate NVMe commands 210 and place those commands into the submission queues (SQ) of respective SSDs 202 (e.g., NVMe devices) that are allocated to the LNV device 200 and offloaded onto hardware 208).

“… to route the application data between the host system and the physical storage device” (Fig. 11 and ¶¶ 76–78: FIG. 11 illustrates an example of RID and tag remapping performed by the NT switch … the NT switch 1110 may be configured to remap the SSD1 RID in the request to the LNV F1 RID to route the request to the host1).

Makhervaks does not expressly teach “wherein to route the application data … includes to bypass temporary storage of the application data in a memory local to the IPU.” However, Makhervaks teaches that “[t]he LNV device may generate backend NVMe commands and place those commands as SQEs in the backend NVME SQ of backend memory,” which is separate from (not local to) the LNV (Fig. 14 and ¶ 89: The LNV device 1412 may generate backend NVMe commands and place those commands as SQEs 1422 in the backend NVME SQ 1424 of backend memory 1426; Fig. 1 and ¶ 26: standard NVMe interface defines a set of submission queues (SQ) 112 and a set of completion queues (CQ) 114. New disk read/write requests for the standard NVMe device 106 may be submitted by the standard NVMe storage stack of a VM 102 to a SQ 112. The standard NVMe device 106, whose functions are performed by the LNV 108 and the virtualized locally attached NVMe devices 110, will perform read the request in the SQ 112, execute the request, and report completion of the request to the CQ 114 to inform the standard NVMe storage stack 104 of the VM 102 that the request has been completed). Accordingly, it is inherent in, or would have been obvious to a person of ordinary skill in the art in view of, Makhervaks' teachings that Makhervaks' application data is routed between the host system and the physical storage device in a manner that includes “bypass[ing] temporary storage of the application data in a memory local to the IPU” (in that read requests are placed in the backend memory and not in the LNV).
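As a purely illustrative aid to the remapping mechanism quoted above from Makhervaks (Fig. 11 and ¶¶ 76–78), the following minimal C sketch shows the kind of requester-ID (RID) lookup an NT switch could apply to an upstream request. The structure layout and all names are hypothetical assumptions of the Examiner for illustration only, not drawn from the reference, the claims, or any actual implementation.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical entry of a RID remapping table (cf. Makhervaks' RID and TAG
 * remapping table 1012): pairs a physical SSD requester ID with the
 * virtual-function RID presented in the target host's PCIe domain. */
struct rid_map_entry {
    uint16_t ssd_rid; /* second transaction identifier: physical device RID */
    uint16_t vf_rid;  /* first transaction identifier: virtual function RID */
};

/* Substitute the VF RID for the SSD RID in an upstream request's RID field.
 * Returns 1 on success, 0 if the device has no mapping. */
static int remap_upstream_rid(const struct rid_map_entry *table, size_t n,
                              uint16_t *rid_field)
{
    for (size_t i = 0; i < n; i++) {
        if (table[i].ssd_rid == *rid_field) {
            *rid_field = table[i].vf_rid;
            return 1;
        }
    }
    return 0; /* unmapped requester: do not forward into the host domain */
}
```

On this reading, the “first” and “second” transaction identifiers of claim 1 correspond to the two columns of such a table.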
12. Regarding claim 2, Makhervaks teaches or suggests: “wherein the data transfer message is a WRITE request issued by the physical storage device, and wherein to perform remapping the transaction identifier field in the data transfer message the logic is to:” (¶ 76: When one of the SSD devices 1108 initiates a read/write request to one of the hosts 1102, that request includes a RID of that SSD device); “substitute, in the write request, the first transaction identifier associated with the virtual function in place of the second transaction identifier associated with the physical storage device” (Fig. 11 and ¶¶ 76–78: FIG. 11 illustrates an example of RID and tag remapping performed by the NT switch … As illustrated, using a RID and TAG remapping table 1012, the NT switch 1110 may be configured to remap the SSD RID used in the request to the LNV RID. Specifically, the NT switch 1110 will remap the RID to the LNV function RID for the PCIe domain of the target host 1102 of that request. For example, if the first SSD device is making a request to host1, then the NT switch 1110 may be configured to remap the SSD1 RID in the request to the LNV F1 RID to route the request to the host1).

13. Regarding claim 5, Makhervaks teaches or suggests: “maintain a remapping table to hold the first transaction identifier and the second transaction identifier” (¶ 78: the NT switch 1110 may further keep track of tag remapping using the RID and tag remapping table 1012).
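For claim 5's “maintain a remapping table” limitation, a sketch of how such tracking could work, again with hypothetical names and only loosely modeled on Makhervaks ¶ 78 (where tag0 from different SSDs is remapped to distinct host-side tags), is as follows.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical record pairing both transaction identifiers for one
 * outstanding request, so the switch can translate in both directions. */
struct tag_map_entry {
    uint8_t ssd_tag;  /* tag as issued by the physical SSD (e.g., tag0)  */
    uint8_t host_tag; /* unique tag used in the host domain (e.g., tag1) */
    uint8_t in_use;
};

/* Record a new mapping: allocate a free host-domain tag for an outgoing
 * request and remember the SSD-local tag it replaces. */
static int remap_tag_upstream(struct tag_map_entry *table, size_t n,
                              uint8_t ssd_tag, uint8_t *host_tag_out)
{
    for (size_t i = 0; i < n && i < 256; i++) {
        if (!table[i].in_use) {
            table[i].ssd_tag  = ssd_tag;
            table[i].host_tag = (uint8_t)i; /* index doubles as a unique tag */
            table[i].in_use   = 1;
            *host_tag_out = table[i].host_tag;
            return 1;
        }
    }
    return 0; /* all tags outstanding: hold the request back */
}
```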
14. Regarding claims 9–10 and 12, they are the corresponding system claims reciting similar limitations of commensurate scope as the apparatus of claims 1–2 and 5, respectively. Therefore, they are rejected on the same basis as claims 1–2 and 5 above, including the following rationale: Makhervaks teaches or suggests: “a host system comprising a host processor coupled to a host memory; an infrastructure processing unit (IPU); (LNV device) a plurality of storage devices; and a multi-root (MR) switch coupled to the host system, the IPU and the plurality of storage devices” (NT switch for multiple hosts/domains) (Fig. 9 and ¶ 70: FIG. 9 illustrates a multi-host configuration 900 that shares a same set of LNV device 902 and SSD devices 904 among a plurality of compute nodes for a plurality of hosts 906. Each host may include separate host memory devices 912 … NT switch 908 may be configured to create separate PCI domains for HOST1, HOST2, HOST3, HOST4; Figs. 5 and 6 and ¶ 47: each compute node 500 in the node cluster 502 includes at least one processor 508 communicatively coupled).

15. Regarding claims 15–16 and 18, they are the corresponding computer program product claims reciting similar limitations of commensurate scope as the apparatus of claims 1–2 and 5, respectively. Therefore, they are rejected on the same basis as claims 1–2 and 5 above.

B.

16. Claims 3–4, 6–7, 11, 13, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over (A) Makhervaks, as applied to claims 1, 9, and 15 above, and further in view of (B) Naven.

17. Regarding claim 3, Makhervaks teaches or suggests: “… a data completion issued by the host system, and wherein to perform remapping the transaction identifier field in the data transfer message the logic is to:” (¶ 26: execute the request, and report completion of the request to the CQ 114 to inform the standard NVMe storage stack 104 of the VM 102 that the request has been completed; ¶ 78: NT switch 1110 may further keep track of tag remapping using the RID and tag remapping table 1012. For example, the NT switch 1012 may remap tag0 for a request from the first SSD device to Tag1, remap tag0 for a request from the second SSD device to tag2, and remap tag0 for a request from the third SSD device to tag3. Completion of the read request will also include a corresponding tag that was sent to the host for the request, and the NT switch 1110 may remap those tags back to the local tag of the respective SSD device 1108 using the table 1012); “substitute, in the data completion, the second transaction identifier associated with the physical storage device in place of the first transaction identifier associated with the virtual function” (¶ 26: execute the request, and report completion of the request to the CQ 114 to inform the standard NVMe storage stack 104 of the VM 102 that the request has been completed; ¶ 78: the NT switch 1110 may remap those tags back to the local tag of the respective SSD device 1108 using the table 1012).

Makhervaks does not expressly teach that the data transfer message is a data completion (message) (but see ¶ 26: “execute the request, and report completion of the request,” which strongly suggests this feature). (B) Naven, however, teaches or suggests: “the data transfer message is a data completion (message)” (¶ 79: FIG. 8 schematically illustrates the structure of a standard memory write (or memory read) data packet header 40 …. A PCIe transaction may be made up of a request data packet and one or more corresponding completion data packets; ¶¶ 105 and 106: performing the memory read operation specified in the memory read request data packet 100) and generates a memory read completion data packet 102 …. The memory read completion data packet 102 is then transmitted to the server 1).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of (B) Naven with those of (A) Makhervaks to generate and return a data completion packet in response to receiving and performing a read request. The motivation or advantage to do so is to complete the PCIe transaction/protocol.
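The completion path of claim 3 is the inverse lookup: when the host's completion arrives carrying the host-domain tag, the recorded SSD-local tag is restored and the entry freed (cf. Makhervaks ¶ 78; Naven ¶¶ 79, 105–106 on request/completion pairing). A hedged continuation of the hypothetical sketch above:

```c
/* Using the hypothetical tag_map_entry table from the previous sketch:
 * on a completion from the host, substitute back the SSD-local tag and
 * release the mapping, since the PCIe transaction is now complete. */
static int remap_tag_downstream(struct tag_map_entry *table, size_t n,
                                uint8_t host_tag, uint8_t *ssd_tag_out)
{
    for (size_t i = 0; i < n; i++) {
        if (table[i].in_use && table[i].host_tag == host_tag) {
            *ssd_tag_out = table[i].ssd_tag;
            table[i].in_use = 0;
            return 1;
        }
    }
    return 0; /* completion with no matching outstanding request */
}
```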
18. Regarding claim 4, Makhervaks and Naven teach or suggest: “substitute, in a read request to be issued by the physical storage device, the first transaction identifier associated with the virtual function in place of the second transaction identifier associated with the physical storage device; and route, via the switch, the read request to the host system” (Makhervaks, ¶ 76: When one of the SSD devices 1108 initiates a read/write request to one of the hosts 1102, that request includes a RID of that SSD device; Fig. 11 and ¶¶ 76–78: FIG. 11 illustrates an example of RID and tag remapping performed by the NT switch … As illustrated, using a RID and TAG remapping table 1012, the NT switch 1110 may be configured to remap the SSD RID used in the request to the LNV RID. Specifically, the NT switch 1110 will remap the RID to the LNV function RID for the PCIe domain of the target host 1102 of that request. For example, if the first SSD device is making a request to host1, then the NT switch 1110 may be configured to remap the SSD1 RID in the request to the LNV F1 RID to route the request to the host1); “wherein the data completion is to be issued by the host system responsive to the read request” (Makhervaks — ¶ 26: execute the request, and report completion of the request to the CQ 114 to inform the standard NVMe storage stack 104 of the VM 102 that the request has been completed; ¶ 78: NT switch 1110 may further keep track of tag remapping using the RID and tag remapping table 1012. For example, the NT switch 1012 may remap tag0 for a request from the first SSD device to Tag1, remap tag0 for a request from the second SSD device to tag2, and remap tag0 for a request from the third SSD device to tag3. Completion of the read request will also include a corresponding tag that was sent to the host for the request, and the NT switch 1110 may remap those tags back to the local tag of the respective SSD device 1108 using the table 1012; Naven — ¶ 79: FIG. 8 schematically illustrates the structure of a standard memory write (or memory read) data packet header 40 …. A PCIe transaction may be made up of a request data packet and one or more corresponding completion data packets; ¶¶ 105 and 106: performing the memory read operation specified in the memory read request data packet 100) and generates a memory read completion data packet 102 …. The memory read completion data packet 102 is then transmitted to the server 1).

19. Regarding claim 6, Makhervaks and Naven teach or suggest: “wherein the first transaction identifier includes a virtual requester identifier associated with the virtual function, wherein the second transaction identifier includes a requester identifier for the physical storage device, and wherein the transaction identifier field includes a requester identifier field” (Makhervaks — Fig. 11 and ¶¶ 76–78: FIG. 11 illustrates an example of RID and tag remapping performed by the NT switch … As illustrated, using a RID and TAG remapping table 1012, the NT switch 1110 may be configured to remap the SSD RID used in the request to the LNV RID. Specifically, the NT switch 1110 will remap the RID to the LNV function RID for the PCIe domain of the target host 1102 of that request. For example, if the first SSD device is making a request to host1, then the NT switch 1110 may be configured to remap the SSD1 RID in the request to the LNV F1 RID to route the request to the host1; ¶ 78: NT switch 1110 may further keep track of tag remapping using the RID and tag remapping table 1012. For example, the NT switch 1012 may remap tag0 for a request from the first SSD device to Tag1, remap tag0 for a request from the second SSD device to tag2, and remap tag0 for a request from the third SSD device to tag3; Fig. 12 and ¶¶ 79–80: FIG. 12 illustrates an example of mapping virtual functions (VF) to physical functions (PF) for the LNV device … In this example, the NT switch 1202 may be configured to map those VFs 1206 to PFs 1208 of the NT switch 1202, and to present those VFs 1206 as the PFs 1208 … In another example, the NT switch 1202 may be further configured to present a subset of the VFs of the LNV device 1200 as VFs 1210 associated with one of the PFs 1208 represented by the NT switch 1202; Naven — Fig. 8 and ¶ 79: FIG. 8 schematically illustrates the structure of a standard memory write (or memory read) data packet header 40 …. the header 40 comprises a sixteen bit requester ID field 40a indicating the device that issued the data packet to which header 40a belongs. As described above, the requester ID field of a PCIe data packet comprises a function, device and bus number. The header 40 further comprises an eight bit tag field 40b. A PCIe transaction may be made up of a request data packet and one or more corresponding completion data packets. Each request data packet is associated with a value which is stored in the tag field 40b).
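Naven ¶ 79, just quoted, describes a sixteen-bit requester ID (comprising bus, device, and function numbers) and an eight-bit tag field. As an illustration only, a C sketch of that decomposition using the conventional PCIe bit layout follows; the exact packing and all names are the Examiner's assumptions, not taken from Naven's Fig. 8.

```c
#include <stdint.h>

/* The two transaction-identifier fields discussed for claims 6 and 7:
 * a 16-bit requester ID and an 8-bit tag (cf. Naven ¶ 79, fields 40a/40b). */
struct pcie_transaction_id {
    uint16_t requester_id; /* bus[15:8] | device[7:3] | function[2:0] */
    uint8_t  tag;          /* distinguishes outstanding requests of a requester */
};

static inline uint8_t rid_bus(uint16_t rid)  { return (uint8_t)(rid >> 8); }
static inline uint8_t rid_dev(uint16_t rid)  { return (uint8_t)((rid >> 3) & 0x1f); }
static inline uint8_t rid_func(uint16_t rid) { return (uint8_t)(rid & 0x07); }
```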
20. Regarding claim 7, Makhervaks and Naven teach or suggest: “wherein the first transaction identifier further includes a first tag, wherein the second transaction identifier further includes a second tag, and wherein the transaction identifier field further includes a tag field” (Makhervaks — Fig. 11 and ¶¶ 76–78, and Fig. 12 and ¶¶ 79–80, as applied in rejecting claim 6 above; Naven — Fig. 8 and ¶ 79, as applied in rejecting claim 6 above).

21. Regarding claims 11 and 13, they are the corresponding system claims reciting similar limitations of commensurate scope as the apparatus of claims 4 and 7, respectively. Therefore, they are rejected on the same basis as claims 4 and 7 above.

22. Regarding claims 17 and 19, they are the corresponding computer program product claims reciting similar limitations of commensurate scope as the apparatus of claims 4 and 7, respectively. Therefore, they are rejected on the same basis as claims 4 and 7 above.
C.

23. Claims 8, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over (A) Makhervaks, as applied to claims 1, 9, and 15 above, and further in view of (C) Thomas.

24. Regarding claim 8, Makhervaks teaches or suggests: “transferring stored data, via the switch … while bypassing temporary storage of the stored data in the memory local to the IPU” (see ¶ 2, ¶ 25, and ¶ 30; Fig. 11 and ¶¶ 76–78; Fig. 12 and ¶¶ 79–80; Fig. 14 and ¶ 89; and Fig. 1 and ¶ 26, all as applied in rejecting claim 1 above).

Makhervaks does not teach “to perform data compaction by transferring stored data … between the physical storage device and another physical storage device.” (C) Thomas, in the context of Makhervaks' teachings, however teaches or suggests: “to perform data compaction by transferring stored data … between the physical storage device and another physical storage device” (Col. 7, lines 1–5: allows compaction of data as the data is moved from the fast tier to the slow tier. This reduces write amplification (write delay) in the slow tier layer; Col. 9, lines 33–38: Typically, the slow tier 204 is a larger and less expensive memory (e.g., implemented using TLC technology). For example, the slow tier 204 may be capable of storing more data in one of its memory cells than the fast tier 206 … data from a fast tier block 216 is copied to a slow tier block 218 and data from a fast tier block 220 is copied to a slow tier block 222 …. Thus, data may be compacted as it is copied into the slow tier 204; Col. 10, lines 65–67: the copy operation involves compaction, data from multiple SLC blocks (e.g., the fast tier blocks 216 and 220) could be written to a single TLC block).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of (C) Thomas with those of (A) Makhervaks to compact data when relocating data between different SSD storage devices (tiers). The motivation or advantage to do so is to optimize the space and usage of different storage tiers (types) and to improve data/storage access performance.
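As an illustration of the compaction Thomas describes (Cols. 9–10), where data from multiple fast-tier (e.g., SLC) blocks is written into a single denser slow-tier (e.g., TLC) block, consider this minimal sketch; the block size, names, and back-to-back packing are hypothetical illustrations, not Thomas's implementation.

```c
#include <stdint.h>
#include <string.h>

#define FAST_BLOCK_BYTES 4096u /* hypothetical SLC block size */

/* Copy several fast-tier blocks into one larger slow-tier block, packing
 * them back to back (cf. Thomas Col. 10: data from multiple SLC blocks
 * written to a single TLC block). Returns the number of bytes packed. */
static size_t compact_to_slow_tier(const uint8_t (*fast)[FAST_BLOCK_BYTES],
                                   size_t nfast,
                                   uint8_t *slow, size_t slow_bytes)
{
    size_t off = 0;
    for (size_t i = 0; i < nfast; i++) {
        if (off + FAST_BLOCK_BYTES > slow_bytes)
            break; /* slow-tier block full */
        memcpy(slow + off, fast[i], FAST_BLOCK_BYTES);
        off += FAST_BLOCK_BYTES;
    }
    return off;
}
```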
25. Regarding claim 14, it is the corresponding system claim reciting similar limitations of commensurate scope as the apparatus of claim 8. Therefore, it is rejected on the same basis as claim 8 above.

26. Regarding claim 20, it is the corresponding computer program product claim reciting similar limitations of commensurate scope as the apparatus of claim 8. Therefore, it is rejected on the same basis as claim 8 above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
(a) Saghi et al., US 2014/0281106 A1, teaching direct routing between address spaces through a non-transparent PCIe bridge.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN C WU whose telephone number is (571) 270-5906. The examiner can normally be reached Monday through Friday, 8:30 A.M. to 5:00 P.M.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aimee J. Li, can be reached on (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BENJAMIN C WU/
Primary Examiner, Art Unit 2195
March 28, 2026