Prosecution Insights
Last updated: April 19, 2026
Application No. 18/066,155

Accessing Multiple Physical Partitions of a Hardware Device

Status: Final Rejection (§103)
Filed: Dec 14, 2022
Examiner: CASTANEDA, IVAN ALEXANDER
Art Unit: 2195
Tech Center: 2100 — Computer Architecture & Software
Assignee: ATI Technologies ULC
OA Round: 2 (Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 9m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 67% (2 granted / 3 resolved), above average, +11.7% vs TC avg
Interview Lift: +100.0% (strong) across resolved cases with interview
Avg Prosecution: 3y 9m (typical timeline); 34 applications currently pending
Total Applications: 37 across all art units (career history)

Statute-Specific Performance

§101: 14.7% (-25.3% vs TC avg)
§103: 52.8% (+12.8% vs TC avg)
§102: 6.9% (-33.1% vs TC avg)
§112: 18.6% (-21.4% vs TC avg)

Tech Center averages are estimates • Based on career data from 3 resolved cases

Office Action (§103)
DETAILED ACTION

This Office Action is in response to claims filed on 12/16/2025. Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments, see page 7, filed 12/16/2025, with respect to the claim objection of claim 4 have been fully considered and are persuasive. The objection of 07/02/2025 has been withdrawn. Applicant’s arguments with respect to claims 1, 8, and 15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 8-13, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Maharana et al. Pub. No. US 2021/0311665 A1 (hereinafter Maharana) in view of Duluk, Jr. et al. Pub. No. US 2021/0073125 A1 (hereinafter Duluk).

With regard to claim 1, Maharana teaches a method comprising ([0039], FIG.
6 is a flow diagram of an example method of NVMe direct virtualization in a memory sub-system in accordance with some embodiments of the present disclosure): exposing a physical function of a hardware device on a bus, the physical function corresponding to a physical partition of multiple physical partitions of the hardware device ([0022], In one embodiment, the NVMe virtualization module 113 executes firmware or other logic to provide a number of virtual NVMe controllers in memory sub-system 110. NVMe virtualization module 113 associates each virtual NVME controller with a certain portion of the underlying memory components 112A to 112N, where each portion is addressable by a unique namespace. NVMe virtualization module 113 further assigns a corresponding PCIe physical function to each virtual NVMe controller, causing each virtual NVMe controller to appear as a separately addressable PCIe device (i.e., a physical controller) connected to the PCIe bus between the memory sub-system 110 and the host system 120. Host system 120, including separate virtual machines or partitions running thereon, can thus access each portion of the memory components 112A to 112N represented by a virtual NVMe controller separately and in parallel over the physical host interface (e.g., PCIe bus); [0025] As described above, NVMe virtualization module 113 associates one of physical functions 212-218 with each of virtual NVMe controllers 202-208 in order to allow each virtual NVMe controller 202-208 to appear as a physical controller on PCIe bus 210) …; receiving, via the physical function, a request to perform one or more operations ([0024], Each of virtual NVMe controllers 202-208 manages storage access operations for the corresponding portion of the underlying memory components 112A to 112N, with which it is associated. 
For example, virtual NVMe controller 202 may receive data access requests from host system 120 over PCIe bus 210, including requests to read, write, or erase data in a first portion of memory component 112A. In response to the request, virtual NVMe controller 202 may perform the requested memory access operation on the data stored at an identified address in the first portion and return requested data and/or a confirmation or error message to the host system 120, as appropriate; [0026], As noted above, each physical function 212-218 (Examiner notes: each corresponding to a NVMe controller) can be assigned to any one of virtual machines 232-236 in the host system 120. When I/O data is received at a virtual NVMe controller 202-208 from a virtual machine 232-236 a virtual machine driver provides a guest physical address for a corresponding read/write command); and performing the one or more operations on the physical partition ([0026], NVMe virtualization module 113 translates the physical function number to a bus, device, and function (BDF) number and then adds the command to a direct memory access (DMA) operation to perform the DMA operation on the guest physical address). Maharana teaches a partitioning of a memory into respective portions (Maharana, Abstract). Maharana does not explicitly teach that a physical partition includes one or more compute resources and one or more memory resources. Duluk teaches wherein the physical partition includes one or more compute resources and one or more memory resources (Abstract, A parallel processing unit (PPU) can be divided into partitions. Each partition is configured to operate similarly to how the entire PPU operates. 
A given partition includes a subset of the computational and memory resources associated with the entire PPU; [0077], With any of the above configurations, device driver 122 and hypervisor 124 interoperate in order to subdivide various compute, graphics, and memory resources included in PPU 200 into separate “PPU partitions” … A given PPU partition operates in substantially similar manner to PPU 200 as a whole.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Duluk with the teachings of Maharana in order to provide a method that teaches a physical partitioning comprising compute and memory resources. The motivation for applying Duluk's teaching with Maharana's teaching is to provide a method that allows for the partitioning of computing resources, including memory and processing resources, for consumption by user processes, as combining the known memory resource partitioning system of Maharana with the parallel processing unit partitioning technique of Duluk constitutes the application of a known technique to a known system in order to yield predictable results. Maharana and Duluk are analogous art directed towards the logical partitioning and management of resources. Therefore, it would have been obvious for one of ordinary skill in the art to combine Duluk with Maharana to teach the claimed invention in order to provide a method of dividing processing and memory resources into isolated partitions, thereby improving the utilization of resources with their associated processes.

With regard to claim 2, Maharana teaches the method of claim 1, wherein the bus comprises a peripheral component interconnect express bus (FIG. 2, PCIe Bus 210; [0023], FIG. 2 illustrates an example physical host interface between a host system and a memory sub-system implementing NVMe direct virtualization in accordance with some embodiments of the present disclosure.
In one embodiment, the controller 115 of memory sub-system 110 is connected to host system 120 over a physical host interface, such as PCIe bus 210).

With regard to claim 3, Maharana teaches the method of claim 1, further comprising: exposing an additional physical function of the hardware device on the bus, the additional physical function corresponding to a device management module of the hardware device ([0027], Further, each physical function 212-218 may be implemented in either a privileged mode or normal mode … Typically a first physical function can implement a privileged mode and the remainder of the physical functions can implement a normal mode) that manages the multiple physical partitions of the hardware device ([0027], When implemented in the privileged mode, the physical function has a single point of management that can control resource manipulation and storage provisioning for other functions implemented in the normal mode. In addition, a physical function in the privileged mode can perform management options, including for example, enabling/disabling of multiple physical functions, storage and quality of service provisioning (QoS), firmware and controller updates, vendor unique statistics and events, diagnostics, secure erase/encryption, among others).

With regard to claim 4, Maharana teaches the method of claim 3, further comprising: receiving configuration information corresponding to software, the configuration information indicating resources requested for execution of the software ([0030], The associated virtual NVMe controller 202-208 may appear as a virtual storage resource to each of virtual machines 232, 234, 236 which the guest operating system or guest applications running therein can access; [0037], In one embodiment, controller 115 further includes quality of service (QoS) module 522 and sideband management (SM) bus 524. QoS can implement individual quality of service management for each virtual NVMe controller 202-208.
When a large storage device, such as one of memory components 112A to 112N is sliced into smaller partitions, each controlled by a virtual NVMe controller 202-208, and that each can be used by different clients (e.g., virtual machines on host system 120), it may be beneficial to associate QoS characteristics with each individual partition (Examiner notes: A quality of service is associated with a plurality of sets of requirements and limitations governing an application’s access to system resources, enforced through controls such as time limits and predefined configuration of resource allocations). To meet these requirements, QoS module 522 attaches QoS controls (Examiner notes: QoS controls receives service specification and allocates system resources to ensure software is executed with its defined service level) to each virtual NVMe controller 202-208); and configuring, based on the received configuration information, the physical partition including the indicated resources ([0037], The QoS controls may include, for example, an individual storage partition size, bandwidth, or other characteristics. QoS module 522 may monitor the performance of virtual NVMe controllers 202-208 over time and reconfigure resource assignments as needed to ensure compliance with QoS requirements). With regard to claim 5, Maharana teaches the method of claim 4, the software comprising a software container, an application, or a software stack ([0028], Host system 120 runs multiple virtual machines 232, 234, 236, by executing a software layer 224, often referred to as “hypervisor,” above the hardware and below the virtual machines, as schematically shown in FIG. 2 … One or more applications may be running on each virtual machine under the guest operating system). With regard to claim 8, Maharana teaches a system comprising ([0016], FIG. 
1 illustrates an example computing environment 100 that includes a memory sub-system 110 in accordance with some embodiments of the present disclosure): a bus, the bus configured to expose a physical function ([0022], In one embodiment, the NVMe virtualization module 113 executes firmware or other logic to provide a number of virtual NVMe controllers in memory sub-system 110. NVMe virtualization module 113 associates each virtual NVME controller with a certain portion of the underlying memory components 112A to 112N, where each portion is addressable by a unique namespace. NVMe virtualization module 113 further assigns a corresponding PCIe physical function to each virtual NVMe controller, causing each virtual NVMe controller to appear as a separately addressable PCIe device (i.e., a physical controller) connected to the PCIe bus between the memory sub-system 110 and the host system 120. Host system 120, including separate virtual machines or partitions running thereon, can thus access each portion of the memory components 112A to 112N represented by a virtual NVMe controller separately and in parallel over the physical host interface (e.g., PCIe bus); [0025] As described above, NVMe virtualization module 113 associates one of physical functions 212-218 with each of virtual NVMe controllers 202-208 in order to allow each virtual NVMe controller 202-208 to appear as a physical controller on PCIe bus 210) and the bus configured to receive a request to perform one or more operations ([0024], Each of virtual NVMe controllers 202-208 manages storage access operations for the corresponding portion of the underlying memory components 112A to 112N, with which it is associated. For example, virtual NVMe controller 202 may receive data access requests from host system 120 over PCIe bus 210, including requests to read, write, or erase data in a first portion of memory component 112A. 
In response to the request, virtual NVMe controller 202 may perform the requested memory access operation on the data stored at an identified address in the first portion and return requested data and/or a confirmation or error message to the host system 120, as appropriate; [0026], As noted above, each physical function 212-218 (Examiner notes: each corresponding to a NVMe controller) can be assigned to any one of virtual machines 232-236 in the host system 120. When I/O data is received at a virtual NVMe controller 202-208 from a virtual machine 232-236 a virtual machine driver provides a guest physical address for a corresponding read/write command); and a hardware device, the hardware device including multiple physical partitions ([0023], The virtual NVMe controllers 202-208 are virtual entities that appear as physical controllers to other devices, such as host system 120, connected to PCIe bus by virtue of a physical function 212-218 associated with each virtual NVMe controller 202-208), …, wherein the multiple physical partitions includes a physical partition corresponding to the physical function ([0022], NVMe virtualization module 113 further assigns a corresponding PCIe physical function to each virtual NVMe controller, causing each virtual NVMe controller to appear as a separately addressable PCIe device (i.e., a physical controller) connected to the PCIe bus between the memory subsystem 110 and the host system 120). Maharana teaches a partitioning of a memory into respective portions (Maharana, Abstract). Maharana does not explicitly teach that a physical partition includes one or more compute resources and one or more memory resources.
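The compute-plus-memory partition distinction that Duluk is cited for can be illustrated with a minimal sketch (our own, not drawn from either reference): a device object maps each physical function number to a partition bundling compute resources and memory resources, and routes a request received via a physical function to that partition. All names here are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch (not from the cited references): a physical partition
# bundles compute and memory resources, and each partition is exposed to the
# host through its own physical function number.

@dataclass
class PhysicalPartition:
    physical_function: int   # PCIe physical function exposing this partition
    compute_units: list      # identifiers of compute resources in the partition
    memory_ranges: list      # (base, size) tuples of memory resources

@dataclass
class HardwareDevice:
    partitions: dict = field(default_factory=dict)  # physical function -> partition

    def expose(self, pf: int, compute_units, memory_ranges):
        """Expose a partition on the bus via physical function `pf`."""
        self.partitions[pf] = PhysicalPartition(pf, list(compute_units), list(memory_ranges))
        return self.partitions[pf]

    def handle_request(self, pf: int, operation):
        """Route a request received via physical function `pf` to its partition."""
        return operation(self.partitions[pf])

dev = HardwareDevice()
dev.expose(pf=1, compute_units=["cu0", "cu1"], memory_ranges=[(0x0, 0x1000)])
result = dev.handle_request(1, lambda p: len(p.compute_units))
print(result)  # 2 -- compute units in the partition behind PF 1
```

The point of the sketch is only the claimed pairing: one physical function per partition, each partition holding both compute and memory resources.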
Duluk teaches each of the multiple physical partitions including one or more compute resources and one or more memory resources (A given partition includes a subset of the computational and memory resources associated with the entire PPU; [0077], With any of the above configurations, device driver 122 and hypervisor 124 interoperate in order to subdivide various compute, graphics, and memory resources included in PPU 200 into separate “PPU partitions” … A given PPU partition operates in substantially similar manner to PPU 200 as a whole.) which is substantially similar to claim 1 and therefore rejected with similar rationale. Examiner notes: It would be obvious to one of ordinary skill in the art to recognize that the system of claim 8 is being substantially recited again for the method of claim 1. With regard to claim 9, Maharana teaches the system of claim 8, wherein the bus comprises a peripheral component interconnect express bus (FIG. 2, PCIe Bus 210; [0023], FIG. 2 illustrates an example physical host interface between a host system and a memory sub-system implementing NVMe direct virtualization in accordance with some embodiments of the present disclosure. In one embodiment, the controller 115 of memory sub-system 110 is connected to host system 120 over a physical host interface, such as PCIe bus 210). 
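Maharana's translation of a physical function number to a "bus, device, and function (BDF) number" ([0026]) rests on the conventional PCIe routing-ID layout: 8 bits of bus, 5 bits of device, 3 bits of function. A small illustration (the helper names are our own):

```python
# Conventional PCIe BDF encoding (8-bit bus, 5-bit device, 3-bit function),
# illustrating the "bus, device, and function (BDF) number" referenced in
# Maharana [0026]. Helper names are our own.

def encode_bdf(bus: int, device: int, function: int) -> int:
    assert 0 <= bus < 256 and 0 <= device < 32 and 0 <= function < 8
    return (bus << 8) | (device << 3) | function

def decode_bdf(bdf: int) -> tuple:
    return (bdf >> 8) & 0xFF, (bdf >> 3) & 0x1F, bdf & 0x7

bdf = encode_bdf(bus=1, device=2, function=0)
print(f"{bdf:04x}")    # 0110
print(decode_bdf(bdf))  # (1, 2, 0)
```

Under this layout a device's physical functions differ only in the low 3 bits, which is why a per-function lookup suffices to route a command to its partition.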
With regard to claim 10, Maharana teaches the system of claim 8, further comprising: an additional physical function exposable on the bus ([0027], Further, each physical function 212-218 may be implemented in either a privileged mode or normal mode … Typically a first physical function can implement a privileged mode and the remainder of the physical functions can implement a normal mode); and a device management module, corresponding to the additional physical function, that manages the multiple physical partitions of the device ([0027], When implemented in the privileged mode, the physical function has a single point of management that can control resource manipulation and storage provisioning for other functions implemented in the normal mode. In addition, a physical function in the privileged mode can perform management options, including for example, enabling/disabling of multiple physical functions, storage and quality of service provisioning (QoS), firmware and controller updates, vendor unique statistics and events, diagnostics, secure erase/encryption, among others). With regard to claim 11, Maharana teaches the system of claim 10, wherein the device management module is to: receive, via the bus and the additional physical function, configuration information corresponding to software, the configuration information indicating resources requested for execution of the software ([0030], The associated virtual NVMe controller 202-208 may appear as a virtual storage resource to each of virtual machines 232, 234, 236 which the guest operating system or guest applications running therein can access; [0037], In one embodiment, controller 115 further includes quality of service (QoS) module 522 and sideband management (SM) bus 524. QoS can implement individual quality of service management for each virtual NVMe controller 202-208. 
When a large storage device, such as one of memory components 112A to 112N is sliced into smaller partitions, each controlled by a virtual NVMe controller 202-208, and that each can be used by different clients (e.g., virtual machines on host system 120), it may be beneficial to associate QoS characteristics with each individual partition (Examiner notes: A quality of service is associated with a minimum set of requirements and limitations governing an application’s access to system resources, enforced through controls such as time limits and predefined resource allocations). To meet these requirements, QoS module 522 attaches QoS controls (Examiner note: QoS controls receives service specification and allocates system resources to ensure software is executed with its defined service level) to each virtual NVMe controller 202-208); and configure, based on the received configuration information, the physical partition including the indicated resources ([0037], The QoS controls may include, for example, an individual storage partition size, bandwidth, or other characteristics. QoS module 522 may monitor the performance of virtual NVMe controllers 202-208 over time and reconfigure resource assignments as needed to ensure compliance with QoS requirements).

With regard to claim 12, Maharana teaches the system of claim 11, wherein the configuration information is received from a management application that provides an interface to manage resources in the device ([0040], At operation 610, the processing device provides a plurality of virtual memory controllers, such as virtual NVMe controllers 202-208. The virtual NVMe controllers 202-208 are virtual entities that appear as physical controllers to other devices, such as host system 120, connected to PCIe bus 210 by virtue of a physical function 212-218 associated with each virtual NVMe controller 202-208.
In one embodiment, the virtual memory controllers are created inside controller 115, but may not be used until they are enabled, such as in response to input received from a system administrator via a management interface). With regard to claim 15, Maharana teaches a computing device comprising ([0047], FIG. 7 illustrates an example machine of a computer system 700 within a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein, can be executed): a hardware device, the hardware device including a physical function exposed on the bus to receive, via the bus ([0023], The virtual NVMe controllers 202-208 are virtual entities that appear as physical controllers to other devices, such as host system 120, connected to PCIe bus by virtue of a physical function 212-218 associated with each virtual NVMe controller 202-208), a request to perform one or more operations ([0024], Each of virtual NVMe controllers 202-208 manages storage access operations for the corresponding portion of the underlying memory components 112A to 112N, with which it is associated. For example, virtual NVMe controller 202 may receive data access requests from host system 120 over PCIe bus 210, including requests to read, write, or erase data in a first portion of memory component 112A. 
In response to the request, virtual NVMe controller 202 may perform the requested memory access operation on the data stored at an identified address in the first portion and return requested data and/or a confirmation or error message to the host system 120, as appropriate), and the hardware device further including a physical partition, corresponding to the physical function ([0022], NVMe virtualization module 113 further assigns a corresponding PCIe physical function to each virtual NVMe controller, causing each virtual NVMe controller to appear as a separately addressable PCIe device (i.e., a physical controller) connected to the PCIe bus between the memory subsystem 110 and the host system 120), to perform the one or more operations, wherein the physical partition is one of multiple physical partitions of the hardware device ([0026], As noted above, each physical function 212-218 can be assigned to any one of virtual machines 232-236 in the host system 120. When I/O data is received at a virtual NVMe (Examiner notes: each corresponding to a NVMe controller) controller 202-208 from a virtual machine 232-236, a virtual machine driver provides a guest physical address for a corresponding read/write command. NVMe virtualization module 113 translates the physical function number to a bus, device, and function (BDF) number and then adds the command to a direct memory access (DMA) operation to perform the DMA operation on the guest physical address). Maharana teaches a partitioning of a memory into respective portions (Maharana, Abstract). Maharana does not explicitly teach that a physical partition includes one or more compute resources and one or more memory resources. Duluk teaches and wherein the physical partition includes one or more compute resources and one or more memory resources (Abstract, A parallel processing unit (PPU) can be divided into partitions. Each partition is configured to operate similarly to how the entire PPU operates.
A given partition includes a subset of the computational and memory resources associated with the entire PPU; [0077], With any of the above configurations, device driver 122 and hypervisor 124 interoperate in order to subdivide various compute, graphics, and memory resources included in PPU 200 into separate “PPU partitions” … A given PPU partition operates in substantially similar manner to PPU 200 as a whole.) which is substantially similar to claim 1 and therefore rejected with similar rationale. Examiner notes: It would be obvious to one of ordinary skill in the art to recognize that the device of claim 15 is being substantially recited again for the method of claim 1.

With regard to claim 16, Maharana teaches the computing device of claim 15, wherein the bus comprises a peripheral component interconnect express bus (FIG. 2, PCIe Bus 210; [0023], FIG. 2 illustrates an example physical host interface between a host system and a memory sub-system implementing NVMe direct virtualization in accordance with some embodiments of the present disclosure. In one embodiment, the controller 115 of memory sub-system 110 is connected to host system 120 over a physical host interface, such as PCIe bus 210).

With regard to claim 17, Maharana teaches the computing device of claim 15, further comprising: an additional physical function exposed on the bus ([0027], Further, each physical function 212-218 may be implemented in either a privileged mode or normal mode … Typically a first physical function can implement a privileged mode and the remainder of the physical functions can implement a normal mode); and a device management module, corresponding to the additional physical function, that manages the multiple physical partitions of the hardware device ([0027], When implemented in the privileged mode, the physical function has a single point of management that can control resource manipulation and storage provisioning for other functions implemented in the normal mode.
In addition, a physical function in the privileged mode can perform management options, including for example, enabling/disabling of multiple physical functions, storage and quality of service provisioning (QoS), firmware and controller updates, vendor unique statistics and events, diagnostics, secure erase/encryption, among others). With regard to claim 18, Maharana teaches the computing device of claim 17, wherein the device management module is to: receive, via the bus and the additional physical function, configuration information corresponding to software, the configuration information indicating resources requested for execution of the software ([0030], The associated virtual NVMe controller 202-208 may appear as a virtual storage resource to each of virtual machines 232, 234, 236 which the guest operating system or guest applications running therein can access; [0037], In one embodiment, controller 115 further includes quality of service (QoS) module 522 and sideband management (SM) bus 524. QoS can implement individual quality of service management for each virtual NVMe controller 202-208. When a large storage device, such as one of memory components 112A to 112N is sliced into smaller partitions, each controlled by a virtual NVMe controller 202-208, and that each can be used by different clients (e.g., virtual machines on host system 120), it may be beneficial to associate QoS characteristics with each individual partition (Examiner notes: A quality of service is associated with a minimum set of requirements and limitations governing an application’s access to system resources, enforced through controls such as time limits and predefined resource allocations). 
To meet these requirements, QoS module 522 attaches QoS controls (Examiner note: QoS controls receives service specification and allocates system resources to ensure software is executed with its defined service level) to each virtual NVMe controller 202-208); and configure, based on the received configuration information, the physical partition including the indicated resources ([0037], The QoS controls may include, for example, an individual storage partition size, bandwidth, or other characteristics. QoS module 522 may monitor the performance of virtual NVMe controllers 202-208 over time and reconfigure resource assignments as needed to ensure compliance with QoS requirements).

Claims 6, 13, 19 are rejected under 35 U.S.C. 103 as being unpatentable over Maharana et al. Pub. No. US 2021/0311665 A1 (hereinafter Maharana) in view of Duluk, Jr. et al. Pub. No. US 2021/0073125 A1 (hereinafter Duluk).

With regard to claim 6, Maharana teaches the method of claim 1, further comprising: exposing a … physical function of the hardware device on the bus, the … physical function corresponding to a … physical partition of the multiple physical partitions ([0022], In one embodiment, the NVMe virtualization module 113 executes firmware or other logic to provide a number of virtual NVMe controllers in memory sub-system 110. NVMe virtualization module 113 associates each virtual NVME controller with a certain portion of the underlying memory components 112A to 112N, where each portion is addressable by a unique namespace. NVMe virtualization module 113 further assigns a corresponding PCIe physical function to each virtual NVMe controller, causing each virtual NVMe controller to appear as a separately addressable PCIe device (i.e., a physical controller) connected to the PCIe bus between the memory sub-system 110 and the host system 120.
Host system 120, including separate virtual machines or partitions running thereon, can thus access each portion of the memory components 112A to 112N represented by a virtual NVMe controller separately and in parallel over the physical host interface (e.g., PCIe bus); [0025] As described above, NVMe virtualization module 113 associates one of physical functions 212-218 with each of virtual NVMe controllers 202-208 in order to allow each virtual NVMe controller 202-208 to appear as a physical controller on PCIe bus 210); receiving, via the … physical function, a request to perform at least one operation ([0026], As noted above, each physical function 212-218 can be assigned to any one of virtual machines 232-236 in the host system 120. When I/O data is received at a virtual NVMe controller 202-208 from a virtual machine 232-236 a virtual machine driver provides a guest physical address for a corresponding read/write command); and performing the at least one operation on the … physical partition ([0026], NVMe virtualization module 113 translates the physical function number to a bus, device, and function (BDF) number and then adds the command to a direct memory access (DMA) operation to perform the DMA operation on the guest physical address). However, Maharana does not explicitly teach the limitation of additional physical functions and additional physical partitions. In other embodiments, Maharana teaches additional physical functions … additional physical partitions ([0023], In other embodiments, however, there may be any other number (additional) of NVMe controllers, each having a corresponding physical function; [0058], In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. 
The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the embodiments disclosed by Maharana in order to provide a method that teaches an additional plurality of physical functions corresponding to additional physical partitions receiving and performing requests. The motivation for combining the embodiments disclosed by Maharana is to provide a method that allows for a desired number of physical functions to be provisioned (Maharana, [0035]), enabling greater granularity over partitioning computing resources to meet quality of service requirements (Maharana, [0037]). Maharana is art directed towards device partitioning and inter-process communication. Therefore, it would have been obvious for one of ordinary skill in the art to combine the embodiments of Maharana to teach the claimed invention in order to provide a method that can be performed across additional physical functions corresponding to physical partitions, thereby allowing a desired number of partitions to be provisioned to enable greater resource control.

With regard to claim 13, Maharana teaches the system of claim 8, further comprising: an … physical function exposable on the bus ([0022], In one embodiment, the NVMe virtualization module 113 executes firmware or other logic to provide a number of virtual NVMe controllers in memory sub-system 110. NVMe virtualization module 113 associates each virtual NVME controller with a certain portion of the underlying memory components 112A to 112N, where each portion is addressable by a unique namespace.
NVMe virtualization module 113 further assigns a corresponding PCIe physical function to each virtual NVMe controller, causing each virtual NVMe controller to appear as a separately addressable PCIe device (i.e., a physical controller) connected to the PCIe bus between the memory sub-system 110 and the host system 120. Host system 120, including separate virtual machines or partitions running thereon, can thus access each portion of the memory components 112A to 112N represented by a virtual NVMe controller separately and in parallel over the physical host interface (e.g., PCIe bus); [0025] As described above, NVMe virtualization module 113 associates one of physical functions 212-218 with each of virtual NVMe controllers 202-208 in order to allow each virtual NVMe controller 202-208 to appear as a physical controller on PCIe bus 210) to receive a request to perform at least one operation ([0026], As noted above, each physical function 212-218 can be assigned to any one of virtual machines 232-236 in the host system 120. When I/O data is received at a virtual NVMe controller 202-208 from a virtual machine 232-236 a virtual machine driver provides a guest physical address for a corresponding read/write command); and an … physical partition, corresponding to the … physical function to perform the at least one operation ([0026], NVMe virtualization module 113 translates the physical function number to a bus, device, and function (BDF) number and then adds the command to a direct memory access (DMA) operation to perform the DMA operation on the guest physical address), wherein the … physical partition is one of the multiple physical partitions ([0035], As described above, each virtual NVMe controllers 202-208 appears as a separate physical PCIe device connected to PCIe bus 210 by virtue of each having a separate physical function). However, Maharana does not explicitly teach the limitation of additional physical functions and additional physical partitions. 
In other embodiments, Maharana teaches additional physical functions … additional physical partitions ([0023], In other embodiments, however, there may be any other number of NVMe controllers, each having a corresponding physical function; [0058], In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense), which is substantially similar to claim 6 and therefore rejected with similar rationale. Examiner notes: It would be obvious to one of ordinary skill in the art to recognize that the device of claim 13 is being substantially recited again for the method of claim 6.

With regard to claim 19, Maharana teaches the computing device of claim 15, further comprising: an … physical function exposed on the bus ([0022], In one embodiment, the NVMe virtualization module 113 executes firmware or other logic to provide a number of virtual NVMe controllers in memory sub-system 110. NVMe virtualization module 113 associates each virtual NVMe controller with a certain portion of the underlying memory components 112A to 112N, where each portion is addressable by a unique namespace. NVMe virtualization module 113 further assigns a corresponding PCIe physical function to each virtual NVMe controller, causing each virtual NVMe controller to appear as a separately addressable PCIe device (i.e., a physical controller) connected to the PCIe bus between the memory sub-system 110 and the host system 120.
Host system 120, including separate virtual machines or partitions running thereon, can thus access each portion of the memory components 112A to 112N represented by a virtual NVMe controller separately and in parallel over the physical host interface (e.g., PCIe bus); [0025] As described above, NVMe virtualization module 113 associates one of physical functions 212-218 with each of virtual NVMe controllers 202-208 in order to allow each virtual NVMe controller 202-208 to appear as a physical controller on PCIe bus 210) to receive a request to perform at least one operation ([0026], As noted above, each physical function 212-218 can be assigned to any one of virtual machines 232-236 in the host system 120. When I/O data is received at a virtual NVMe controller 202-208 from a virtual machine 232-236 a virtual machine driver provides a guest physical address for a corresponding read/write command); and an … physical partition, corresponding to the … physical function, to perform the at least one operation ([0026], NVMe virtualization module 113 translates the physical function number to a bus, device, and function (BDF) number and then adds the command to a direct memory access (DMA) operation to perform the DMA operation on the guest physical address), wherein the … physical partition is one of the multiple physical partitions ([0035], As described above, each virtual NVMe controllers 202-208 appears as a separate physical PCIe device connected to PCIe bus 210 by virtue of each having a separate physical function). However, Maharana does not explicitly teach the limitation of additional physical functions and additional physical partitions. 
In other embodiments, Maharana teaches additional physical functions … additional physical partitions ([0023], In other embodiments, however, there may be any other number of NVMe controllers, each having a corresponding physical function; [0058], In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense), which is substantially similar to claim 6 and therefore rejected with similar rationale. Examiner notes: It would be obvious to one of ordinary skill in the art to recognize that the device of claim 19 is being substantially recited again for the method of claim 6.

Claims 7, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Maharana et al. Pub. No. US 2021/0311665 A1 (hereinafter Maharana) in view of Duluk, Jr. et al. Pub. No. US 2021/0073125 A1 (hereinafter Duluk) as applied to claim 1 above, and further in view of Kovacevic Pub. No. US 2020/0409732 A1 (hereinafter Kovacevic).

With regard to claim 7, Maharana teaches the method of claim 1, wherein the request is received from software … of a host rather than via a hypervisor ([0024], Each of virtual NVMe controllers 202-208 manages storage access operations for the corresponding portion of the underlying memory components 112A to 112N, with which it is associated. For example, virtual NVMe controller 202 may receive data access requests from host system 120 over PCIe bus 210, including requests to read, write, or erase data in a first portion of memory component 112A). However, Maharana does not explicitly teach that the request received from software of a host is via a kernel mode driver.
Kovacevic teaches via a kernel mode driver ([0068], The kernel mode 610 also includes a kernel I/O manager 645 that manages the communication between applications and the interfaces provided by device drivers … Kernel Device Drivers receive data from applications, filters the data, and pass it to a lower-level driver that supports device functionality). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Kovacevic with the teachings of Maharana and Duluk in order to provide a method that teaches that a request received by software of a host occurs through the kernel mode driver of the host. The motivation for applying the teachings of Kovacevic with those of Maharana and Duluk is to provide a method that allows for an abstracted interface of the hardware components present on a device to be provided, enabling operating systems and user mode programs access to device components and resources without requiring precise knowledge of the implementation of such components (Kovacevic, [0068]), and that provides protection to the OS from erroneous or malicious code (Kovacevic, [0057]), among other functionality (Kovacevic, [0060]-[0067]). Maharana, Duluk, and Kovacevic are analogous art directed towards I/O management. Therefore, it would have been obvious for one of ordinary skill in the art to combine Kovacevic with Maharana and Duluk to teach the claimed invention in order to provide a method for securely routing host software requests across diverse hardware configurations.

With regard to claim 14, Maharana teaches the system of claim 8, wherein the request is received from software … of a host rather than via a hypervisor ([0024], Each of virtual NVMe controllers 202-208 manages storage access operations for the corresponding portion of the underlying memory components 112A to 112N, with which it is associated.
For example, virtual NVMe controller 202 may receive data access requests from host system 120 over PCIe bus 210, including requests to read, write, or erase data in a first portion of memory component 112A). However, Maharana does not explicitly teach that the request received from software of a host is via a kernel mode driver. Kovacevic teaches via a kernel mode driver ([0068], The kernel mode 610 also includes a kernel I/O manager 645 that manages the communication between applications and the interfaces provided by device drivers … Kernel Device Drivers receive data from applications, filters the data, and pass it to a lower-level driver that supports device functionality), which is substantially similar to claim 7 and therefore rejected with similar rationale. Examiner notes: It would be obvious to one of ordinary skill in the art to recognize that the device of claim 14 is being substantially recited again for the method of claim 7.

With regard to claim 20, Maharana teaches the computing device of claim 15, wherein the request is received from software … of a host rather than via a hypervisor ([0024], Each of virtual NVMe controllers 202-208 manages storage access operations for the corresponding portion of the underlying memory components 112A to 112N, with which it is associated. For example, virtual NVMe controller 202 may receive data access requests from host system 120 over PCIe bus 210, including requests to read, write, or erase data in a first portion of memory component 112A). However, Maharana does not explicitly teach that the request received from software of a host is via a kernel mode driver.
Kovacevic teaches via a kernel mode driver ([0068], The kernel mode 610 also includes a kernel I/O manager 645 that manages the communication between applications and the interfaces provided by device drivers … Kernel Device Drivers receive data from applications, filters the data, and pass it to a lower-level driver that supports device functionality), which is substantially similar to claim 7 and therefore rejected with similar rationale. Examiner notes: It would be obvious to one of ordinary skill in the art to recognize that the device of claim 20 is being substantially recited again for the method of claim 7.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IVAN A CASTANEDA whose telephone number is (571)272-0465. The examiner can normally be reached Monday-Friday 9:30AM-5:30PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aimee Li, can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/I.A.C./ Examiner, Art Unit 2195
/Aimee Li/ Supervisory Patent Examiner, Art Unit 2195
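The partition-access path the rejection attributes to Maharana (each physical partition exposed through its own PCIe physical function; the physical function number translated to a bus/device/function (BDF) number; the operation then serviced as DMA on a guest physical address) can be sketched as follows. This is a hypothetical illustration of the cited mechanism only; the names (`VirtualController`, `pf_to_bdf`, `handle_request`) and the in-memory "DMA" are not taken from the reference.

```python
# Hypothetical sketch of the access path described in the cited Maharana
# reference: each physical partition sits behind its own PCIe physical
# function; a request on that function is translated to a bus/device/
# function (BDF) number and serviced as a DMA operation on a guest
# physical address. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class VirtualController:
    """One virtual NVMe controller bound to one physical function."""
    pf_number: int        # PCIe physical function exposed on the bus
    partition: bytearray  # the portion of memory (partition) it owns

def pf_to_bdf(pf_number: int, bus: int = 0, device: int = 0) -> int:
    """Encode bus/device/function into the standard 16-bit BDF layout
    (8-bit bus, 5-bit device, 3-bit function)."""
    return (bus << 8) | (device << 3) | (pf_number & 0x7)

def handle_request(ctrl: VirtualController, op: str, guest_phys_addr: int,
                   data: bytes = b"", length: int = 4) -> bytes:
    """Service a read/write arriving via the controller's physical function,
    'DMA-ing' into or out of its partition (simulated with slicing)."""
    bdf = pf_to_bdf(ctrl.pf_number)  # route by BDF, as in the reference
    if op == "write":
        ctrl.partition[guest_phys_addr:guest_phys_addr + len(data)] = data
        return b""
    if op == "read":
        return bytes(ctrl.partition[guest_phys_addr:guest_phys_addr + length])
    raise ValueError(f"unsupported op {op!r} on BDF {bdf:#06x}")

# Two partitions, each behind its own physical function: a VM assigned to
# PF 1 reaches only partition 1's bytes, in parallel with PF 0's owner.
ctrl0 = VirtualController(pf_number=0, partition=bytearray(64))
ctrl1 = VirtualController(pf_number=1, partition=bytearray(64))
handle_request(ctrl1, "write", guest_phys_addr=8, data=b"\xde\xad\xbe\xef")
assert handle_request(ctrl1, "read", 8) == b"\xde\xad\xbe\xef"
assert handle_request(ctrl0, "read", 8) == b"\x00" * 4  # PF 0 untouched
```

The per-function isolation at the end is the property the claims turn on: each additional physical function maps one-to-one onto an additional physical partition, so provisioning more functions provisions more independently addressable partitions.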

Prosecution Timeline

Dec 14, 2022
Application Filed
Jun 17, 2025
Non-Final Rejection — §103
Sep 03, 2025
Examiner Interview Summary
Sep 03, 2025
Applicant Interview (Telephonic)
Dec 16, 2025
Response Filed
Feb 25, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585483
MANAGING DEPLOYMENT AND MIGRATION OF VIRTUAL COMPUTING INSTANCES
2y 5m to grant · Granted Mar 24, 2026
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

3-4
Expected OA Rounds
67%
Grant Probability
99%
With Interview (+100.0%)
3y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.
