DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are pending in this application.
Response to Arguments
Applicant’s arguments regarding the rejections of claims 2, 8, and 16 under 35 U.S.C. 112(a) have been fully considered and are persuasive. The rejections have been withdrawn.
Applicant’s arguments regarding the rejections of claims 1-20 under 35 U.S.C. 112(b) have been fully considered and are persuasive. The rejections have been withdrawn. However, new 35 U.S.C. 112(b) rejections are applied to claims 1-20 based on the amendments.
Applicant's arguments regarding the 35 U.S.C. 101 rejections of claims 1-20 have been fully considered but they are not persuasive.
Regarding the 35 U.S.C. 101 rejection, the applicant argues the following in the remarks:
(a) Applicants point to the following language (with certain portions emphasized in the reproduction below) in amended claims 1, 7, and 15, indicating that claims 1, 7, and 15 are not directed to a mental process and cannot be performed in the mind: receiving a request message from a memory manager executing on a processor, the request message including a job descriptor that specifies a memory operation to be performed and an indication of one or more addresses; in response to an execution pipeline of a plurality of execution pipelines being designated to process commands for the one or more addresses of the job descriptor, selecting the execution pipeline; processing the one or more commands by the execution pipeline associated with the virtual memory manager (VMM).
(b) In addition to the above, claims 1, 7, and 15 have been amended to recite "in response to an execution pipeline of a plurality of execution pipelines being designated to process incoming commands for the one or more addresses, selecting the execution pipeline, wherein each execution pipeline of the plurality of execution pipelines is designated to process job descriptors for different address ranges" and "translating the job descriptor into one or more commands for the job descriptor for transmission to a virtual memory manager for the execution pipeline" which are not mental processes.
Examiner has thoroughly considered Applicant’s arguments, but respectfully finds them unpersuasive or moot for at least the following reasons:
As to the emphasized portions in point (a), the examiner submits that the recitation of a memory manager executing on a processor merely applies the judicial exceptions using a generic computing component, which neither integrates the judicial exceptions into a practical application nor amounts to significantly more. The limitation “an execution pipeline of a plurality of execution pipelines being designated to process commands for the one or more addresses of the job descriptor” merely recites an attribute of the technological environment that neither integrates the judicial exceptions into a practical application nor amounts to significantly more. Processing the one or more commands is a mental process since it can be performed by mentally aggregating commands together or mentally separating the one or more commands into packets. The limitation “by the execution pipeline associated with the virtual memory manager (VMM)” merely recites generic computing components.
As to point (b), the examiner argues that “in response to an execution pipeline of a plurality of execution pipelines being designated to process incoming commands for the one or more addresses” and “wherein each execution pipeline of the plurality of execution pipelines is designated to process job descriptors for different address ranges” are attributes of the technological environment, “for transmission to a virtual memory manager for the execution pipeline” is an intended use limitation, and “selecting the execution pipeline” and “translating the job descriptor into one or more commands” are mental processes. The limitation “selecting the execution pipeline” is a mental process since a human can mentally choose an execution pipeline. The limitation “translating the job descriptor into one or more commands” is a mental process since, for example, a human can mentally translate a job into assembly code.
Applicant's arguments regarding the 35 U.S.C. 103 rejections of claims 1-20 have been fully considered but they are not persuasive.
Regarding the 35 U.S.C. 103 rejection, the applicant argues the following in the remarks:
(a) Guo and Waterman fail to teach wherein each execution pipeline of the plurality of execution pipelines is designated to process job descriptors for a different address range.
Examiner has thoroughly considered Applicant's arguments, but respectfully finds them unpersuasive for at least the following reasons:
As to point (a), the examiner respectfully disagrees. Waterman recites in [0026] “The integrated circuit 110 includes a first store unit 140 configured to write data to the memory system via the L1 cache 150, and a second store unit 142 configured to bypass the L1 cache 150 and write data to the memory system via the L2 cache 152” and in [0061] “the first store unit may be a load/store pipeline and the second store unit may be a store-only pipeline”. Waterman thus discloses that a load/store pipeline performs write jobs via the L1 cache, whereas a store-only pipeline performs write jobs via the L2 cache.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
As per claims 1, 7, and 15 (line numbers refer to claim 1):
Lines 6-9 recite "in response to an execution pipeline of a plurality of execution pipelines being designated to process incoming commands for the one or more addresses, selecting the execution pipeline, wherein each execution pipeline of the plurality of execution pipelines is designated to process job descriptors". The limitation recites that an execution pipeline is designated to process incoming commands but also recites that an execution pipeline is designated to process job descriptors. It is therefore unclear whether the recited "incoming commands" are the same as the recited "job descriptors".
Line 9 recites "process job descriptors" and lines 3-4 recite "receiving a request message from a memory manager executing on a processor, the request message including a job descriptor", so it is unclear how multiple job descriptors can be processed when only one job descriptor is received.
Line 10 recites "the job descriptor", line 9 recites "job descriptors", and line 4 recites "a job descriptor", so it is unclear whether "the job descriptor" refers to one of the job descriptors or refers back to "a job descriptor" in line 4.
Claims 2-6, 8-14, and 16-20 depend from claims 1, 7, and 15, respectively, and fail to resolve the deficiencies of claims 1, 7, and 15, so they are rejected for the same reasons.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (abstract idea) without significantly more.
As per claim 1, in step 1 of the 101 analysis, the examiner has determined that the claim is directed to a method. Therefore, the claim is directed to one of the four statutory categories of invention.
The limitations “for managing diversified virtual memory” and “for transmission to a virtual memory manager for the execution pipeline” are intended use limitations that are given no patentable weight.
In step 2A prong 1 of the 101 analysis, the examiner has determined that the claim recites a judicial exception. Specifically, the limitations “selecting the execution pipeline”, “translating the job descriptor into one or more commands for the job descriptor”, and “processing the one or more commands” are mental processes. Selecting the execution pipeline is a mental process since humans can use their judgment to select an execution pipeline. Humans can translate the job descriptor into the one or more commands for the job descriptor by mentally translating the job descriptor into a series of commands. Processing the one or more commands is a mental process since humans can aggregate commands into a package or separate the one or more commands into separate packages.
In step 2A prong 2 of the 101 analysis, the examiner has determined that the additional elements, alone or in combination, do not integrate the judicial exceptions into a practical application for the following rationale:
The limitation "receiving a request message from a memory manager" represents an insignificant, extra-solution activity. The term "extra-solution activity" can be understood as "activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim" (MPEP 2106.05(g)). The examiner has determined that the limitation "receiving a request message from a memory manager" is directed to a mere data gathering activity, which is a category of insignificant extra-solution activities (MPEP 2106.05(g)).
The limitations “the request message including a job descriptor that specifies a memory operation to be performed and an indication of one or more addresses”, “in response to an execution pipeline of a plurality of execution pipelines being designated to process incoming commands for the one or more addresses”, and “wherein each execution pipeline of the plurality of execution pipelines is designated to process job descriptors for a different address range” merely describe attributes of the technological environment in which the abstract idea operates. The courts have identified that generally linking the use of a judicial exception to a technological environment does not integrate the judicial exception into a practical application (MPEP 2106.04(d)(I)).
The limitations "executing on a processor" and "by the execution pipeline associated with the virtual memory manager (VMM)" apply judicial exceptions on a generic computer. "Alappat's rationale that an otherwise ineligible algorithm or software could be made patent-eligible by merely adding a generic computer to the claim was superseded by the Supreme Court's Bilski and Alice Corp. decisions" (MPEP 2106.05(b)); therefore, applying the judicial exceptions using a processor, an execution pipeline, and a VMM, which are generic computing components, does not integrate the judicial exceptions into a practical application.
In step 2B of the 101 analysis, the examiner has determined that the additional elements, alone or in combination, do not recite significantly more than the abstract ideas identified above for the following rationale:
The limitation "receiving a request message from a memory manager" represents an insignificant, extra-solution activity. The limitation is well-understood, routine, or conventional because it is directed to "receiving or transmitting data", an additional element that the courts have recognized as well-understood, routine, or conventional (MPEP 2106.05(d)). The citation of court cases in the MPEP meets the Berkheimer evidentiary burden, since citation of a court case in the MPEP is one of the four types of evidentiary support that can be used to show that the additional elements are well-understood, routine, or conventional (see Berkheimer v. HP, Inc., 125 USPQ2d 1649). Thus, the limitation does not amount to significantly more than the abstract idea.
The limitations “the request message including a job descriptor that specifies a memory operation to be performed and an indication of one or more addresses”, “in response to an execution pipeline of a plurality of execution pipelines being designated to process incoming commands for the one or more addresses”, and “wherein each execution pipeline of the plurality of execution pipelines is designated to process job descriptors for a different address range” merely describe attributes of the technological environment and therefore do not amount to significantly more than the exception itself (MPEP 2106.05(h)).
The limitations "executing on a processor" and "by the execution pipeline associated with the virtual memory manager (VMM)" apply judicial exceptions on a generic computer and therefore do not provide significantly more.
As per claim 7, it is a system claim of claim 1, so it is rejected for similar reasons. Additionally, it recites “an engine”, “circuitry of a job controller”, and “circuitry of an execution pipeline”, which amounts to merely stating the judicial exception and adding the words “apply it”. The limitations merely use a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).
As per claim 15, it is a non-transitory computer-readable medium claim of claim 1, so it is rejected for similar reasons. Additionally, it recites “a non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to perform operations” which is equivalent to merely stating the judicial exception and adding the words “apply it”. The limitation merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).
As per claim 2 (and similarly for claims 8 and 16), it recites a mental process and attributes of the technological environment that neither integrate the judicial exceptions into a practical application nor recite significantly more.
As per claim 3 (and similarly for claims 9 and 17), it recites an insignificant extra-solution activity that is well-understood, routine, or conventional because it is directed to "receiving or transmitting data" (MPEP 2106.05(d)). Therefore, the additional elements neither integrate the judicial exceptions into a practical application nor recite significantly more.
As per claim 4 (and similarly for claims 10 and 18), it recites mental processes.
As per claim 5 (and similarly for claims 11 and 19), it recites mental processes.
As per claim 6 (and similarly for claims 12 and 20), it recites attributes of the technological environment and insignificant extra-solution activities that are well-understood, routine, or conventional because they are directed to "receiving or transmitting data over a network" (MPEP 2106.05(d)). Therefore, the additional elements neither integrate the judicial exceptions into a practical application nor recite significantly more.
As per claim 13, it recites attributes of the technological environment that neither integrate the judicial exceptions into a practical application nor recite significantly more.
As per claim 14, it recites attributes of the technological environment that neither integrate the judicial exceptions into a practical application nor recite significantly more.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 7, 8, and 13-16 are rejected under 35 U.S.C. 103 as being unpatentable over Guo et al. (US 20240020241 A1 hereinafter Guo) in view of Waterman et al. (US 20230367715 A1 hereinafter Waterman).
Guo was cited in a prior office action.
As per claim 1, Guo teaches a method for managing diversified virtual memory, the method comprising ([0108] The L2 cache hardware 1876 is coupled to one or more other levels of cache and eventually to a main memory; [0044] each core may include dedicated Level 1 (L1) cache 412 and Level 2 (L2) cache 411 for caching instructions and data according to a specified cache management policy; [0034] Extended IOMMU with PASID support offers Shared Virtual Memory (SVM) function which allows hardware subsystems to access the memory by direct memory access (DMA) using virtual addresses):
receiving a request message from a memory manager executing on a processor, the request message including a job descriptor that specifies a memory operation to be performed and an indication of one or more addresses ([0036] the job descriptor, or the information contained therein, is stored into the target identified by the destination operand. The job descriptor may be stored into a queue, register, or local cache associated with the hardware subsystem. Next, at block 106, the hardware subsystem processes the stored job descriptor to identify the virtual addresses of required data and responsively generates one or more DMA requests for these data using the virtual addresses; Abstract a job descriptor describing a job to be performed. The job descriptor includes virtual addresses of memory locations in which data required to perform the job are stored; [0038] virtual addresses that are referenced directly in the job descriptor; [0047] FIG. 5 is a diagram illustrating an exemplary job descriptor according to an embodiment. Job descriptor 500 may include control fields 510 and command fields 520. The control fields 510 store information such as the PASID associated with the software application/thread that created the job descriptor, the privilege level associated with the job descriptor, and a prefetch mode to indicate whether address pre-translation should be performed for the addresses referenced in job descriptor. The command field 520 may include pointers (e.g., 522 and 524) to other descriptors such as the request buffer descriptor 530 and response buffer descriptor 550; [0048] The request buffer 530 pointed to by pointer 522 may store command and parameters 532 for specifying the action(s) to be taken by the target hardware subsystem. In addition, the request buffer 530 may store pointers to scattered payloads that need to be processed by the hardware subsystem. For example, request buffer 530 may store pointers 534 and 538 which are the memory addresses of where payloads 536 and 540 are stored, respectively.);
an execution pipeline of a plurality of execution pipelines being designated to process incoming commands for the one or more addresses ([0048] The CPU, as part of the translation pipeline, parses the job descriptor 500 and locates all of the addresses that require translation. For example, the CPU may determine from the job descriptor: [0049] Virtual address of the request buffer descriptor 522 [0050] Virtual address of the response buffer descriptor 524 [0051] Virtual addresses of the payloads 534, 538; [0042] The pre-translation pipeline begins by the CPU 310 evoking the pre/parallel translation interface 342 provided by IOMMU 340 to submit a pre-translation request. The interface provided by IOMMU may be implemented as a register set or a hidden channel (i.e. side channel). Information provided in the pre-translation request may include the BDF of the hardware subsystem, PASID of the software application/thread, and/or one or more virtual addresses to be translated. The BDF and PASID may be used by the page table walk engine 346 to identify the page table from which address translations are obtained; [0039] Returning to the datapath pipeline, at block 206, as the hardware subsystem processes the job descriptor from its job queue, it identifies the data required for performing the job and responsively generates one or more DMA requests using the virtual addresses of the data. At block 208, the IOMMU receives and processes the DMA requests using the address translations that are already in the IOTLB to obtain the corresponding physical address for each virtual address in the DMA request. Next, at block 210, the IOMMU access the memory using the physical addresses and provides the retrieved data to the hardware subsystem to performs the job.);
translating the job descriptor into one or more commands for the job descriptor for transmission to a virtual memory manager for the execution pipeline; and processing the one or more commands by the execution pipeline associated with the virtual memory manager (VMM) ([0039] Returning to the datapath pipeline, at block 206, as the hardware subsystem processes the job descriptor from its job queue, it identifies the data required for performing the job and responsively generates one or more DMA requests using the virtual addresses of the data. At block 208, the IOMMU receives and processes the DMA requests using the address translations that are already in the IOTLB to obtain the corresponding physical address for each virtual address in the DMA request.).
Guo fails to teach in response to an execution pipeline of a plurality of execution pipelines being designated to process incoming commands for the one or more addresses, selecting the execution pipeline, wherein each execution pipeline of the plurality of execution pipelines is designated to process job descriptors for a different address range.
However, Waterman teaches in response to an execution pipeline of a plurality of execution pipelines being designated to process incoming commands for the one or more addresses, selecting the execution pipeline ([0036] FIG. 4 is a flow chart of an example of a technique 400 for selecting a load-store pipeline by checking an inner cache for tags matching an address associated with a first beat of a vector instruction. The technique 400 includes searching 410 the L1 cache for a tag matching the address associated with the first beat of the store instruction. At 415, if a matching tag is found, then the technique 400 includes, responsive to finding a matching tag in the L1 cache, selecting 420 the first store unit (e.g., the first store unit 140); [0016] multiple store units (e.g., load/store pipelines); [0025] The processor core 120 includes a first store unit 140 that is configured to execute memory access instructions (e.g., store and/or load instructions). The first store unit 140 is configured to write data to the memory system via the L1 cache 150.), wherein each execution pipeline of the plurality of execution pipelines is designated to process job descriptors for a different address range ([0026] The integrated circuit 110 includes a first store unit 140 configured to write data to the memory system via the L1 cache 150, and a second store unit 142 configured to bypass the L1 cache 150 and write data to the memory system via the L2 cache 152; [0061] the first store unit may be a load/store pipeline and the second store unit may be a store-only pipeline).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Guo with the teachings of Waterman to improve performance (see Waterman [0022] Implementations, described herein may provide advantages over conventional processors, such as, for example, increasing the memory bandwidth of a processor core while reducing the chance of cache invalidation events and/or improving performance of the processor core.).
As per claim 2, Guo and Waterman teach the method of claim 1. Guo teaches wherein selecting the execution pipeline is based on the execution pipeline being designated to process job descriptors for a range of physical addresses that corresponds to the one or more addresses of the indication in the job descriptor ([0042] In operation, a software application or thread submits a job by calling an enqueue command instruction which specifies a job descriptor 322 stored in the system memory 320. The enqueue engine 312 in the CPU, in response to the execution of the enqueue command instruction, initiates a pre-translation pipeline along with a data path pipeline. The pre-translation pipeline begins by the CPU 310 evoking the pre/parallel translation interface 342 provided by IOMMU 340 to submit a pre-translation request. The interface provided by IOMMU may be implemented as a register set or a hidden channel (i.e. side channel). Information provided in the pre-translation request may include the BDF of the hardware subsystem, PASID of the software application/thread, and/or one or more virtual addresses to be translated. The BDF and PASID may be used by the page table walk engine 346 to identify the page table from which address translations are obtained…the page table walk engine may access the translation table of the hardware subsystem (i.e. second level page table) and/or the host page table (i.e. first level page table) to retrieve the desired physical address translation).
As per claim 7, it is a system claim of claim 1, so it is rejected for similar reasons. Additionally, Guo teaches an engine ([0041] an enqueue engine 312), circuitry of a job controller ([0043] storing the job descriptor into the job queue 332 of the hardware subsystem 330. From the job queue 332, jobs are dispatched to the hardware interface 334 to be processed by the processor 336); circuitry of an execution pipeline ([0043] Concurrently with the translation pipeline, the datapath pipeline begins with the enqueue engine 312 storing the job descriptor into the job queue 332 of the hardware subsystem 330. From the job queue 332, jobs are dispatched to the hardware interface 334 to be processed by the processor 336. During the processing, one or more DMA requests containing host virtual addresses or I/O virtual addresses (IOVA) are submitted to the IOMMU or root complex to access data).
As per claim 8, it is a system claim of claim 2, so it is rejected for similar reasons.
As per claim 13, Guo and Waterman teach the system of claim 7. Guo teaches wherein the memory operation specified by the job descriptor comprises allocation, deletion, migration, or a combination thereof, of memory data ([0062] In a VM environment, control commands are frequently used during VM transitions. For example, the control command PASID reset is typically triggered each time when a guest application shuts down. The purpose of the PASID reset command is to inform the hardware subsystem to go through the pending queue and remove all inflight requests associated with an application-assigned host PASID to release resource; [0058-0059] There are several benefits for using enqueue command instructions and job descriptors to submit jobs to hardware subsystems. Besides simplifying the job submission process by hiding hardware semantics from software applications as mentioned above, another benefit of using the enqueue command instruction is the automatic translation of process address space identifiers (PASIDs). PASIDs are used to share a single hardware subsystem across multiple software threads or processes while providing each thread or process with a corresponding address space. PASID can be extended to virtualized environments through the concept of guest PASIDs (gPASID) and host PASIDs (hPASID).).
As per claim 14, Guo and Waterman teach the system of claim 7. Guo teaches wherein the memory operation specified by the job descriptor comprises invalidation, clearing, or a combination thereof, of cache data ([0062] In a VM environment, control commands are frequently used during VM transitions. For example, the control command PASID reset is typically triggered each time when a guest application shuts down. The purpose of the PASID reset command is to inform the hardware subsystem to go through the pending queue and remove all inflight requests associated with an application-assigned host PASID to release resource; [0058-0059] There are several benefits for using enqueue command instructions and job descriptors to submit jobs to hardware subsystems. Besides simplifying the job submission process by hiding hardware semantics from software applications as mentioned above, another benefit of using the enqueue command instruction is the automatic translation of process address space identifiers (PASIDs). PASIDs are used to share a single hardware subsystem across multiple software threads or processes while providing each thread or process with a corresponding address space. PASID can be extended to virtualized environments through the concept of guest PASIDs (gPASID) and host PASIDs (hPASID)).
As per claim 15, it is a non-transitory computer-readable medium claim of claim 1, so it is rejected for similar reasons. Additionally, Guo teaches a non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to perform operations ([0137] One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein).
As per claim 16, it is a non-transitory computer-readable medium claim of claim 2, so it is rejected for similar reasons.
Claims 3, 9, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Guo and Waterman, as applied to claims 1, 7, and 15 above, in view of Bourd et al. (US 20120017069 A1 hereinafter Bourd).
Bourd was cited in a prior office action.
As per claim 3, Guo and Waterman teach the method of claim 1.
Guo and Waterman fail to teach wherein the execution pipeline receives job descriptors in an order that is according to priority values associated with the job descriptors.
However, Bourd teaches wherein the execution pipeline receives job descriptors in an order that is according to priority values associated with the job descriptors (claim 12 distributing the second command and the fourth command to the second one of the plurality of processing pipelines in an order based on the priority values of the second command and the fourth command.).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Guo and Waterman with the teachings of Bourd to prioritize commands with precedence (see Bourd [0070] A higher priority command has execution precedence.).
As per claims 9 and 17, they are system and non-transitory computer-readable medium claims of claim 3, so they are rejected for the same reasons.
Claims 4, 10, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Guo and Waterman, as applied to claims 1, 7, and 15 above, in view of Moshe et al. (US 20220164299 A1 hereinafter Moshe).
Moshe was cited in a prior office action.
As per claim 4, Guo and Waterman teach the method of claim 1.
Guo and Waterman fail to teach generating a command sequence according to an interface protocol of the VMM associated with the execution pipeline.
However, Moshe teaches generating a command sequence according to an interface protocol of the VMM associated with the execution pipeline ([0068] In some embodiments, host access service 548 may include a peer impersonator 558 configured to map peer access parameters 556 into messages and/or access or session requests that comply with storage interface protocol 532; [0056] Storage bus interface 516 may include a physical interface for connecting to a host using an interface protocol that supports storage device access. For example, storage bus interface 516 may include a PCIe, SATA, SAS, or similar storage interface connector supporting NVMe access to solid state media comprising non-volatile memory devices 520; [0037] From the perspective of storage devices 120, storage interface bus 108 may be referred to as a host interface bus and provides a host data path between storage devices 120 and host 102; [0068] peer impersonator 558 may use a PCIe configuration for host DRAM access based on direct memory access virtual addresses. Peer access parameters 556 may include the virtual addresses allocated to the peer storage device and knowledge of the virtual addresses may be sufficient for accessing the host memory bus from the same storage bus).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Guo and Waterman with the teachings of Moshe to provide requests that are compatible with a storage interface protocol (see Moshe [0068] In some embodiments, host access service 548 may include a peer impersonator 558 configured to map peer access parameters 556 into messages and/or access or session requests that comply with storage interface protocol 532.).
As per claims 10 and 18, they are system and non-transitory computer-readable medium claims of claim 4, so they are rejected for the same reasons.
Claims 5, 11, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Guo, Waterman, and Moshe, as applied to claims 4, 10, and 18 above, in view of Park et al. (US 20040252689 A1 hereinafter Park).
Park was cited in a prior office action.
As per claim 5, Guo, Waterman, and Moshe teach the method of claim 4. Moshe teaches wherein the processing further comprises: packing the one or more commands into packets ([0073] messaging service 566 may send packetized data payloads over the control bus using block write and block read commands between buffers in the peer storage devices.).
Guo, Waterman, and Moshe fail to teach wherein commands that can be performed in parallel are combined into one packet.
However, Park teaches wherein commands that can be performed in parallel are combined into one packet ([0032] As presently preferred, packet controller 120 transforms an m-bit serial data packet, typically including address and command signals, into an m-bit parallel data packet.).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Guo, Waterman, and Moshe with the teachings of Park to reduce resource usage (see Park [0074] As described above, the transmission of data packets including the command and the address signals is advantageous in that it allows reduction of the number of pins in a memory system constructed using MCP or SIP techniques.).
As per claims 11 and 19, they are system and non-transitory computer-readable medium claims of claim 5, so they are rejected for the same reasons.
Claims 6, 12, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Guo, Waterman, Moshe, and Park, as applied to claims 5, 11, and 19 above, in view of Hu et al. (US 20180225254 A1 hereinafter Hu).
Hu was cited in a prior office action.
As per claim 6, Guo, Waterman, Moshe, and Park teach the method of claim 5. Moshe teaches wherein the processing further comprises: receiving feedback, indicating completion of the performance of commands in the packets; and sending a completion message, indicating completion of the memory operation specified by the job descriptor ([0049] Response message 330 may be used by storage devices 120 to send messages back to a peer storage device that has requested data, such as responding to a recovery request by sending the requested internal operation data or responding to a host access request to provide host access parameters; [0073] messaging service 566 may send packetized data payloads over the control bus using block write and block read commands between buffers in the peer storage devices).
Guo, Waterman, Moshe, and Park fail to teach receiving feedback from the VMM associated with the execution pipeline, indicating completion of the performance of commands in the packets.
However, Hu teaches receiving feedback from the VMM associated with the execution pipeline, indicating completion of the performance of commands in the packets ([0041] In block 330, the pooled memory controller 140 notifies the source compute node 110 and the destination compute node 110 that the packet data copy is complete; [0048] In block 426, the compute node 110 waits for a notification from the pooled memory controller 140 that the network packet send is complete).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Guo, Waterman, Moshe, and Park with the teachings of Hu to reduce latency (see Hu [0013] improve networking throughput and reduce latency).
As per claims 12 and 20, they are system and non-transitory computer-readable medium claims of claim 6, so they are rejected for the same reasons.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HSING CHUN LIN whose telephone number is (571)272-8522. The examiner can normally be reached Mon - Fri 9AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.L./Examiner, Art Unit 2195
/Aimee Li/Supervisory Patent Examiner, Art Unit 2195