DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 25, 27-38, and 40-50 are pending. Claims 25, 30-32, 38, and 43-45 have been amended as per Applicants' request. Claims 26 and 39 have been canceled as per Applicants' request.
Papers Submitted
It is hereby acknowledged that the following papers have been received and placed of record in the file:
Amended Claims as filed on March 16, 2026
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on March 16, 2026 has been entered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 25, 27-31, 38, and 40-44 are rejected under 35 U.S.C. 103 as being unpatentable over Pinho et al. (US 2021/0149805) (hereinafter Pinho) (published May 20, 2021) in view of De Perthuis (US 2011/0078760) (hereinafter De) (published March 31, 2011).
Regarding Claims 25 and 38, taking claim 25 as exemplary, Pinho discloses a system, comprising: a peripheral bus;
“Storage array 112 may be directly connected to the other components of the storage system 100 or may be connected to the other components of the storage system 100, for example, by an InfiniBand (IB) bus or fabric” (Pinho [0020])
a processing device comprising (i) a first bus interface to connect to the peripheral bus, and
“As shown in FIG. 1, in some embodiments the storage system 100 has physical resources including a number of CPU processor cores 114, operating system 116, cache 118, and other physical resources” (Pinho [0018])
“Storage array 112 may be directly connected to the other components of the storage system 100 or may be connected to the other components of the storage system 100, for example, by an InfiniBand (IB) bus or fabric” (Pinho [0020])
(ii) a processor to assign a memory region in a memory that is coupled to the peripheral bus; and
“Storage resources of the storage array 112, in some embodiments, are presented as logical units (LUNs) to the data clients 110 (See FIG. 3). Data associated with data client 110 is stored in one or more user filesystems, and each user file system is stored in a separate logical storage volume, referred to herein as a Logical Unit (LUN)” (Pinho [0021] LUNs are memory regions in the memory that have been assigned)
a peripheral device comprising: a second bus interface to connect to the peripheral bus; and
“Storage array 112 may be directly connected to the other components of the storage system 100 or may be connected to the other components of the storage system 100, for example, by an InfiniBand (IB) bus or fabric” (Pinho [0020])
circuitry, to obtain information indicative of an access pattern with which the memory region is accessed, the access pattern comprising at least one or more of i) sequential access pattern, and ii) random access pattern;
“with different data access patterns, may access the storage system resources concurrently, and each access pattern traverses the address space of the system distinctly. For instance, some workloads might be sequential, while other workloads might be random; some workloads might traverse the entire address space, while other workloads might be concentrated in a small range of addresses” (Pinho [0030])
“As shown in FIG. 7, the method starts with an initialization step (block 700) that may include obtaining data related to I/O access patterns (traces) and preprocessing steps on the obtained data to compose necessary data structures (See e.g. FIG. 6, 600, 610, 620, 630, 640)” (Pinho [0065])
responsively to the obtained information, set a memory-access policy, the memory-access policy specifying at least that data is to be read from the memory region without prefetching when the obtained information indicates that the memory region is accessed in a random access pattern, and that data is to be read from the memory region with prefetching when the obtained information indicates that the memory region is accessed in a sequential access pattern; and
“According to some embodiments, a mechanism is described that automatically and dynamically enables and disables a prefetch cache policy on a per LUN basis, depending on the predicted pollution anticipated by application of the cache policy to the cache, given a sequentiality profile of the current workload” (Pinho [0032])
“the system is able to achieve high hit ratios by turning on prefetching when the workload exhibits primarily sequential access patterns, and the system is able to reduce pollution by turning off prefetching when the workload exhibits primarily random access patterns. By implementing this policy determination and adjustment process periodically, the cache management system is able to prevent pollution from building up in the cache, and is able to implement cache management on a per-LUN basis” (Pinho [0034])
“The sequentiality profile is used as input to a trained predictive pollution model 780 which is used to predict the pollution level of the cache over the following period of time (FIG. 7 block 710). The predicted pollution level is then used to implement a prefetch switching decision (FIG. 7 block 715) which is used to govern operation of the cache for a subsequent period of time (FIG. 7 block 720)” (Pinho [0066] also see fig. 7)
However, Pinho does not explicitly state accessing data in the memory region, using Direct Memory Access (DMA) over the peripheral bus, in accordance with the memory-access policy.
De discloses accessing data in the memory region, using Direct Memory Access (DMA) over the peripheral bus, in accordance with the memory-access policy.
“The policy enforcer 32 is configured to operate so that each time an IP unit 24 performs a DMA access, the access will have to go through the policy enforcer 32. The enforcer 32 will compute which memory zone is targeted by the access and apply the policy decided by the policy checker 30, for example, by checking a table” (De [0031])
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to combine the DMA access in accordance with the memory-access policy in De with the system in Pinho. The motivation for doing so would be to improve efficiency by offloading the memory transaction from the CPU.
Regarding Claims 27 and 40, De further discloses wherein the processor of the processing device is to provide to the peripheral device context information describing the memory region, and wherein the circuitry of the peripheral device is to access the data in the memory region in accordance with the context information.
“The policy checker 30 will compare this request against the policy of the system. Typically, a request will include the following information, region selected and access type (whether read, write, execute, complex operation). The request will be interpreted and the policy enforcement unit updated accordingly” (De [0030])
“The policy enforcer 32 is configured to operate so that each time an IP unit 24 performs a DMA access, the access will have to go through the policy enforcer 32. The enforcer 32 will compute which memory zone is targeted by the access and apply the policy decided by the policy checker 30, for example, by checking a table” (De [0031] see fig. 4 the chart provides the context information for the multiple memory zones/regions)
Regarding Claims 28 and 41, Pinho further discloses wherein the memory-access policy further specifies a caching policy for caching, in the peripheral device, portions of the data or portions of context information describing the memory region.
“As shown in FIG. 2, in some embodiments portions of the cache are allocated to different LUNs, such that each LUN has a distinct allocation of the cache. According to some embodiments, by adjusting cache policies applied to the cache partitions, the storage system can seek to better meet its SLA obligations for IOs on the LUNs” (Pinho [0025] cache policies define what gets cached)
Regarding Claims 29 and 42, Pinho further discloses wherein the memory-access policy further specifies a prefetching policy for prefetching, in the peripheral device, portions of context information describing the memory region.
“According to some embodiments, a mechanism is described that automatically and dynamically enables and disables a prefetch cache policy on a per LUN basis, depending on the predicted pollution anticipated by application of the cache policy to the cache, given a sequentiality profile of the current workload” (Pinho [0032] the prefetch policy defines what gets prefetched)
Regarding Claims 30 and 43, Pinho further discloses wherein the information indicative of the access pattern further comprises one or more of: a pattern of addresses that characterizes access to the memory region; an access frequency that characterizes access to the memory region; an access direction that characterizes access to the memory region; a location of the memory region; and whether the memory region is pinned or unpinned.
“This is not ideal because several applications, with different data access patterns, may access the storage system resources concurrently, and each access pattern traverses the address space of the system distinctly. For instance, some workloads might be sequential, while other workloads might be random; some workloads might traverse the entire address space, while other workloads might be concentrated in a small range of addresses” (Pinho [0030])
Regarding Claims 31 and 44, Pinho further discloses wherein the circuitry of the peripheral device is to obtain the information indicative of the access pattern by tracking memory-access transactions performed in the memory region.
“In some embodiments, the cache management system 128 does not know the type of application that generated the I/O, but rather only has access to storage telemetry data in the form of I/O traces. An IO trace, as that term is used herein, is a collection of pieces of information associated with an IO operation that indicates what type of I/O operation the application issued (e.g., ‘read’ or ‘write’), the size of the operation, a timestamp associated with the operation, and in indication of an address in the LUN's addressable space. An example of such storage telemetry data is shown below in Table I” (Pinho [0043])
“As shown in FIG. 7, the method starts with an initialization step (block 700) that may include obtaining data related to I/O access patterns (traces) and preprocessing steps on the obtained data to compose necessary data structures (See e.g. FIG. 6, 600, 610, 620, 630, 640)” (Pinho [0065])
Claims 32-35, 37, 45-48, and 50 are rejected under 35 U.S.C. 103 as being unpatentable over Pinho (published May 20, 2021) and De (published March 31, 2011) as applied to claims 25 and 38 above, and further in view of Shih et al. (US 2017/0344298) (hereinafter Shih) (published November 30, 2017).
Regarding Claims 32 and 45, the combination of Pinho and De disclosed the system of claim 25 and method of claim 38, but does not explicitly state wherein the processor of the processing device is to generate a hint that is indicative of the access pattern with which the memory region is accessed and to provide the hint to the peripheral device as the information indicative of the access pattern.
Shih and Pinho disclose wherein the processor of the processing device is to generate a hint that is indicative of the access pattern with which the memory region is accessed and to provide the hint to the peripheral device as the information indicative of the access pattern.
“At block 304, the hypervisor 112 may access a service policy (e.g., 142, FIG. 1) associated with the VM that issued the I/O command. In some embodiments, for example, the service policy 142 may be accessed and processed by a vSCSI instance 122 associated with the VM” (Shih [0037] the policy is set responsive to the information from the I/O command)
“in some embodiments, the hypervisor may use information received with the I/O command to identify a service policy 142 that is associated with the VM. In other embodiments, the hypervisor may use information received with the I/O command that identifies the application that issued the I/O command to identify the service policy 142” (Shih [0038] the information received with the I/O command from the processing device is the hint)
“the system is able to achieve high hit ratios by turning on prefetching when the workload exhibits primarily sequential access patterns, and the system is able to reduce pollution by turning off prefetching when the workload exhibits primarily random access patterns. By implementing this policy determination and adjustment process periodically, the cache management system is able to prevent pollution from building up in the cache, and is able to implement cache management on a per-LUN basis” (Pinho [0034])
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to combine the sending of information and setting of policy based on that information in Shih with the memory system in the combination of Pinho and De. The motivation for doing so would be easier identification of the correct policy to apply.
Regarding Claims 33 and 46, Shih further discloses wherein the processor of the processing device is to select the hint from a defined set of hints, and wherein the circuitry of the peripheral device is to select the memory-access policy from a defined set of memory-access policies.
“In some embodiments, for example, the service policy 142 may be accessed and processed by a vSCSI instance 122 associated with the VM. In some embodiments, a VM may have one or more associated service policies 142. A service policy 142 may specify certain performance levels or requirements accorded to the VM. For example, the service policy 142 may guarantee a minimum I/O latency for the VM, provide for data encryption, provide data compression, and so on” (Shih [0037] see fig. 1, the vSCSI selects the service policy from the defined set of policies 142)
“in some embodiments, the hypervisor may use information received with the I/O command to identify a service policy 142 that is associated with the VM. In other embodiments, the hypervisor may use information received with the I/O command that identifies the application that issued the I/O command to identify the service policy 142” (Shih [0038] information sent with the I/O would have to be from a defined set to be used in the determination of a policy)
Regarding Claims 34 and 47, Pinho further discloses wherein one or both of (i) the processor of the processing device and (ii) the circuitry of the peripheral device, are to adaptively modify one or more of: one or more of the hints; one or more of the memory-access policies; and a mapping between the hints and the memory-access policies.
“According to some embodiments, a mechanism is described that automatically and dynamically enables and disables a prefetch cache policy on a per LUN basis, depending on the predicted pollution anticipated by application of the cache policy to the cache, given a sequentiality profile of the current workload” (Pinho [0032] the prefetch cache policy is modified based on the predicted pollution)
Regarding Claims 35 and 48, Shih further discloses wherein the memory-access policy, and a mapping between the hint and the memory-access policy, are internal to the peripheral device and are not accessible to the processing device.
“in some embodiments, the hypervisor may use information received with the I/O command to identify a service policy 142 that is associated with the VM. In other embodiments, the hypervisor may use information received with the I/O command that identifies the application that issued the I/O command to identify the service policy 142” (Shih [0038] see fig. 1, the policies 142 are internal to the hypervisor and the mapping from the hint/information received with the I/O command to the policy is performed by the hypervisor; this information would not be accessible to the processing device sending the I/O command)
Regarding Claims 37 and 50, Shih further discloses wherein the hint is an ad-hoc hint that is valid for a defined time period or for one or more memory-access transactions to be performed in the memory region.
“In other embodiments, a service policy 142 may be associated with particular applications that can execute on the VM. Accordingly, in some embodiments, the hypervisor may use information received with the I/O command to identify a service policy 142 that is associated with the VM. In other embodiments, the hypervisor may use information received with the I/O command that identifies the application that issued the I/O command to identify the service policy 142” (Shih [0038] the hint/information received with the I/O command is for the memory access to a memory region)
Claims 36 and 49 are rejected under 35 U.S.C. 103 as being unpatentable over Pinho (published May 20, 2021), De (published March 31, 2011), and Shih (published November 30, 2017) as applied to claims 32 and 45 above, and further in view of Kass (US 2015/0186271) (hereinafter Kass) (published July 2, 2015).
Regarding Claims 36 and 49, the combination of Pinho, De, and Shih disclosed the system of claim 32 and method of claim 45, but does not explicitly state wherein one or both of (i) the processor of the processing device and (ii) the circuitry of the peripheral device, are to provide an Application Programming Interface (API) for specifying one or more of: the hint; the memory-access policy; and a mapping between the hint and the memory-access policy.
Kass discloses wherein one or both of (i) the processor of the processing device and (ii) the circuitry of the peripheral device, are to provide an Application Programming Interface (API) for specifying one or more of: the hint; the memory-access policy; and a mapping between the hint and the memory-access policy.
“In a third aspect, one or more application programming interfaces provide access to memory allocation and parameters thereof relating to zero or more cache eviction policies and/or zero or more virtual address modification policies associated with memory received via a memory allocation request” (Kass [0027])
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to combine the use of APIs in Kass with the memory system in the combination of Pinho, De, and Shih. The motivation for doing so would be to improve compatibility, as disclosed in Kass: “The provided application programming interfaces are usable by various software elements, such as any one or more of basic input/output system, driver, operating system, hypervisor, and application software elements. Memory allocated via the application programming interfaces is optionally managed via one or more heaps, such as one heap per unique combination of values for each of any one or more parameters including eviction policy, virtual address modification policy, structure-size, and element-size parameters” (Kass [0027])
Response to Arguments
Applicant’s arguments, see pages 9-11 of remarks, filed March 16, 2026, with respect to the 35 USC § 101 rejection of claims 25-50 have been fully considered and are persuasive. The 35 USC § 101 rejection of claims 25-50 has been withdrawn.
Applicant’s arguments, see pages 11-13 of remarks, filed March 16, 2026, with respect to the rejection(s) of claim(s) 25-50 under 35 USC § 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Pinho et al. (US 2021/0149805) and De Perthuis (US 2011/0078760) for claims 25, 27-31, 38, and 40-44, further in view of Shih et al. (US 2017/0344298) for claims 32-35, 37, 45-48, and 50, and further in view of Kass (US 2015/0186271) for claims 36 and 49.
Pinho discloses the newly amended limitations of the claims with regards to detecting access patterns and adapting the access policies in response to the type of access pattern. De is now used as a secondary reference to Pinho to disclose the use of DMA with the enforcement of memory access policies.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SIDNEY LI whose telephone number is (571) 270-5967. The examiner can normally be reached Monday to Friday 10:00 AM to 6:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Arpan P Savla can be reached at (571) 272-1077. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.L./Examiner, Art Unit 2137
/Arpan P. Savla/Supervisory Patent Examiner, Art Unit 2137