Detailed Action
Response to Amendment
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are presented for examination.
Claims 1, 3, 12, 14, and 20 are amended.
Claims 4-9, 11, 15-17, and 19 are originally presented.
Claims 2, 10, 13, and 18 are cancelled.
Claims 1, 3-9, 11-12, 14-17, 19, and 20 are rejected.
This Action is Non-Final.
Response to Arguments
Applicant's arguments filed 12/16/2025 have been fully considered but they are not persuasive.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-9, 11-12, 14-17, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Madhusudana et al. (US Patent Application Pub. No. 20150089132 A1) in view of Pawlowski et al. (US Patent Application Pub. No. 20220222075 A1), and further in view of Auernhammer (US Patent Application Pub. No. 20110010480 A1) and RISINGER et al. (US Patent Application Pub. No. 20170286284 A1).
As per claim 1, Madhusudana teaches a method comprising:
receiving an input/output (I/O) request [Paragraph 0012, The storage system 100 is configured with a controller 102 that is operable to receive and process I/O requests from a host 101 to the various drives 105 and 106.];
categorizing the I/O request according to a request size classification [Paragraphs 0012; 0016,…, the controller 102 categorizes the I/O requests into types based on sizes of the I/O requests, in the process element 202.];
storing the I/O request to a statically pinned memory [Paragraph 0003, The storage systems process input/output (I/O) requests with one or more storage controllers to direct data to and from the storage volumes. The size and the configuration of the storage volumes are generally static, regardless of the type or size of the I/O request.], if the I/O request is categorized into a first classification [claim 1; Paragraph 0003,…, categorize the input/output requests into types based on sizes of the input/output requests, wherein a first type of the input/output requests has a size that is smaller than a second type of the input/output requests;…];
storing the I/O request in a pinned memory pool [Paragraph 0022, … the controller 102 allocates a portion of space in the HDD group 256-1 based on the larger I/O requests.], if the I/O request is categorized into a second classification [Abstract, claim 1, … categorize the input/output requests into types based on sizes of the input/output requests, wherein a first type of the input/output requests has a size that is smaller than a second type of the input/output requests; and reconfigure the logical volumes from the hard disk drives and the solid-state drives based on the types of the input/output requests to the logical volumes,…]; and
storing the I/O request in a dynamically allocated memory, if the I/O request is categorized into a third classification [Paragraph 0012, The controller 102 is also operable to analyze the incoming I/O requests from the host 101 and to categorize them into types which may be used to dynamically allocate/ configure the logical volume 110 according to those types. For example, the controller 102 may configure the logical volume 110 from the HDDs 106-1-106-2 to accommodate a variety of I/O requests, as illustrated in FIG. 1.].
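For illustration only, and not as a characterization of the applicant's or any cited reference's actual implementation, the three-way, size-based routing recited in claim 1 (using the thresholds later recited in claims 4-6) could be sketched as follows; all names, the routing table, and the handling of the exact 4 KB and 4 MB boundaries are hypothetical:

```python
# Illustrative sketch only; classification names and boundary handling are
# hypothetical, not taken from the application or the cited references.
SMALL = "small"    # first classification  -> statically pinned memory
MEDIUM = "medium"  # second classification -> pinned memory pool
LARGE = "large"    # third classification  -> dynamically allocated memory

KB = 1024
MB = 1024 * KB

def classify(request_size: int) -> str:
    """Categorize an I/O request by size (thresholds per claims 4-6:
    < 4 KB, between 4 KB and 4 MB, and > 4 MB)."""
    if request_size < 4 * KB:
        return SMALL
    if request_size < 4 * MB:
        return MEDIUM
    return LARGE

def route(request_size: int) -> str:
    """Return the memory region the categorized request would be stored in."""
    return {
        SMALL: "statically pinned memory",
        MEDIUM: "pinned memory pool",
        LARGE: "dynamically allocated memory",
    }[classify(request_size)]
```

The sketch only mirrors the claim language: the classification step and the storage step are kept separate, matching the "categorizing" and "storing" limitations.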
Madhusudana discloses categorizing the input/output requests into a first type and a second type of input/output requests based on the sizes of the input/output requests, but does not explicitly disclose wherein the I/O request comprises a direct memory access (DMA) operation;
wherein storing the I/O request in the pinned memory pool comprises using a scatter/gather list with non-contiguous data buffers;
or wherein the I/O request is categorized into a third classification.
Pawlowski discloses wherein the I/O request comprises a direct memory access (DMA) operation [Paragraphs 0098-0099;0102-0103, DMA operations are a common tool for moving large data structures in the background. Existing DMA implementations typically place the DMA controller in the input/output (I/O) interface and trigger copies via memory-mapped input/output (MMIO) gates.];
wherein storing the I/O request in the pinned memory pool comprises using a scatter/ gather list with non-contiguous data buffers [Paragraphs 0121; 0133, A DMA scatter instruction scatters data across a destination memory region, where the destination memory region is a non-contiguous memory region. A DMA gather instruction gathers data from a source memory region and stores the data in a destination memory region, where the source memory region is a non-contiguous memory region and the destination memory region is a contiguous memory region.].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include Pawlowski's processor, which has decode circuitry for decoding an instruction to perform a direct memory access (DMA) operation, into Madhusudana's method for creating storage volumes from a storage device. Doing so enables utilizing a DMA controller placed in an input/output (I/O) interface that triggers copies through memory-mapped I/O (MMIO) gates, such that DMA implementations can be achieved with relatively limited functionality, thus supporting straightforward transfers of contiguous data from one memory location to another in an efficient manner (Pawlowski, [0095]), to obtain the invention as specified in claim 1.
Madhusudana and Pawlowski do not explicitly disclose the I/O request is categorized into a third classification.
Auernhammer discloses the I/O request is categorized into a third classification [Fig. 8; Paragraph 0051, …, data requests may be categorized by the number of requests required to complete processing the transfer. FIG. 8 illustrates five I/O transfers T0-T4. T0 is a low priority request comprising nine packets as payload data, of which insertion is allowed for the first seven packets. T1-T4 are higher priority requests such as a device request or a pushed request that can be transferred in one packet. In this example, T0 arrives first and starts to be processed. T1-T4, which arrive later in the I/O device and require only one packet, are processed by the device in between the packets of T0 to maintain resource usage fairness among requests.].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include Auernhammer's system for supporting push-pull direct memory access operations by utilizing an input/output controller-processor interconnect coupling into Pawlowski's processor having decode circuitry for decoding an instruction to perform a direct memory access (DMA) operation and Madhusudana's method for creating storage volumes from a storage device. Doing so enables utilizing a DMA controller placed in an input/output (I/O) interface that triggers copies through memory-mapped I/O (MMIO) gates, such that DMA implementations can be achieved with relatively limited functionality, thus supporting straightforward transfers of contiguous data from one memory location to another in an efficient manner (Pawlowski, [0095]); it also improves slot usage by allowing flexible operation, reduces the need to increase the slot number to bridge increasing worst-case data latencies in the processor interconnect, and avoids slot blocking by long-latency memory fetches and multi-cache-line requests (Auernhammer, [0025]), to obtain the invention as specified in claim 1.
Madhusudana, Pawlowski and Auernhammer do not explicitly disclose wherein the statically pinned memory, the pinned memory pool, and the dynamically allocated memory comprise respective regions of volatile system memory accessible to a DMA engine and used to store data for performing the DMA operation.
RISINGER discloses wherein the statically pinned memory, the pinned memory pool [The host-side memory pool may comprise pinned memory,…], and the dynamically allocated memory comprise respective regions of volatile system memory accessible to a DMA engine and used to store data for performing the DMA operation [Paragraphs 0022;0044;0071,Using memory pools to provide dynamic memory allocation by the data driver improves performance of the behavioral recognition system in light of memory constraints of a GPU device, which typically has significantly less memory than a CPU (e.g., a CPU may have 128 GB memory, whereas a GPU may have 6 GB of memory).].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include RISINGER's system for allocating dynamic memory in a behavioral recognition system into Auernhammer's system for supporting push-pull direct memory access operations by utilizing an input/output controller-processor interconnect coupling, Pawlowski's processor having decode circuitry for decoding an instruction to perform a direct memory access (DMA) operation, and Madhusudana's method for creating storage volumes from a storage device. Doing so enables utilizing a DMA controller placed in an input/output (I/O) interface that triggers copies through memory-mapped I/O (MMIO) gates, such that DMA implementations can be achieved with relatively limited functionality, thus supporting straightforward transfers of contiguous data from one memory location to another in an efficient manner (Pawlowski, [0095]); improves slot usage by allowing flexible operation, reduces the need to increase the slot number to bridge increasing worst-case data latencies in the processor interconnect, and avoids slot blocking by long-latency memory fetches and multi-cache-line requests (Auernhammer, [0025]); ensures that the syntax allows a machine learning engine to learn, identify, and recognize patterns of behavior without the aid or guidance of predefined activities; avoids, by allocating memory from the memory pool, allocating available memory in the GPU, thus avoiding a synchronizing event and allowing other processes in the GPU to continue executing in the event that the request is specified to the GPU; and allows, via cloud computing, a user to access virtual computing resources in the cloud without regard for the underlying physical systems used to provide the computing resources (RISINGER, [0071]), to obtain the invention as specified in claim 1.
As per claim 3, Madhusudana, Pawlowski, Auernhammer, and RISINGER teach all the limitations of claim 1 above, wherein Madhusudana and Auernhammer teach a method wherein the first classification includes a first request size, the second classification includes a second request size [Madhusudana, claim 1; Paragraph 0003, …, categorize the input/output requests into types based on sizes of the input/output requests, wherein a first type of the input/output requests has a size that is smaller than a second type of the input/output requests;…], and the third classification comprises a third request size [Auernhammer, Fig. 8; Paragraph 0051, …, data requests may be categorized by the number of requests required to complete processing the transfer. FIG. 8 illustrates five I/O transfers T0-T4. T0 is a low priority request comprising nine packets as payload data, of which insertion is allowed for the first seven packets. T1-T4 are higher priority requests such as a device request or a pushed request that can be transferred in one packet. In this example, T0 arrives first and starts to be processed. T1-T4, which arrive later in the I/O device and require only one packet, are processed by the device in between the packets of T0 to maintain resource usage fairness among requests.].
As per claim 4, Madhusudana, Pawlowski, Auernhammer, and RISINGER teach all the limitations of claim 3 above, wherein Madhusudana teaches a method wherein the first request size comprises less than four kilobytes [Madhusudana, claim 1; Paragraphs 0003-0004, …, categorize the input/output requests into types based on sizes of the input/output requests, wherein a first type of the input/output requests has a size that is smaller than a second type of the input/output requests;…].
As per claim 5, Madhusudana, Pawlowski, Auernhammer, and RISINGER teach all the limitations of claim 4 above, wherein Madhusudana teaches a method wherein the second request size comprises greater than four kilobytes and less than four megabytes [Madhusudana, claim 1; Paragraphs 0003-0004, …, categorize the input/output requests into types based on sizes of the input/output requests, wherein a first type of the input/output requests has a size that is smaller than a second type of the input/output requests;…].
As per claim 6, Madhusudana, Pawlowski, Auernhammer, and RISINGER teach all the limitations of claim 5 above, wherein Madhusudana teaches a method wherein the third request size comprises greater than four megabytes [Madhusudana, claim 1; Paragraphs 0003-0004, …, a storage system includes a plurality of HDDs and a plurality of SSDs and a storage controller operable to manage the HDDs and SSDs as a plurality of logical volumes, and categorize input/output requests to the logical volumes into types based on sizes of the input/output requests (e.g., smaller and larger).].
As per claim 7, Madhusudana, Pawlowski, Auernhammer, and RISINGER teach all the limitations of claim 1 above, wherein Madhusudana teaches a method further comprising designating at least one block list in the statically pinned memory for I/O read requests [Madhusudana, Paragraphs 0003-0004; 0021, The controller 102 may then write the data of the I/O request to one or more of the HDDs of the HDD group 256-3 that make up the logical volume 110-2. Over time, the controller 102 compiles statistical information of the I/O requests to the logical volume 110-2 such that the controller 102 can optimize I/O requests to the logical volume 110-2 by dynamically allocating space on other drive groups 255 and 256.].
As per claim 8, Madhusudana, Pawlowski, Auernhammer, and RISINGER teach all the limitations of claim 7 above, wherein Madhusudana teaches a method further comprising designating at least two block lists in the statically pinned memory for I/O write requests [Madhusudana, Paragraphs 0003-0004; 0021, The controller 102 may then write the data of the I/O request to one or more of the HDDs of the HDD group 256-3 that make up the logical volume 110-2. Over time, the controller 102 compiles statistical information of the I/O requests to the logical volume 110-2 such that the controller 102 can optimize I/O requests to the logical volume 110-2 by dynamically allocating space on other drive groups 255 and 256.].
As per claim 9, Madhusudana, Pawlowski, Auernhammer, and RISINGER teach all the limitations of claim 8 above, wherein Madhusudana teaches a method further comprising storing the I/O write requests in a first of the at least two block lists in the statically pinned memory until a first capacity is reached, and storing the I/O write requests in a second of the at least two block lists in the statically pinned memory after the first capacity is reached [Madhusudana, Paragraphs 0003-0004; 0021, The controller 102 may then write the data of the I/O request to one or more of the HDDs of the HDD group 256-3 that make up the logical volume 110-2. Over time, the controller 102 compiles statistical information of the I/O requests to the logical volume 110-2 such that the controller 102 can optimize I/O requests to the logical volume 110-2 by dynamically allocating space on other drive groups 255 and 256.].
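For illustration only, and not as a characterization of the applicant's or any cited reference's actual implementation, the fill-then-switch behavior recited in claims 8-9 (two write block lists, the second used only after the first reaches capacity) could be sketched as follows; the class name, capacity value, and return labels are hypothetical:

```python
# Illustrative sketch only; names and the fixed capacity are hypothetical.
class WriteBlockLists:
    """Two write block lists in statically pinned memory (per claims 8-9):
    writes fill the first list until its capacity is reached; subsequent
    writes go to the second list."""

    def __init__(self, first_capacity: int):
        self.first_capacity = first_capacity
        self.first: list[bytes] = []
        self.second: list[bytes] = []

    def store_write(self, data: bytes) -> str:
        """Store a write request and report which list received it."""
        if len(self.first) < self.first_capacity:
            self.first.append(data)
            return "first"
        self.second.append(data)
        return "second"
```

The sketch captures only the claimed ordering constraint: no write reaches the second list until the first list's capacity is exhausted.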
As per claim 11, Madhusudana, Pawlowski, Auernhammer, and RISINGER teach all the limitations of claim 1 above, wherein Madhusudana teaches a method further comprising releasing pinned memory blocks in the dynamically allocated memory upon completion of the I/O request [Madhusudana, Paragraphs 0003-0004; 0021; 0024, Subsequently, in the context of an I/O, the controller 102 analyzes the I/O pattern and allocates blocks dynamically from the drive pool, either from HDDs/SSDs. In some instances, the controller 102 analyzes the incoming I/O requests to the logical volumes using statistical analysis and/or mathematical optimization techniques. In any case, the inventive concepts herein provide optimal I/O performance and deterministic latency for different I/O request types and provide dynamic allocation of storage space depending on the I/O requests.].
As per claims 12, 14-17, and 19, these claims are rejected under the same rationale and reasoning as claims 1, 3-9, and 11 above, wherein claims 12, 14-17, and 19 are the system claims corresponding to the method of claims 1, 3-9, and 11.
As per claim 20, claim 20 is rejected under the same rationale and reasoning as claim 1 above, wherein claim 20 is the device claim corresponding to the method of claim 1.
Conclusion
RELEVANT ART CITED BY THE EXAMINER
The following prior art made of record and not relied upon is cited to establish the level of skill in the applicant’s art and those arts considered reasonably pertinent to applicant’s disclosure. See MPEP 707.05(c).
References Considered Pertinent but not relied upon
KANNO et al. (US Patent Application Pub. No. 20210064520 A1) teaches a memory system that includes a nonvolatile memory and a controller. In response to receiving a first write command from a host, the controller determines a first physical address indicative of a physical storage location of the nonvolatile memory to which first write data associated with the first write command is to be written, and updates an address translation table such that the first physical address is associated with a logical address of the first write data. KANNO discloses that the controller starts updating the address translation table before the transfer of the first write data is finished or before the write of the first write data to the nonvolatile memory is finished.
Baryshnikov et al. (US Patent Application Pub. No. 20080209428 A1) teaches a system having a receiving component that inputs a request and a governor component that assigns each incoming request to a workload group and each workload group to a resource pool. Baryshnikov discloses that the governor component has a recognition component that determines the existence of an incoming request, and an association component that classifies the request with a workload group based on a classification logic. Baryshnikov further suggests the governor component has an optimization component that classifies the request with a workload group based on previous classifications.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GETENTE A YIMER whose telephone number is (571) 270-7106. The examiner can normally be reached Monday-Friday, 6:30-3:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, IDRISS N ALROBAYE, can be reached at 571-270-1023. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GETENTE A YIMER/Primary Examiner, Art Unit 2181