DETAILED ACTION
This Action is responsive to the Amendments filed on 11/03/2025.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 1-6 and 8 are amended. Claims 1-6 and 8 are pending and have been examined.
Claim Objections
Claims 1 and 8 are objected to because of the following informalities:
Claim 1 recites “a top position of the standby request queue table” in both the 27th and 30th lines. If applicant intends for Claim 1 to reference the same top position of the same standby request queue table, examiner recommends applicant amend Claim 1, 30th line to instead read “the top position of the standby request queue table”.
Claim 8 recites substantially similar language as Claim 1 and is therefore objected to according to the same rationale provided above.
Claim 8 recites “the standby request queue” in the final line. In order to improve the clarity of the claim by maintaining consistent nomenclature throughout the claims, examiner recommends applicant amend Claim 8, final line instead to read “the standby request queue table”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-6 and 8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding Claim 1,
Claim 1, 29th-30th lines, recites “the I/O resource retained in a top position of the standby request queue table”, the scope of which cannot be determined due to numerous reasonable interpretations. In particular, examiner cannot determine whether the aforementioned “the I/O resource” corresponds to:
1. The same claimed “I/O resource” which is moved from a top position of the standby request queue table to an end position of the standby resource queue table (see Claim 1, lines 27-28); or
2. An I/O resource which is located in a top position of the standby request queue table after the I/O resource of lines 27-28 is moved into an end position of the standby resource queue table.
Stated another way, examiner cannot determine whether or not “the I/O resource” recited in lines 27 and 29 correspond to the same I/O resource. Therefore, the scope of Claim 1 is indefinite, and the claim is rejected under 35 U.S.C. 112(b). For the purposes of prior art, examiner will interpret Claim 1 according to the first interpretation provided above (i.e., the aforementioned I/O resources correspond to the same I/O resource).
Claims 2-6 depend on Claim 1 and are therefore similarly rejected under 35 U.S.C. 112(b). Claim 8 recites substantially similar language as Claim 1 and is therefore rejected for the same rationale provided above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Love (US 8959249 B1)(hereafter referred to as Love) further in view of Twohig (US 20240256179 A1)(hereafter referred to as Twohig) and Hassan et al. (US 20220006855 A1)(hereafter referred to as Hassan).
Regarding Claim 1,
Love discloses the following limitations:
A storage system (Fig. 1) comprising a storage node (Server 151, Fig. 1) for processing an I/O request to one or a plurality of units of cloud storage (Storage Array 111, Fig. 1)(“As shown in FIG. 1, server 151 has access to storage array 111 over network 121 … storage array 111 may be any type of remote or cloud-based storage volume, any may across one or more backend disks for storing data for the VMs hosted by server 151.” [Col. 3, 35-45th lines]),
wherein a processor for the storage node (“the instruction execution system, apparatus or device” [Col. 2, 60-65th lines]) executes an I/O processing … (“I/O scheduling” [Col. 1] // I/O Scheduler 103a, Fig. 1) to perform the I/O request and an I/O response (“a VM transmitting requests to and receiving responses from a remote cloud-based storage array” [Col. 2, 35-40th lines]) transmitted from the cloud storage in response to the I/O request (“I/O scheduling … is an operating system process for organizing read/write requests, commands and/or operations submitted to a storage volume. This process may be performed by an I/O scheduler software application” [Col. 1, 20-30th lines] // “Server 151 may be any computer capable of hosting a VM … In an embodiment, an I/O scheduler may reside on each VM … The requests and responses transmitted between servers 151 and storage array 111 may be referred to collectively as “in-flight operations” or “in-flight ops” [Col. 3, 30-45th lines]) – As taught in Cols. 1 + 2 and shown in Fig. 1, a server 151 uses an I/O scheduler 103a to process read and write requests directed to storage array 111—
wherein the I/O processing … is configured to retain (Fig. 2, step 209) a plurality of I/O resources, each of the plurality of I/O resources including both information (“A priority tag” [Col. 4, 1st line]) and a buffer to be used for processing relating to the I/O request (“The I/O scheduler can utilize a priority tag … associated with a request … a priority tag can form part of a request” [Col. 3, final ¶ + Col. 4, 1-25th lines] // “Requests may be submitted to a storage volume in the form of a queue” [Col. 1, 35-45th lines] // “In block 203, the I/O scheduler 103a monitors for a request intended for transmission to storage array 111 … In block 209, requests are grouped in the queue that corresponds to their level of priority … the I/O scheduler creates and uses a plurality of corresponding priority queues” [Col. 4, 25th line-end + Col. 5, 1-10th lines]) – As shown in Fig. 2, I/O schedulers “group” (i.e., “retain”) requests (i.e., “a plurality of I/O resources”) using queues. In this case, examiner considers a priority tag forming “part” of an I/O request as “information” associated with the I/O request; and the other “part” of the I/O request as “a buffer” associated with the I/O request-- ,
… a standby request queue table and a standby resource queue table (“a plurality of corresponding priority queues” [Col. 5] // Col. 4, 55-60th lines) – As previously discussed, a queue stores a plurality of requests. Examiner accordingly considers a queue of Love as reading on the claimed concept of a “request queue table”, under the BRI of the claimed language. As taught in Cols. 4 and 5, an I/O scheduler can place a given I/O request in any of a plurality (e.g., at least three) of queues. In this case, examiner considers at least two of the queues where an I/O scheduler does not place a given I/O request as “a standby request queue table” and “a standby resource queue table”, respectively (e.g., a second and a third queue table storing I/O requests).
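For illustration only, the priority-tag grouping of Love discussed above (an I/O scheduler placing each incoming request into the one of a plurality of queues matching that request's priority tag) can be modeled in a short Python sketch. All names below are hypothetical and form no part of the Love reference or of the claims:

```python
from collections import deque

# Hypothetical model of Love's plurality of priority queues: each request
# carries a priority tag, and the scheduler groups the request into the
# queue corresponding to that tag (Love, Fig. 2, step 209).
PRIORITIES = ("high", "medium", "low")

def enqueue_by_priority(queues: dict, request: dict) -> None:
    """Place a request into the queue matching its priority tag."""
    queues[request["priority"]].append(request)

queues = {p: deque() for p in PRIORITIES}
for req in ({"id": 1, "priority": "low"},
            {"id": 2, "priority": "high"},
            {"id": 3, "priority": "high"}):
    enqueue_by_priority(queues, req)
# The "high" queue now holds requests 2 and 3 in arrival order.
```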
wherein the processor causes the I/O processing … to:
transmit (Fig. 2, step 211) the I/O request to the cloud storage in response to a request from a host (VM 101a, Fig. 1)(“In block 211 of Fig. 2, requests are transmitted to storage array 111 according to the level of priority of the respective queue” [Col. 4, 25th line – end + Col. 5, 1-20th lines]);
perform standby processing (Col. 5) for waiting to receive the I/O response to the I/O request after transmitting the I/O request to the cloud storage until an elapse of a first time-out time (“a pre-set time interval” [Col. 5])(“the I/O scheduler can consider the time a request has been on the queue. If a request has not been serviced within a pre-set time interval … the I/O scheduler may move the delayed request” [Col. 5, 50-60th lines]) – As detailed in Col. 5, the I/O scheduler considers the amount of time a request has been located within a queue and identifies requests which are located in the queue for longer than “a pre-set time interval”, which examiner considers as “standby processing”--;
transfer the I/O response to the host if having received the I/O response from the cloud storage before the elapse of the first time-out time (“responses to requests are transmitted back to a virtual machine” [Col. 6, 45-50th lines]) – One of ordinary skill in the art would accordingly understand that responses to the requests which are serviced within the pre-set time interval (as discussed above with respect to Col. 5) would be transmitted back to the virtual machine (i.e., “the host” which originally sent the request)-;
move an I/O resource of the plurality of I/O resources related to the I/O request from the I/O processing … and … retain the I/O resource if not having received the I/O response from the cloud storage before the elapse of the first time-out time (“If a request has not been serviced within a pre-set time interval … the I/O scheduler may move the delayed request” [Col. 5, 50-60th lines]); and
store the I/O resource, which has been moved from the I/O processing …, in the standby request queue table (“the I/O scheduler may move the request to another queue” [Col. 5, 50-60th lines] // Col. 1, 35-45th lines) – As clarified in Col. 5, delayed I/O requests are moved into another queue. In this case, examiner considers the queue into which the delayed request is moved as “the standby request queue table” (i.e., a queue table which is distinct from an initial queue table where an I/O scheduler retains an I/O request),
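For illustration only, the timeout-driven queue migration of Love Col. 5 mapped above (a request not serviced within the pre-set time interval is moved from its original queue into another queue) can be sketched as follows. All names are hypothetical and form no part of the cited reference or of the claims:

```python
from collections import deque

TIMEOUT = 1.0  # hypothetical stand-in for Love's "pre-set time interval"

def sweep_delayed(pending: deque, standby: deque, now: float) -> None:
    """Move every pending request older than TIMEOUT into the standby queue,
    preserving the arrival order of the requests that remain."""
    kept = deque()
    while pending:
        enqueued_at, request = pending.popleft()
        if now - enqueued_at > TIMEOUT:
            standby.append((enqueued_at, request))  # delayed: relocate
        else:
            kept.append((enqueued_at, request))     # still within interval
    pending.extend(kept)

pending = deque([(0.0, "req-A"), (0.9, "req-B")])
standby = deque()
sweep_delayed(pending, standby, now=1.5)
# req-A (age 1.5) exceeds TIMEOUT and is moved; req-B (age 0.6) is kept.
```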
Love is silent regarding a “thread” structure included within an I/O scheduler which processes I/O requests. Specifically, Love does not explicitly disclose the following limitations:
a processor for the storage node executes an I/O processing thread to perform the I/O request and an I/O response
However, Twohig discloses the following limitations:
a processor (¶0006) for the storage node executes an I/O processing thread to perform the I/O request and an I/O response (“receiving of the request may include enqueuing the request in the first queue … determining of how to handle the request can include assigning the request to a thread of the controller” [0005] // “each thread is capable of servicing at most one request at a time … a request is completed on a given thread” [0037] // “Request controllers can include internal request queues and threads for servicing API requests” [0048] // Fig. 3) – As shown in Fig. 3, a storage controller 312 processes client requests directed to storage resources 306 using respective queues 314, similar to how an I/O scheduler 103a processes VM requests directed to cloud storage 111 out of a priority queue, as detailed in Love Fig. 1. Examiner accordingly considers a controller 312 of Twohig Fig. 3 as analogous to an I/O scheduler 103a of Love. As taught in Twohig ¶0048, a controller 312 includes a plurality of threads (i.e., at least “an I/O processing thread”) which are assigned to process requests and which can only process at most one request at a time.
Love and Twohig are considered analogous to the claimed invention because they both relate to the same field of scheduling and performing I/O requests in a distributed storage environment. Therefore, it would have been obvious for someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Love with the teachings of Twohig and realize a storage node which includes an I/O processing thread to perform I/O requests using a queue structure. Using threads to perform I/O request processing would be expected to improve the throughput at which a storage node can process I/O requests by enabling parallel processing of requests while still coordinating access to shared resources, as disclosed in Twohig ¶¶0037; 0040: “Controller 210 may be configured to allocate a plurality of threads 216a, 216b, etc. (216 generally) for servicing API requests in parallel” [0037] // “In some cases, threads can coordinate their behavior using semaphores, shared locks, counters, and/or other programming structures usable for controlling access to shared resources.” [0040]
Although Love Fig. 1 shows that server 151 comprises several I/O schedulers 103 which perform I/O requests, and Twohig ¶0048-49 discloses that the processing threads within controllers 312 are allocated specifically to process particular types of requests, the combined teachings of Love and Twohig do not disclose a type of thread specifically for processing requests which exceed the first time-out time. Additionally, the combined teachings of Love and Twohig do not explicitly disclose distinct “standby request” and “standby resource” queues used for processing requests exceeding the first time-out time. Specifically, the combined teachings of Love and Twohig do not explicitly disclose the following limitations:
a response standby processing thread
wherein the response standby processing thread has a standby request queue table and a standby resource queue table
move an I/O resource … related to the I/O request … to the response standby processing thread and cause the response standby processing thread to retain the I/O resource
However, Hassan discloses that reconciler threads perform request processing on requests which have exceeded a pre-defined timeout period. Specifically, Hassan discloses the following limitations:
a response standby processing thread (Reconciler Threads 520, Fig. 5)
wherein the response standby processing thread has a standby request queue table (Default Queue 530, Fig. 5) and a standby resource queue table (Slow Queue 532, Fig. 5)(Fig. 5 // ¶¶0052; 0055) – As shown in Hassan Fig. 5 and detailed in ¶¶0052; 0055, reconciler threads 520 process requests using both a default queue 530 and a slow queue 532--
move (¶0052) an I/O resource … related to the I/O request (In-Progress Request 508, Fig. 5)… to the response standby processing thread and cause the response standby processing thread to retain the I/O resource (“the resource management system may implement a reconciler to update an in-progress request 508 for which the pre-defined timeout period has elapsed. In some embodiments, the system implements a reconciler instance 510 to fetch the request … When called, the illustrated reconciler instance 510 implements a request fetcher 540 to place the in-progress request 508 in a default queue 530.” [0051-52] // Fig. 5 // ¶¶0029-30 // Fig. 1) – As shown in Hassan Fig. 1 and as described in ¶0029-30, a request clearing 130 coupled to a resource management system 120 processes and persists resource requests 110 directed towards downstream services 112, similar to how the Love Fig. 1 I/O scheduler 103a coupled to a server 151 processes and persists requests directed towards storage 111. Examiner accordingly considers the resource management system 120 of Hassan Fig. 1 as analogous to the server 151 of Love Fig. 1. As disclosed in Hassan ¶0051, an in-progress request 508 has an elapsed “pre-defined timeout period”, which examiner considers as analogous to the request of Love Col. 5 which has not been serviced “within a pre-set time interval” (i.e., “the I/O request”). As disclosed in Hassan ¶0051-52, the timed-out request is placed by a reconciler into a default queue 530 to be processed by a reconciler thread. In this context, examiner considers a reconciler thread 520 as reading on the claimed concept of a “response standby processing” type of “thread”--
Love, Twohig, and Hassan are all considered analogous to the claimed invention because they all relate to the same field of a storage node scheduling and queuing requests which are directed to downstream resources. Therefore, it would have been obvious for someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Love and Twohig with the teachings of Hassan and realize a storage node comprising a dedicated thread and two dedicated queues for performing requests which have exceeded a first time-out time. Identifying and reconciling requests which have not been completed within a pre-defined time allow for better resource tracking and metadata management with reduced operational cost in centralized systems which process requests directed to downstream resources, as disclosed in Hassan ¶0048: “a reconciler may interrogate the downstream service after a pre-defined timeout … In some embodiments, a reconciler instance acts to update and track requests that have not completed the completion stage within a pre-defined time, which may allow better resource tracking and metadata management with reduced operational cost.” [0048]
The combined teachings of Love, Twohig, and Hassan additionally disclose the following limitations:
wherein the processor causes the response standby processing thread (Hassan, Reconciler Threads 520, Fig. 5) to move (Hassan, ¶0055) the I/O resource from a top position (Love, “the front of the queue” [Col. 5, 1-10th lines]) of the standby request queue table to an end position (Love, “the rear of the queue” [Col. 5, 1-10th lines]) of the standby resource queue table and execute the standby processing relating to the I/O response which uses the I/O resource retained in a top position of the standby request queue table. (Hassan, “In some embodiments, downstream service 513 is unresponsive or does not communicate with the reconciler instance 510 rapidly, and the in-progress request is transferred to a slow queue 532 … to reduce average wait times in the default queue” [0055] // Twohig, “On a first come, first serve basis, API requests can be removed from the head of the queue” [0037] // Love, “requests are ordered according to priority. In this fashion, higher priority requests may be placed at the front of the queue so that they are serviced before lower priority requests, which may be placed at the rear of the queue” [Col. 5, 1-10th lines]) – As disclosed in Hassan ¶0055, when downstream service 513 is unresponsive, a request 508 is moved (i.e., from a default queue 530; see ¶0052) into a slow queue 532. Examiner accordingly considers moving a request from a default queue 530 into a slow queue 532 as “mov[ing]” an I/O resource “from the standby request queue table” to “the standby resource queue table” so that the slow queue 532 can retain the request instead of default queue 530 (i.e., “executes standby processing” while awaiting a response “if the I/O resource is retained in” slow queue 532). As taught in Twohig, requests in a queue are performed in order from “the head of the queue” (i.e., the request in “a top position” of a queue is performed). As taught in Love Col. 5, a given request can be placed into “the rear of” a queue (i.e., into “an end position” of a queue) so other requests can be performed before the given request. One of ordinary skill in the art would accordingly understand that the in-progress request of Hassan which is moved from a default queue 530 to a slow queue 532 would be moved from a top position of the default queue 530 to an end position of the slow queue 532 because requests in a queue are performed in order. It similarly follows that the moved request would once again be serviced from the slow queue 532 once reaching “a top position” of the slow queue 532.
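For illustration only, the head-to-tail movement mapped above (a resource at the top position of the standby request queue table is moved to the end position of the standby resource queue table, after which processing continues with whatever resource now occupies the top position) can be sketched as follows. All names are hypothetical and form no part of the cited references or of the claims:

```python
from collections import deque

# Hypothetical sketch: pop the head ("top position") of the standby
# request queue, append it to the tail ("end position") of the standby
# resource queue, and continue with the new head of the request queue.
def rotate_to_resource_queue(request_q: deque, resource_q: deque):
    """Move the head of request_q to the tail of resource_q and return
    the new head of request_q (or None if request_q is now empty)."""
    moved = request_q.popleft()   # "top position" of the request table
    resource_q.append(moved)      # "end position" of the resource table
    return request_q[0] if request_q else None

request_q = deque(["res-1", "res-2", "res-3"])
resource_q = deque(["res-0"])
head = rotate_to_resource_queue(request_q, resource_q)
# res-1 moves to the tail of resource_q; res-2 is now at the head.
```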
Regarding Claim 8,
Love discloses the following limitations:
An I/O request processing method for a storage system (Fig. 1) and to be executed by the storage system including cloud storage (Storage Array 111, Fig. 1) and a storage node (Server 151, Fig. 1) for processing an I/O request to one or a plurality of units of cloud storage (Storage Array 111, Fig. 1)(“As shown in FIG. 1, server 151 has access to storage array 111 over network 121 … storage array 111 may be any type of remote or cloud-based storage volume, any may across one or more backend disks for storing data for the VMs hosted by server 151.” [Col. 3, 35-45th lines]),
wherein a processor for the storage node (“the instruction execution system, apparatus or device” [Col. 2, 60-65th lines]) executes an I/O processing … (“I/O scheduling” [Col. 1] // I/O Scheduler 103a, Fig. 1) to perform the I/O request and an I/O response (“a VM transmitting requests to and receiving responses from a remote cloud-based storage array” [Col. 2, 35-40th lines]) transmitted from the cloud storage in response to the I/O request (“I/O scheduling … is an operating system process for organizing read/write requests, commands and/or operations submitted to a storage volume. This process may be performed by an I/O scheduler software application” [Col. 1, 20-30th lines] // “Server 151 may be any computer capable of hosting a VM … In an embodiment, an I/O scheduler may reside on each VM … The requests and responses transmitted between servers 151 and storage array 111 may be referred to collectively as “in-flight operations” or “in-flight ops” [Col. 3, 30-45th lines]) – As taught in Cols. 1 + 2 and shown in Fig. 1, a server 151 uses an I/O scheduler 103a to process read and write requests directed to storage array 111--,
… wherein the I/O processing … is configured to retain (Fig. 2, step 209) a plurality of I/O resources, each of the plurality of I/O resources including both information (“A priority tag” [Col. 4, 1st line]) and a buffer to be used for processing relating to the I/O request (“The I/O scheduler can utilize a priority tag … associated with a request … a priority tag can form part of a request” [Col. 3, final ¶ + Col. 4, 1-25th lines] // “Requests may be submitted to a storage volume in the form of a queue” [Col. 1, 35-45th lines] // “In block 203, the I/O scheduler 103a monitors for a request intended for transmission to storage array 111 … In block 209, requests are grouped in the queue that corresponds to their level of priority … the I/O scheduler creates and uses a plurality of corresponding priority queues” [Col. 4, 25th line-end + Col. 5, 1-10th lines]) – As shown in Fig. 2, I/O schedulers “group” (i.e., “retain”) requests (i.e., “a plurality of I/O resources”) using queues. In this case, examiner considers a priority tag forming “part” of an I/O request as “information” associated with the I/O request; and the other “part” of the I/O request as “a buffer” associated with the I/O request--
… a standby request queue table and a standby resource queue table (“a plurality of corresponding priority queues” [Col. 5] // Col. 4, 55-60th lines) – As previously discussed, a queue stores a plurality of requests. Examiner accordingly considers a queue of Love as reading on the claimed concept of a “request queue table”, under the BRI of the claimed language. As taught in Cols. 4 and 5, an I/O scheduler can place a given I/O request in any of a plurality (e.g., at least three) of queues. In this case, examiner considers at least two of the queues where an I/O scheduler does not place a given I/O request as “a standby request queue table” and “a standby resource queue table”, respectively (e.g., a second and a third queue table storing I/O requests),
wherein the processor causes the I/O processing … to:
transmit (Fig. 2, step 211) the I/O request to the cloud storage in response to a request from a host (VM 101a, Fig. 1)(“In block 211 of Fig. 2, requests are transmitted to storage array 111 according to the level of priority of the respective queue” [Col. 4, 25th line – end + Col. 5, 1-20th lines]);
perform standby processing (Col. 5) for waiting to receive the I/O response to the I/O request after transmitting the I/O request to the cloud storage until an elapse of a first time-out time (“a pre-set time interval” [Col. 5])(“the I/O scheduler can consider the time a request has been on the queue. If a request has not been serviced within a pre-set time interval … the I/O scheduler may move the delayed request” [Col. 5, 50-60th lines]) – As detailed in Col. 5, the I/O scheduler considers the amount of time a request has been located within a queue and identifies requests which are located in the queue for longer than “a pre-set time interval”, which examiner considers as “standby processing”--;
transfer the I/O response to the host if having received the I/O response from the cloud storage before the elapse of the first time-out time (“responses to requests are transmitted back to a virtual machine” [Col. 6, 45-50th lines]) – One of ordinary skill in the art would accordingly understand that responses to the requests which are serviced within the pre-set time interval (as discussed above with respect to Col. 5) would be transmitted back to the virtual machine (i.e., “the host” which originally sent the request)-; and
move an I/O resource of the plurality of I/O resources related to the I/O request from the I/O processing … and … retain the I/O resource if not having received the I/O response from the cloud storage before the elapse of the first time-out time (“If a request has not been serviced within a pre-set time interval … the I/O scheduler may move the delayed request” [Col. 5, 50-60th lines]); and
store the I/O resource, which has been moved from the I/O processing …, in the standby request queue table (“the I/O scheduler may move the request to another queue” [Col. 5, 50-60th lines]) – As clarified in Col. 5, delayed I/O requests are moved into another queue. In this case, examiner considers the queue into which the delayed request is moved as “the standby request queue table” (i.e., a queue table which is distinct from an initial queue where an I/O scheduler retains an I/O request),
Love is silent regarding a “thread” structure included within an I/O scheduler which processes I/O requests. Specifically, Love does not explicitly disclose the following limitations:
a processor for the storage node executes an I/O processing thread to perform the I/O request and an I/O response
However, Twohig discloses the following limitations:
a processor (¶0006) for the storage node executes an I/O processing thread to perform the I/O request and an I/O response (“receiving of the request may include enqueuing the request in the first queue … determining of how to handle the request can include assigning the request to a thread of the controller” [0005] // “each thread is capable of servicing at most one request at a time … a request is completed on a given thread” [0037] // “Request controllers can include internal request queues and threads for servicing API requests” [0048] // Fig. 3) – As shown in Fig. 3, a storage controller 312 processes client requests directed to storage resources 306 using respective queues 314, similar to how an I/O scheduler 103a processes VM requests directed to cloud storage 111 out of a priority queue, as detailed in Love Fig. 1. Examiner accordingly considers a controller 312 of Twohig Fig. 3 as analogous to an I/O scheduler 103a of Love. As taught in Twohig ¶0048, a controller 312 includes a plurality of threads (i.e., at least “an I/O processing thread”) which are assigned to process requests and which can only process at most one request at a time.
Love and Twohig are considered analogous to the claimed invention because they both relate to the same field of scheduling and performing I/O requests in a distributed storage environment. Therefore, it would have been obvious for someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Love with the teachings of Twohig and realize a storage node which includes an I/O processing thread to perform I/O requests using a queue structure. Using threads to perform I/O request processing would be expected to improve the throughput at which a storage node can process I/O requests by enabling parallel processing of requests while still coordinating access to shared resources, as disclosed in Twohig ¶¶0037; 0040: “Controller 210 may be configured to allocate a plurality of threads 216a, 216b, etc. (216 generally) for servicing API requests in parallel” [0037] // “In some cases, threads can coordinate their behavior using semaphores, shared locks, counters, and/or other programming structures usable for controlling access to shared resources.” [0040]
Although Love Fig. 1 shows that server 151 comprises several I/O schedulers 103 which perform I/O requests, and Twohig ¶0048-49 discloses that the processing threads within controllers 312 are allocated specifically to process particular types of requests, the combined teachings of Love and Twohig do not disclose a type of thread specifically for processing requests which exceed the first time-out time. Additionally, the combined teachings of Love and Twohig do not explicitly disclose distinct “standby request” and “standby resource” queues used for processing requests exceeding the first time-out time. Specifically, the combined teachings of Love and Twohig do not explicitly disclose the following limitations:
a response standby processing thread
wherein the response standby processing thread has a standby request queue table and a standby resource queue table
move an I/O resource … related to the I/O request … to the response standby processing thread and cause the response standby processing thread to retain the I/O resource
However, Hassan discloses that reconciler threads perform request processing on requests which have exceeded a pre-defined timeout period. Specifically, Hassan discloses the following limitations:
a response standby processing thread (Reconciler Threads 520, Fig. 5)
wherein the response standby processing thread has a standby request queue table (Default Queue 530, Fig. 5) and a standby resource queue table (Slow Queue 532, Fig. 5)(Fig. 5 // ¶¶0052; 0055) – As shown in Hassan Fig. 5 and detailed in ¶¶0052; 0055, reconciler threads 520 process requests using both a default queue 530 and a slow queue 532--
move (¶0052) an I/O resource … related to the I/O request (In-Progress Request 508, Fig. 5)… to the response standby processing thread and cause the response standby processing thread to retain the I/O resource (“the resource management system may implement a reconciler to update an in-progress request 508 for which the pre-defined timeout period has elapsed. In some embodiments, the system implements a reconciler instance 510 to fetch the request … When called, the illustrated reconciler instance 510 implements a request fetcher 540 to place the in-progress request 508 in a default queue 530.” [0051-52] // Fig. 5 // ¶¶0029-30 // Fig. 1) – As shown in Hassan Fig. 1 and as described in ¶0029-30, a request clearing 130 coupled to a resource management system 120 processes and persists resource requests 110 directed towards downstream services 112, similar to how the Love Fig. 1 I/O scheduler 103a coupled to a server 151 processes and persists requests directed towards storage 111. Examiner accordingly considers the resource management system 120 of Hassan Fig. 1 as analogous to the server 151 of Love Fig. 1. As disclosed in Hassan ¶0051, an in-progress request 508 has an elapsed “pre-defined timeout period”, which examiner considers as analogous to the request of Love Col. 5 which has not been serviced “within a pre-set time interval” (i.e., “the I/O request”). As disclosed in Hassan ¶0051-52, the timed-out request is placed by a reconciler into a default queue 530 to be processed by a reconciler thread. In this context, examiner considers a reconciler thread 520 as reading on the claimed concept of a “response standby processing” type of “thread”—
Love, Twohig, and Hassan are all considered analogous to the claimed invention because they all relate to the same field of a storage node scheduling and queuing requests which are directed to downstream resources. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Love and Twohig with the teachings of Hassan and realize a storage node comprising a dedicated thread and two dedicated queues for performing requests which have exceeded a first time-out time. Identifying and reconciling requests which have not been completed within a pre-defined time allows for better resource tracking and metadata management with reduced operational cost in centralized systems which process requests directed to downstream resources, as disclosed in Hassan ¶0048: “a reconciler may interrogate the downstream service after a pre-defined timeout … In some embodiments, a reconciler instance acts to update and track requests that have not completed the completion stage within a pre-defined time, which may allow better resource tracking and metadata management with reduced operational cost.” [0048]
The combined teachings of Love, Twohig, and Hassan additionally disclose the following limitations:
wherein the processor causes the response standby processing thread (Hassan, Reconciler Threads 520, Fig. 5) to move (Hassan, ¶0055) the I/O resource from a top position (Love, “the front of the queue” [Col. 5, 1-10th lines]) of the standby request queue table to an end position (Love, “the rear of the queue” [Col. 5, 1-10th lines]) of the standby resource queue table and execute the standby processing relating to the I/O response which uses the I/O resource retained in a top position of the standby request queue (Hassan, “In some embodiments, downstream service 513 is unresponsive or does not communicate with the reconciler instance 510 rapidly, and the in-progress request is transferred to a slow queue 532 … to reduce average wait times in the default queue” [0055] // Twohig, “On a first come, first serve basis, API requests can be removed from the head of the queue” [0037] // Love, “requests are ordered according to priority. In this fashion, higher priority requests may be placed at the front of the queue so that they are serviced before lower priority requests, which may be placed at the rear of the queue” [Col. 5, 1-10th lines]) – As disclosed in Hassan ¶0055, when downstream service 513 is unresponsive, a request 508 is moved (i.e., from a default queue 530; see ¶0052) into a slow queue 532. Examiner accordingly considers moving a request from a default queue 530 into a slow queue 532 as “mov[ing]” an I/O resource “from the standby request queue table” to “the standby resource queue table” so that the slow queue 532 can retain the request instead of default queue 530 (i.e., “executes standby processing” while awaiting a response “if the I/O resource is retained in” slow queue 532). As taught in Twohig, requests in a queue are performed in order from “the head of the queue” (i.e., the request in “a top position” of a queue is performed). As taught in Love Col. 
5, a given request can be placed into “the rear of” a queue (i.e., into “an end position” of a queue) so other requests can be performed before the given request. One of ordinary skill in the art would accordingly understand that the in-progress request of Hassan which is moved from a default queue 530 to a slow queue 532 would be moved from a top position of the default queue 530 to an end position of the slow queue 532 because requests in a queue are performed in order. It similarly follows that the moved request would once again be serviced from the slow queue 532 once reaching “a top position” of the slow queue 532.
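For illustration only, the queue behavior attributed above to the combined references (a timed-out request moved from the top of a first queue to the end of a second queue, then serviced in order once it reaches the head of the second queue) can be sketched as follows. The sketch is hypothetical; its names and values appear in no cited reference and form no part of the claims or the record.

```python
from collections import deque

# Two FIFO queues standing in for the mapped "standby request queue
# table" (default queue) and "standby resource queue table" (slow
# queue). The left end of each deque is the queue's top position.
default_queue = deque(["req-timed-out", "req-b"])
slow_queue = deque(["req-a"])

# Move the request from the top position of the default queue to the
# end position of the slow queue.
moved = default_queue.popleft()
slow_queue.append(moved)

# Requests are serviced in order from the head of each queue, so the
# moved request is serviced only after it reaches the slow queue's top.
serviced = slow_queue.popleft()
```

Because both queues are first-in, first-out, the moved request necessarily enters at the end position and is serviced from the top position, which is the ordering rationale relied on in the mapping above.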
Claims 2, 4, and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Love further in view of Twohig, Hassan, and Stafford et al. (US 20190108064 A1)(cited by examiner in previous action)(hereafter referred to as Stafford).
Regarding Claim 2,
The same motivation to combine provided in Claim 1 is equally applicable to Claim 2. The combined teachings of Love, Twohig, and Hassan disclose the following limitations:
The storage system according to claim 1 (see Claim 1 limitation mappings above), wherein if the I/O response from the cloud storage is not received by the I/O processing thread before the elapse of the first time-out time (Love, Col. 5 // Hassan, “a request having an in-progress status 420 for a time exceeding a pre-defined timeout period” [0049]),
the processor causes a status transition (Hassan, “the request status may be modified” [0051]) … from a normal state (Hassan, “an in-progress status” [0049]) … to a temporary blockade state (Hassan, “reconciling status” [0051])(Hassan, “the resource management system may call a reconciler instance in response to a request having an in-progress status 420 for a time exceeding a pre-defined timeout period” [0049] // “the system implements a reconciler instance 510 to fetch the in-progress request … Once fetched, the request status may be modified to indicate that the request is reconciling (e.g., reconciling status 230 of FIG. 2)” [0051]) – As taught in Hassan, the “request status” of a request which is in-progress for “a time exceeding a pre-defined timeout period” (i.e., a request for which the response is not received “before the elapse of the first time-out time”) has its status “modified” (i.e., “a status transition”) from “an in-progress status” to “a reconciling status”, which examiner considers as a transition from “a normal state” to “a temporary blockade state”.
The combined teachings of Love, Twohig, and Hassan do not explicitly disclose a status transition associated with cloud storage responsive to a request exceeding the first time-out period. Specifically, the combined teachings of Love, Twohig, and Hassan do not explicitly disclose the following limitations:
a status transition of the cloud storage from a normal state capable of accepting an I/O request which is new, to a temporary blockade state incapable of accepting the new I/O request and capable of accepting only a response confirmation
However, Stafford discloses the following limitations:
a status transition (“revoking the draw privilege of the respective node” [0037]) of the cloud storage (Cluster 110, Fig. 1 // ¶¶0192-193) from a normal state capable of accepting an I/O request which is new, to a temporary blockade state incapable of accepting the new I/O request and capable of accepting only a response confirmation (“Each respective node in the cluster of nodes is granted a draw privilege. The draw privilege permits a respective node to draw one or more jobs from the queue” [0016] // “Accordingly, when … a first job in the queue has been in the queue for more than a predetermined amount of time … revoking the draw privilege of the respective node until the respective node has completed the first job. This forces the node to complete the first job.” [0037]) -- As shown in Stafford Fig. 1 and detailed in ¶0016, cloud storage nodes 282 draw jobs from a queue 248 located within an application server 102, similar to how storage array 111 of Love Fig. 1 receives I/O requests from a priority queue located within a server 151. Examiner accordingly considers application server 102 of Stafford Fig. 1 as analogous to server 151 of Love Fig. 1. As taught in Stafford ¶0016, each cloud storage node initially is “granted a draw privilege” which enables a respective storage node to draw jobs from queue 248 of application server 102. Examiner accordingly considers a cloud storage node being granted a draw privilege as “a normal state capable of accepting an I/O request which is new” (i.e., a state whereby a cloud storage node is capable of drawing a new job from a queue). As further disclosed in Stafford ¶0037, when a particular job has been located within queue 248 for “more than a predetermined amount of time” (i.e., when an I/O response is not received from a cloud storage node before “the first time-out time”), the draw privileges for the cloud storage node which is performing the job are revoked, which forces the node to complete the job. 
Examiner accordingly considers a cloud storage node having a draw privilege being revoked as “a temporary blockade state” which prevents the node from drawing new jobs until the timed-out job is completed.
Love, Twohig, Hassan, and Stafford are all considered analogous to the claimed invention because they all relate to the same field of a centralized server scheduling and processing requests targeting downstream resources. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Love, Twohig, and Hassan with the teachings of Stafford and realize a storage system which transitions cloud storage to a state of being incapable of accepting new requests until a timed-out request is complete. Doing so would prevent each node in a cluster of nodes from drawing too many jobs from a centralized queue, keeping each node within its respective memory and/or processor requirements, as disclosed in Stafford ¶0089: “Each node in the cluster has the privilege to independently draw jobs from the queue subject to the collective requirements of the drawn jobs. In other words, a node in the cluster cannot draw more jobs from the queue than it can handle, from the perspective of the memory requirements and/or the processor requirements of the drawn jobs.” [0089]
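For illustration only, the “draw privilege” transition that the mapping above equates with a normal state and a temporary blockade state can be sketched as follows. All names, values, and the time-out unit are hypothetical and appear in no cited reference or claim.

```python
TIMEOUT = 10.0  # illustrative first time-out period

class Node:
    """A node that may draw new jobs only while privileged."""

    def __init__(self):
        self.draw_privilege = True   # mapped "normal state"
        self.current_job = None

    def maybe_revoke(self, job_age):
        # Revoke the privilege when the in-progress job has been
        # pending longer than the predetermined amount of time.
        if self.current_job is not None and job_age > TIMEOUT:
            self.draw_privilege = False  # mapped "temporary blockade state"

    def complete_job(self):
        # Completing the timed-out job restores the privilege.
        self.current_job = None
        self.draw_privilege = True

node = Node()
node.current_job = "job-1"
node.maybe_revoke(job_age=12.0)   # exceeds the time-out: privilege revoked
blocked = not node.draw_privilege
node.complete_job()               # node returns to the normal state
```

The revocation is tied to completion of the specific timed-out job, mirroring the Stafford ¶0037 passage quoted above.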
Regarding Claim 4,
The same motivation to combine provided in Claim 2 is equally applicable to Claim 4. The combined teachings of Love, Twohig, Hassan, and Stafford disclose the following limitations:
The storage system according to claim 2, wherein the processor (Hassan, ¶0003) causes the response standby processing thread to:
perform (Hassan, ¶0057) the standby processing after transmitting (Hassan, ¶0056) the response confirmation to the cloud storage until an elapse of a second time-out time (Hassan, “Processing an in-progress request 508 … includes requesting resource status from downstream service 512 by the reconciler threads 520 … In some embodiments the reconciler instance 510 does not complete reconciliation of the in-progress request 508 … within … a maximum period of time for the reconciler instance 510 to reconcile the in-progress request 508” [0057]) – As disclosed in Hassan ¶¶0056-57, reconciler threads contact downstream service 512 in order to complete request 508 until a “maximum period of time” for the request to reconcile (i.e., “until an elapse of a second time-out time”)--; and
if having received the I/O response from the cloud storage before the elapse of the second time-out time, transfer the received I/O response to the host; (Hassan, “Processing an in-progress request 508 … includes requesting resource status from the downstream service 512 by the reconciler threads 520. In response, the downstream service 512 provides the status of the resource addressed by the in-progress request” [0056] // Love, Col. 6) – As disclosed in Hassan ¶0056, a reconciler thread receives a response to the request 508 from downstream service 512. As previously discussed (see Claim 1 limitation mappings above) and as detailed in Love Col. 6, responses to I/O requests are sent back to the host which originally sent the request.-- and
release the I/O resource relating to the I/O response retained by the response standby processing thread (Hassan, “In accordance with the returned resource status, the reconciler threads 520 dispose of the in-progress request” [0056]).
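For illustration only, the Claim 4 behavior as mapped above (standby processing performed until a second time-out, with the response transferred and the resource released if it arrives in time) can be sketched as follows. The function, its parameters, and the polling interface are hypothetical and are not drawn from any cited reference.

```python
def standby_processing(poll, second_timeout, now=0, step=1):
    """Poll until a response arrives or `second_timeout` elapses.

    Returns (response, released), where `released` indicates that the
    retained I/O resource was released after the response was handled.
    """
    while now < second_timeout:
        response = poll(now)
        if response is not None:
            # Transfer the response to the host, then release the
            # retained I/O resource.
            return response, True
        now += step
    # Second time-out elapsed without a response; resource not released
    # on this path.
    return None, False

# A downstream service that answers at t=3 (illustrative).
resp, released = standby_processing(
    poll=lambda t: "io-response" if t >= 3 else None,
    second_timeout=10,
)
```

The two return paths correspond to the Claim 4 and Claim 6 branches, respectively: a timely response is forwarded and its resource released, while an elapsed maximum period leaves the request for further handling.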
Regarding Claim 6,
The same motivation to combine provided in Claim 4 is equally applicable to Claim 6. The combined teachings of Love, Twohig, Hassan, and Stafford disclose the following limitations:
The storage system according to claim 4 (see Claim 4 limitation mappings above), wherein if the I/O response from the cloud storage is not received by the response standby processing thread before the elapse of the second time-out time (Hassan, “the reconciler instance 510 does not complete reconciliation … within … a maximum period of time” [0057]),
the processor causes a status transition of the cloud storage from the temporary blockade state (Stafford, “revoking the draw privilege of the respective node” [0037]) to a blockade state incapable of accepting the I/O request and the response confirmation (Stafford, “In some such embodiments, first, the draw privileges of some of the nodes is terminated. Then, as such nodes complete their existing jobs, they are terminated from the cluster” [0025]) – As previously discussed (see Claims 2 and 4 limitation mappings above), the draw privilege of a node is revoked (Stafford) when a reconciler thread has not completed a request before a maximum period of time (Hassan). As clarified in Stafford ¶0025, after a node which has had its draw privileges revoked (i.e., is in “the temporary blockade state”) has completed a timed-out job, the node is then terminated from the cluster. Examiner considers a node being terminated from a cluster as the node “transition[ing]” “to a blockade state” where a node cannot receive or provide responses to new requests-- and
releases (Hassan, ¶0057) the I/O resource relating to the I/O response retained by the response standby processing thread (Hassan, “In such cases, … the job re-scheduler may release the in-progress request 508 by updating the status of the in-progress request to allow another reconciler instance to fetch and process” [0057]) – As clarified in Hassan ¶0057, requests which cannot be reconciled by a given reconciler instance are then released by the reconciler instance.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Love further in view of Twohig, Hassan, Stafford, and Barszczak et al. (US 20200112628 A1)(cited by examiner in previous action)(hereafter referred to as Barszczak).
Regarding Claim 3,
The same motivation to combine provided in Claim 2 is equally applicable to Claim 3. The combined teachings of Love, Twohig, Hassan, and Stafford disclose the following limitations:
The storage system according to claim 2 (see Claim 2 limitation mappings above), wherein … the processor detaches the cloud storage from the storage system (Stafford, “removing the respective node from the cluster” [0035])
Although Stafford generally discloses that nodes exceeding a second time-out period are removed (i.e., “detached”) from the cluster, and Hassan ¶0057 discloses that a reconciler can track both “a number of times a downstream service has been contacted for resource information” and “a maximum number of retrials” in order to determine when to allow another instance to reconcile a request, the combined teachings of Love, Twohig, Hassan, and Stafford do not explicitly disclose the following limitations:
wherein the processor measures a number of times when a status of the cloud storage is changed to the temporary blockade state; and wherein when the number of times reaches an upper limit count, the processor detaches the cloud storage from the storage system
However, Barszczak discloses within the context of identifying timed-out nodes in a distributed storage system (see Fig. 4) that once a number of missed heartbeat messages received from a given node exceeds a predetermined “error threshold”, the node can subsequently be removed from the system.
Barszczak discloses the following limitations:
wherein the processor measures (decision block 599, Fig. 5B) a number of times when a status of the cloud storage is changed to the temporary blockade state (“missed heartbeat messages” [0045]), and wherein when the number of times reaches an upper limit count (“an error threshold” [0045] // Decision block 599, ‘Yes’ branch, Fig. 5B), the processor detaches the cloud storage from the storage system (block 565, Fig. 5B // ¶0019)(“flow continues to decision 599 where a determination may be made as to if an error threshold (e.g., for missed heartbeat messages) has been reached … if there is an error threshold crossing, the YES prong of decision 599, flow may continue to block 565 … where a recovery action may be initiated as described above” [0045] // “In the case where a node becomes unavailable, all functionality of that node may be failed over to one or more other nodes in the cluster that remain available” [0019]) – As disclosed in ¶¶0019 and 0045, after a particular node has missed a number of heartbeat messages (i.e., has experienced a number of time-outs; see ¶0019) exceeding a predetermined “error threshold”, all functionality of the node is moved to other available nodes in the cluster (i.e., the node is “detached” from the storage system).
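For illustration only, the threshold mechanism described above (counting missed heartbeats per node and detaching a node once the count crosses a predetermined error threshold) can be sketched as follows. The function, event format, and threshold value are hypothetical and appear in no cited reference.

```python
ERROR_THRESHOLD = 3  # illustrative upper limit count

def process_heartbeats(events, threshold=ERROR_THRESHOLD):
    """`events` is a sequence of (node, received) heartbeat samples.

    Returns the set of nodes detached after missing `threshold` or
    more consecutive heartbeats.
    """
    missed = {}
    detached = set()
    for node, received in events:
        if received:
            missed[node] = 0               # a good heartbeat resets the count
        else:
            missed[node] = missed.get(node, 0) + 1
            if missed[node] >= threshold:
                detached.add(node)         # initiate recovery / fail over
    return detached

events = [("n1", False), ("n1", False), ("n2", False),
          ("n1", False), ("n2", True)]
gone = process_heartbeats(events)
```

Here node n1 misses three heartbeats and is detached, while n2 recovers after a single miss, matching the reset-and-count behavior the mapping attributes to decision block 599.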
Love, Twohig, Hassan, Stafford, and Barszczak are all considered to be analogous to the claimed invention because they all relate to the same field of management of timed-out requests in a distributed storage system. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Love, Twohig, Hassan, and Stafford with the teachings of Barszczak and realize a processor employing a heartbeat mechanism which detaches a node from a storage system after the node experiences a threshold number of time-outs. Doing so would enable heartbeats to be broadcast to nodes at a reduced frequency, thereby resulting in reduced processing overhead as compared to traditional heartbeat mechanisms, as disclosed in Barszczak ¶0024: “This disclosure represents … a new type of non-persistent heartbeat message to replace some of the persistent heartbeat messages such that a frequency of persistent heartbeat message may be reduced (e.g., may be performed at a much slower rate) and thus reduce overhead for processing heartbeat messages overall.”
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Love further in view of Twohig, Hassan, Stafford, and Lake et al. (US 20220357852 A1)(cited by examiner in previous action)(hereafter referred to as Lake).
Regarding Claim 5,
The same motivation to combine provided in Claim 4 is equally applicable to Claim 5. The combined teachings of Love, Twohig, Hassan, and Stafford disclose the following limitations:
The storage system according to claim 4 (see Claim 4 limitation mappings above) wherein
if the I/O response from the cloud storage is received by the response standby processing thread before the elapse of the second time-out time (Hassan, ¶0057), the processor (Stafford, ¶0010) causes a status transition of the cloud storage from the temporary blockade state to the normal state (Stafford, “revoking the draw privilege of the respective node until the respective node has completed the first job” [0037]) – As disclosed in Stafford ¶0037, nodes which are selected to complete a job exceeding the first time-out period have their “draw privilege” “revoked” (i.e., are placed into “the temporary blockade state”; see also Claim 2 limitation mappings above) specifically until the completion of the job. Accordingly, once the job is completed, the respective node is considered as transitioning back to the “normal state”.
The combined teachings of Love, Twohig, Hassan, and Stafford are silent regarding performing “difference rebuilding” to recover data once the job exceeding the first time-out period is completed. Specifically, the combined teachings of Love, Twohig, Hassan, and Stafford are silent regarding the following limitations:
wherein the processor … executes difference rebuilding processing for recovering data of the cloud storage
However, Lake discloses within the context of recovering data from a disconnected node in a distributed storage system (see Fig. 1) that upon a node re-connecting to the system, “a partial rebuild” of “a subset of new data” is performed to recover data back to the disconnected node.
Lake discloses the following limitations:
wherein the processor (Storage Controller 120, Fig. 1 // ¶0005) … executes difference rebuilding processing for recovering data of the cloud storage (“When applying data mirroring techniques, if a data mirror becomes disconnected and new data is written to the remaining data mirrors, the disconnected mirror is no longer up to date. When the disconnected mirror reconnects, redundancy can be restored by performing … a partial rebuild of the disconnected mirror … When the disconnected mirror reconnects, only the tracked differences are written to the disconnected mirror” [0014]) – As disclosed in Lake ¶0014, after “a data mirror” (e.g., Mirror Volume 150-1 of Fig. 2) reconnects, “a partial rebuild” (i.e., “difference rebuilding processing”) is performed to recover data back to the disconnected mirror.
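For illustration only, a partial (“difference”) rebuild of the kind described above can be sketched as follows: while a mirror is disconnected, only the blocks written in the interim are tracked, and on reconnect only those tracked differences are copied back. The block-map representation and names are hypothetical and are not drawn from any cited reference.

```python
# Primary and mirror start in sync before the disconnect.
primary = {0: "a", 1: "b", 2: "c"}
mirror = dict(primary)

dirty = set()  # differences tracked while the mirror is disconnected

def write(block, data):
    """Write to the primary while the mirror is down, tracking the delta."""
    primary[block] = data
    dirty.add(block)

write(1, "b2")
write(3, "d")

# On reconnect, copy only the tracked differences rather than
# performing a full rebuild of every block.
for block in dirty:
    mirror[block] = primary[block]
copied = len(dirty)
dirty.clear()
```

Only two blocks are transferred instead of the full four, which is the bandwidth-and-time saving Lake ¶0014 identifies as the motivation for a partial rebuild over a full rebuild.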
Love, Twohig, Hassan, Stafford, and Lake are all considered to be analogous to the claimed invention because they all relate to the same field of recovering data in distributed storage systems which comprise nodes experiencing time-out. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Love, Twohig, Hassan, and Stafford with the teachings of Lake and realize a storage system processor which performs a partial rebuild to recover data to cloud storage nodes which exceed a first time-out response time but which do not exceed a second time-out response time. Doing so would reduce the amount of bandwidth and time required to recover redundant data to a disconnected node, thereby improving availability and performance of the storage system, as disclosed in Lake ¶0014: “When the disconnected mirror reconnects, redundancy can be restored by performing a full or partial rebuild of the disconnected mirror. A full rebuild requires substantial bandwidth and time, which can affect the availability and performance of the storage system. To address this, some storage systems perform a partial rebuild”.
Response to Arguments
The previous 35 U.S.C. 112(b) rejections of Claims 2-6 are withdrawn in view of the instant claim amendments. Examiner notes that the instant amendments introduce new 35 U.S.C. 112(b) rejections. See 35 U.S.C. 112(b) rejections above for additional details.
Applicant's arguments filed 11/03/2025 have been fully considered but they are not persuasive.
With respect to applicant’s argument located within the final paragraph of the 5th page of remarks (numbered as page 11), continuing to the 6th page of remarks (numbered as page 12), which recites:
“On the other hand, Love merely discloses a plurality of queues with requests with different priorities (see col. 4, line 56 to col. 5, line 10). At best, this would only correspond to the I/O processing threads of claim 1. In addition, Love discloses that "[i]f a request has not been serviced within a pre-set time interval, or has been delayed repeatedly by a large number of higher priority requests, the I/O scheduler may move the delayed request to a part of a queue that enables faster servicing, or the I/O scheduler may move the request to another queue to ensure faster servicing" (see col. 5, lines 52-57). Again, this at best relates to the I/O processing thread of claim 1.
Love does not consider the distribution of a load on a response standby processing thread by avoiding an increase in an access frequency of I/O resources in the standby request queue table, so that the storage node alone performs I/O request processing with an I/O processing thread, while the response standby processing thread can monitor and respond to the cloud storage I/O response delay. Love also does not consider suppressing the degradation of I/O performance of the storage node where I/O resources are moved from an I/O processing thread to a response standby processing thread.”
Examiner has fully considered the aforementioned argument but does not find it persuasive. Applicant appears to argue that Love does not disclose the limitations of Claim 1 as amended at least because Love “does not consider” factors such as “distribution of a load” on a thread or “suppressing the degradation” of I/O performance of a node. However, examiner notes that Claim 1 as amended does not explicitly recite instances of “distribution of a load” or “suppressing the degradation”. Instead, Claim 1 as amended recites “wherein the processor causes the response standby processing thread to move the I/O resource from a top position of the standby request queue table to an end position of the standby resource queue table and execute the standby processing relating to the I/O response which uses the I/O resource retained in a top position of the standby request queue table”, which, as evidenced by Instant Specification ¶¶0039-40 and the 2nd/3rd pages of Remarks, are limitations of the instant invention intended to provide the aforementioned load distribution and degradation suppression. As taught in Love Col. 5, a request queue table includes both a “front” and a “rear” which impacts when a request in the queue will be performed relative to other requests in the queue. As taught in Hassan ¶0055, a request is moved from a default queue into a slow queue. Examiner relies on the combination of Love, Hassan, and Twohig to disclose the concept of moving a request from a top position of a first queue into an end position of a second queue as recited in the claims as amended. See 35 U.S.C. 103 rejections above for additional details. Nothing in the claims as currently presented precludes such an interpretation of Claim 1 as amended.
With respect to applicant’s argument located within the 3rd paragraph of the 6th page of remarks (numbered as page 12), which recites:
“The deficiencies in Love are not overcome by resort to Twohig and Hassan. Twohig is merely relied upon for disclosing threads. Hassan is relied upon for allegedly disclosing the details of the response standby processing thread. However, Applicant submits that Hassan cannot be combined the Love and Twohig to arrive at the claimed invention.”
Examiner has fully considered the aforementioned argument but does not find it persuasive. See below for additional details with respect to Hassan.
With respect to applicant’s argument located within the final paragraph of the 6th page of remarks (numbered as page 12) continuing to the 7th page of remarks (numbered as page 13), which recites:
“Hassan is not related to the claimed invention and instead is directed to processing a request for allocating a cloud-service resource such as computation resources, cloud storage, object storage, and network resources (see para. [0027]). Hassan discloses that a "resource management system 120 may collect request metadata for the resource request 110, including, but not limited to, a unique request identifier, a resource identifier on which the request is acting, a time at which the request was created (e.g., for tracking purposes), a time at which the metadata processing may start reconciling the request (discussed in more detail below), a request type, a request status, the identity of the originator of the resource request 110, etc. In some embodiments, the request type may include, but is not limited to, a create request (for assigning a resource), a terminate request (for de-assigning a resource), a move request associated with moving a resource within and/or between systems (e.g., from one compartment to another compartment), a modify usage request (for increasing or decreasing usage of the associated resource), etc. In some embodiments, the request status, as described in more detail in reference to the forthcoming figures, may include, but is not limited to, an in-progress status (reflecting that the request has not been resolved by the downstream service 112), or that the request is either committed, aborted, or reconciling (to indicate that it is in-process by a reconciler instance as described in more detail in reference to FIG. 5, below)" (see para. [0037]).
Therefore, the disclosure of a default queue 530 or slow queue 532 in Fig. 5 is not relevant to the claimed invention and cannot be combined with Love and Twohig to arrive at the claimed invention. The remaining cited references fail to cure the deficiencies in these primary references.”
Examiner has fully considered the aforementioned argument but does not find it persuasive. Applicant argues that because default queue 530 and slow queue 532 of Hassan Fig. 5 are “not relevant to the claimed invention”, Hassan cannot be combined with Love and Twohig to arrive at the claimed invention. Examiner respectfully disagrees with the assertion that Hassan cannot be combined with Love and Twohig to arrive at the claimed invention. Examiner notes that, similar to the instant invention, each of the Hassan, Love, and Twohig references relates to the same field of scheduling and queuing requests in a storage node which are directed to downstream resources. Therefore, each of the aforementioned references belongs to the same field of endeavor as the instant invention and is accordingly analogous art to the claimed invention. See MPEP 2141.01(a).
In addition, even if applicant were instead to argue that the mapping of default queue 530 and slow queue 532 of Hassan Fig. 5 to the claimed “standby request queue table” and “standby resource queue table”, respectively, were improper, such an argument would not be persuasive in view of the claims as currently presented. Applicant distinguishes the default queue 530 and slow queue 532 of Hassan from the instant invention by citing Hassan ¶¶0027;0037 and concluding that Hassan “is not related to the claimed invention”. Examiner respectfully disagrees with the assertion that Hassan is not related to the claimed invention. As shown in Hassan Fig. 5, a reconciler thread 520 moves requests between a default queue 530 and a slow queue 532. Examiner considers a queue storing requests, such as depicted in Hassan, as reading on the claimed concept of a “request queue table”, under the Broadest Reasonable Interpretation (BRI) of the claimed invention. Accordingly, default queue 530 and slow queue 532 of Hassan Fig. 5 read on the claimed concepts of “a standby request queue table” and “a standby resource queue table”, respectively. Thus, Hassan is related to the claimed invention. Nothing in the claims as currently presented precludes such an interpretation of Claim 1.
With respect to applicant’s argument located within the final paragraph of the 7th page of remarks (numbered as page 13) continuing to the 8th page of remarks (numbered as page 14), which recites:
“The dependent claims further define claim 1 and, for example, the cited references do not disclose releasing the I/O resource relating to the I/O response retained by the response standby processing thread as recited in claim 4, in the combination with the limitations of claim 1.”
Examiner has fully considered the aforementioned argument and does not find it to be persuasive. Applicant argues that the cited references do not disclose releasing the I/O resource relating to the I/O response retained by the response standby processing thread, such as recited in Claim 4. Examiner respectfully disagrees, and notes that Hassan ¶0056 discloses a process whereby reconciler threads “dispose” of requests which are complete, which examiner considers as a process of “releasing the I/O resource relating to the I/O response”, under the BRI of the claimed invention. Nothing in the claims as currently presented precludes such an interpretation of Claim 4.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JULIAN SCOTT MENDEL whose telephone number is (703)756-1608. The examiner can normally be reached M-F 10am - 4pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rocío del Mar Pérez-Vélez can be reached at 571-270-5935. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.S.M./Examiner, Art Unit 2133 /ROCIO DEL MAR PEREZ-VELEZ/Supervisory Patent Examiner, Art Unit 2133