Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendments
This action is in response to Applicant’s amendments filed 25 November 2025. Claim 1 was previously withdrawn. Claims 2-20 were previously pending. Claims 2 and 20 have been amended. No claims have been cancelled. New claim 21 has been added. Accordingly, claims 2-21 are under consideration.
Election/Restriction
Applicant's election with traverse of claim 1 (Group I) in the reply filed on 14 April 2025 is acknowledged. The traversal is on the ground(s) that the search and examination can be made without undue burden on the Office. This is not found persuasive because search and examination would impose a serious burden, as noted by the Examiner in the requirement for restriction/election dated 18 February 2025, and further for at least the same reasons set forth in that requirement.
The requirement is still deemed proper and is therefore made FINAL.
Thus, claim 1 (Group I) remains withdrawn from further consideration pursuant to 37 CFR 1.142(b), as being drawn to a nonelected subcombination, there being no allowable generic or linking claim. Applicant timely traversed the restriction (election) requirement in the reply filed on 14 April 2025.
Therefore, claims 2-20 (Group II) have been elected and are under examination as set forth below.
Response to Arguments
Claim Objections –
Applicant has amended claim 20 to address the previously noted informality. Accordingly, the objection is withdrawn.
35 USC 103 -
Applicant’s arguments, see remarks pages 11-12, filed 25 November 2025, with respect to the rejection of claims 2, 4, 16, and 18 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made over Marks in view of Lee et al (US 2019/0026220 A1). Examiner notes Applicant’s own disclosure in claim 3 and paragraph [0055] of the specification of the API comprising the VPD page.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “logic to manage”, “logic to expose”, “logic to receive”, “logic to send” as in claim 2, “logic to identify”, “logic to store” as in claim 4, “logic to set” as in claim 5, “logic to divert” as in claim 6, “logic to read”, “logic to cache” as in claim 8, “logic to generate”, “logic to send” as in claim 9, “logic to unset” as in claim 11, “logic to send” as in claim 12, “logic to identify”, “logic to send” as in claim 13, “logic to set”, “logic to send”, “logic to determine”, “logic to send” as in claim 15, and “logic to expose”, “logic to receive”, “logic to send” as in claim 17.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. A review of the specification shows the following disclosure appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) limitations: Fig. 2, Fig. 4, and Fig. 6 and corresponding paragraphs [0019]-[0023], [0042], and [0073]-[0083], variously disclosed as a RAID/storage controller, circuitry as hardware or firmware, or embodied by a chip, SoC, ASIC, or programmable logic device.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 2-4, 16, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Marks et al (US 2020/0133896 A1, hereinafter Marks) in view of Lee et al (US 2019/0026220 A1, hereinafter Lee).
Regarding claim 2, Marks discloses a device comprising: logic to manage a virtual disk comprising one or more spans, each span comprising one or more arms, each arm corresponding to a different physical disk (See Marks, Fig. 1 and [0015], storage system includes a virtual hard drive and [0016], a storage controller 124 may map the VHD to a RAID array, the RAID array, or span of VHD across the array, comprising multiple non-volatile memory express devices or arms of physical disks);
logic to expose that the virtual disk supports read ahead operations (See Marks, [0028] disclosing the pre-fetch command may be represented by the ‘read prepare’ bit or other flag that is added to context attributes of a small computer system interface (SCSI) command—PREFETCH...For the SCSI command—PREFETCH and the Dataset Management command, the applicability of the pre-fetch command may be applied to each range of logical block addresses, or in other words, supporting read ahead);
logic to receive a prefetch command (See Marks, [0015], host 100 may generate I/O transactions 110 targeting a coupled storage subsystem 120 that includes a virtual hard drive (VHD) and [0017] the host 100 is configured to write an NVMe command. In this embodiment, the NVMe command is directed to the storage controller 124 and the RAID array 140 and [0020] & [0021], disclosing the NVMe command 200 includes a request to write the new data and the NVMe command 200 further includes an advisory command and a non-completion command. In this embodiment, the advisory command may be represented by a pre-fetch command), the prefetch command comprising a requested SCSI input-output operation (IO) (See Marks, [0015], host 100 may generate I/O transactions 110 targeting a coupled storage subsystem 120 that includes a virtual hard drive (VHD) and [0028] disclosing the pre-fetch command may be represented by the ‘read prepare’ bit or other flag that is added to context attributes of a small computer system interface (SCSI) command);
logic to send a completion message in response to the prefetch command (See Marks, [0056] After the command has been issued and completed by the storage controller 124, the storage controller 124, at block 512, writes completion queue entries and generates corresponding interrupts such as the MSI-X interrupt 314. At block 514, the host 100 consumes and processes the completion queue entries in the completion queue), the completion message indicating a status of the SCSI IO (See Marks, [0056] After the command has been issued and completed by the storage controller 124, the storage controller 124, at block 512, writes completion queue entries and generates corresponding interrupts such as the MSI-X interrupt 314. At block 514, the host 100 consumes and processes the completion queue entries in the completion queue).
Marks does not disclose logic to expose an application programming interface (API) indicating that the virtual disk supports read ahead operations.
However, Lee discloses logic to expose an application programming interface (API) indicating that the virtual disk supports read ahead operations (See Lee, [0038] and [0039], disclosing the use of a VPD page to indicate a maximum prefetch length, or in other words, an API as evidenced by Applicant’s claim 3 indicating a VPD comprising the API and Applicant’s specification at [0055], producing VPD with a page including a MAX_PREFETCH_LENGTH field).
Marks and Lee are analogous art directed to improved data storage management techniques. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to combine the virtual disk prefetch storage system of Marks with the API of Lee, as storage performance can be improved by informing the OS of, and enabling it to make use of, the storage device's various specifications, including its prefetching capabilities.
Regarding claim 3, Marks in view of Lee disclosed the device of claim 2 as described hereinabove. Marks in view of Lee further disclose wherein: the API comprises at least a portion of a SCSI (See Marks, [0028] disclosing the pre-fetch command may be represented by the ‘read prepare’ bit or other flag that is added to context attributes of a small computer system interface (SCSI) command—PREFETCH...For the SCSI command—PREFETCH and the Dataset Management command) vital product data (VPD) page (See Lee, [0038] and [0039], disclosing the use of a VPD page to indicate a maximum prefetch length).
Regarding claim 4, Marks in view of Lee disclosed the device of claim 2 as described hereinabove. Marks further discloses: logic to identify a condition of the virtual disk that prevents successful read ahead caching (See Marks, [0046] At step 316, the host 100 consumes and processes the new completion queue entries in the completion queue. The consummation and processing includes taking any actions based on error conditions that may be indicated); and logic to store an indication that the device cannot successfully perform read ahead caching on the virtual disk (See Marks, [0046] At step 316, the host 100 consumes and processes the new completion queue entries in the completion queue. The consummation and processing includes taking any actions based on error conditions that may be indicated).
Regarding claim 16, Marks in view of Lee disclosed the device of claim 2 as described hereinabove. Marks further discloses wherein the device comprises a redundant array of independent disks (RAID) controller (See Marks, [0016] At the storage subsystem 120, a storage controller 124 may map the VHD 122 to a RAID array 140. In an embodiment, the storage controller 124 includes a RAID controller 126 that may be configured to control multiple non-volatile memory express (NVMe) devices 142-146 that make up the RAID array 140).
Regarding claim 18, Marks in view of Lee disclosed the device of claim 17 as described hereinabove. Marks further discloses wherein: the second device comprises a host computer having an operating system (See Marks, [0015] The host 100 may generate I/O transactions 110 targeting a coupled storage subsystem 120 that includes a virtual hard drive (VHD) 122).
Regarding claim 19, Marks in view of Lee disclosed the device of claim 2 as described hereinabove. Lee further discloses wherein the API comprises a maximum prefetch length field, the maximum prefetch length field indicating that the virtual disk supports read ahead caching (See Lee, [0038], disclosing a VPD page storing vendor specific information about a logical unit and a target device and a maximum prefetch length field indicating maximum prefetch length of logical blocks for a single prefetch command, which necessarily indicates supporting read ahead caching/prefetching).
Claims 8, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Marks et al (US 2020/0133896 A1, hereinafter Marks) in view of Lee et al (US 2019/0026220 A1, hereinafter Lee), further in view of Feld et al (US 2023/0325090 A1, hereinafter Feld).
Regarding claim 8, Marks in view of Lee disclosed the device of claim 2 as described hereinabove. Neither Marks nor Lee discloses logic to read from the virtual disk the data requested by the SCSI IO; and logic to cache the data requested by the SCSI IO in a read ahead cache.
However, Feld discloses logic to read from the virtual disk the data requested by the SCSI IO; and logic to cache the data requested by the SCSI IO in a read ahead cache (See Feld, [0033], disclosing The host computers 102.1, . . . , 102.n can be further configured to provide, over the network(s) 106, storage input/output (IO) requests (e.g., small computer system interface (SCSI) commands, network filesystem (NFS) commands) to the storage system 104. Such storage IO requests (e.g., read requests, write requests) can direct the storage system 104 to read and/or write data blocks, data pages, data files, and/or any other suitable data elements from/to storage objects such as volumes (VOLs), logical units (LUNs), filesystems, and/or any other suitable storage objects, which can be maintained on a storage device array 114 and [0048] In this example, each block of prefetched data (e.g., at 4 kb granularity) stored or cached in the IO transaction cache layer 208 within the prefetch increment 306 has a plurality of associated flags, namely, a PREFETCHED flag and a REPROMOTED flag. Further, the storage system 104 employs a least-recently-used (LRU) cache management technique to determine whether data blocks (e.g., prefetched or non-prefetched data blocks) can be retained in or evicted from the IO transaction cache layer 208. For each prefetched data block stored or cached in the IO transaction cache layer 208, the storage processing circuitry 110 sets its associated PREFETCHED flag).
Marks, Lee and Feld are analogous art directed to improved storage management techniques. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to combine the virtual disk prefetch storage system of Marks and Lee with the prefetch cache of Feld as storage performance can be improved by enabling the host to quickly access requested data which has already been prefetched and cached (See Feld [0033] disclosing each block of prefetched data (e.g., at 4 kb granularity) stored or cached in the IO transaction cache layer).
Regarding claim 17, Marks in view of Lee disclosed the device of claim 2 as described hereinabove. Lee further discloses wherein: the logic to expose the API comprises logic to expose the API to a second device in communication with the device (See Lee, [0038] and [0039], disclosing the OS executed by a processor (host/second device) receiving a VPD page to indicate a maximum prefetch length, or in other words, an API as evidenced by Applicant’s claim 3 indicating a VPD comprising the API and Applicant’s specification at [0055], producing VPD with a page including a MAX_PREFETCH_LENGTH field).
Neither Marks nor Lee discloses the logic to receive a prefetch command comprises logic to receive the prefetch command from the second device; and the logic to send a completion message indicating that the requested data has been cached comprises logic to send the completion message to the second device.
However, Feld discloses the logic to receive a prefetch command comprises logic to receive the prefetch command from the second device (See Feld, [0033], disclosing The host computers 102.1, . . . , 102.n can be further configured to provide, over the network(s) 106, storage input/output (IO) requests (e.g., small computer system interface (SCSI) commands, network filesystem (NFS) commands) to the storage system 104. Such storage IO requests (e.g., read requests, write requests) can direct the storage system 104 to read and/or write data blocks); and the logic to send a completion message indicating that the requested data has been cached comprises logic to send the completion message to the second device (See Feld, [0048] In this example, each block of prefetched data (e.g., at 4 kb granularity) stored or cached in the IO transaction cache layer 208 within the prefetch increment 306 has a plurality of associated flags, namely, a PREFETCHED flag and a REPROMOTED flag. Further, the storage system 104 employs a least-recently-used (LRU) cache management technique to determine whether data blocks (e.g., prefetched or non-prefetched data blocks) can be retained in or evicted from the IO transaction cache layer 208. For each prefetched data block stored or cached in the IO transaction cache layer 208, the storage processing circuitry 110 sets its associated PREFETCHED flag).
Marks, Lee and Feld are analogous art directed to improved storage management techniques. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to combine the virtual disk prefetch storage system of Marks and Lee with the prefetch cache of Feld as storage performance can be improved by enabling the host to quickly access requested data which has already been prefetched and cached (See Feld [0033] disclosing each block of prefetched data (e.g., at 4 kb granularity) stored or cached in the IO transaction cache layer).
Regarding claim 20, Marks discloses a method, comprising: managing a virtual disk comprising one or more spans, each span comprising one or more arms, each arm corresponding to a different physical disk (See Marks, Fig. 1 and [0015], storage system includes a virtual hard drive and [0016], a storage controller 124 may map the VHD to a RAID array, the RAID array, or span of VHD across the array, comprising multiple non-volatile memory express devices or arms of physical disks);
exposing a small computer system interface indicating that the virtual disk supports read ahead operations (See Marks, [0028] disclosing the pre-fetch command may be represented by the ‘read prepare’ bit or other flag that is added to context attributes of a small computer system interface (SCSI) command—PREFETCH...For the SCSI command—PREFETCH and the Dataset Management command, the applicability of the pre-fetch command may be applied to each range of logical block addresses, or in other words, supporting read ahead);
receiving a prefetch command, the prefetch command comprising a requested SCSI input-output operation (IO) (See Marks, [0015], host 100 may generate I/O transactions 110 targeting a coupled storage subsystem 120 that includes a virtual hard drive (VHD) and [0017] the host 100 is configured to write an NVMe command. In this embodiment, the NVMe command is directed to the storage controller 124 and [0028] disclosing the pre-fetch command may be represented by the ‘read prepare’ bit or other flag that is added to context attributes of a small computer system interface (SCSI) command); and
sending a completion message indicating that the requested data has been cached (See Marks, [0056] After the command has been issued and completed by the storage controller 124, the storage controller 124, at block 512, writes completion queue entries and generates corresponding interrupts such as the MSI-X interrupt 314. At block 514, the host 100 consumes and processes the completion queue entries in the completion queue).
Marks does not disclose the use of an application programming interface (API).
However, Lee discloses the use of an application programming interface (API) (See Lee, [0038] and [0039], disclosing the use of a VPD page to indicate a maximum prefetch length, or in other words, an API as evidenced by Applicant’s claim 3 indicating a VPD comprising the API and Applicant’s specification at [0055], producing VPD with a page including a MAX_PREFETCH_LENGTH field).
Marks and Lee are analogous art directed to improved data storage management techniques. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to combine the virtual disk prefetch storage system of Marks with the API of Lee, as storage performance can be improved by informing the OS of, and enabling it to make use of, the storage device's various specifications, including its prefetching capabilities.
Neither Marks nor Lee discloses reading from the virtual disk the data requested by the SCSI IO; and caching the data requested by the SCSI IO in a read ahead cache.
However, Feld discloses reading from the virtual disk the data requested by the SCSI IO; and caching the data requested by the SCSI IO in a read ahead cache (See Feld, [0033], disclosing The host computers 102.1, . . . , 102.n can be further configured to provide, over the network(s) 106, storage input/output (IO) requests (e.g., small computer system interface (SCSI) commands, network filesystem (NFS) commands) to the storage system 104. Such storage IO requests (e.g., read requests, write requests) can direct the storage system 104 to read and/or write data blocks, data pages, data files, and/or any other suitable data elements from/to storage objects such as volumes (VOLs), logical units (LUNs), filesystems, and/or any other suitable storage objects, which can be maintained on a storage device array 114 and [0048] In this example, each block of prefetched data (e.g., at 4 kb granularity) stored or cached in the IO transaction cache layer 208 within the prefetch increment 306 has a plurality of associated flags, namely, a PREFETCHED flag and a REPROMOTED flag. Further, the storage system 104 employs a least-recently-used (LRU) cache management technique to determine whether data blocks (e.g., prefetched or non-prefetched data blocks) can be retained in or evicted from the IO transaction cache layer 208. For each prefetched data block stored or cached in the IO transaction cache layer 208, the storage processing circuitry 110 sets its associated PREFETCHED flag).
Marks, Lee and Feld are analogous art directed to improved storage management techniques. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to combine the virtual disk prefetch storage system of Marks and Lee with the prefetch cache of Feld as storage performance can be improved by enabling the host to quickly access requested data which has already been prefetched and cached (See Feld [0033] disclosing each block of prefetched data (e.g., at 4 kb granularity) stored or cached in the IO transaction cache layer).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Marks et al (US 2020/0133896 A1, hereinafter Marks) in view of Lee et al (US 2019/0026220 A1, hereinafter Lee), further in view of Feld et al (US 2023/0325090 A1, hereinafter Feld), and further in view of Chawla et al (US 2014/0281045 A1, hereinafter Chawla).
Regarding claim 9, Marks in view of Lee, further in view of Feld, disclosed the device of claim 8 as described hereinabove. None of Marks, Lee, or Feld discloses logic to generate an accelerated IO (ACIO) from the SCSI IO; and logic to send the ACIO for execution on the virtual disk.
However, Chawla discloses logic to generate an accelerated IO (ACIO) from the SCSI IO (See Chawla, [0036], disclosing converting a SCSI protocol value ranging between zero (0) and one (1) into a DCB protocol priority value ranging between zero (0) and seven (7)); and logic to send the ACIO for execution on the virtual disk (See Chawla, [0033], disclosing sending a SCSI command to an identified iSCSI target, the iSCSI target utilizing logical unit numbers, or virtual storage/disk, the SCSI commands representing instructions to send, receive and store data on storage devices at the iSCSI server).
Marks, Lee, Feld, and Chawla are analogous art directed to improved storage management techniques. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to combine the virtual disk prefetch storage system of Marks, Lee, and Feld with the SCSI command conversion of Chawla, as storage system performance and flexibility can be improved by providing protocol conversion, resulting in improvements in data transmission that eliminate loss due to queue overflow and allowing specific bandwidth to be allocated on various links within the network (See Chawla, [0031]).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Marks et al (US 2020/0133896 A1, hereinafter Marks) in view of Lee et al (US 2019/0026220 A1, hereinafter Lee), further in view of Feld et al (US 2023/0325090 A1, hereinafter Feld), further in view of Chawla et al (US 2014/0281045 A1, hereinafter Chawla), and even further in view of Gellerich et al (US 2018/0081814 A1, hereinafter Gellerich).
Regarding claim 10, Marks in view of Lee, further in view of Feld, further in view of Chawla, disclosed the device of claim 9 as described hereinabove. None of Marks, Lee, Feld, or Chawla discloses the ACIO comprises a read ahead operation code (OPCODE).
However, Gellerich discloses the ACIO comprises a read ahead operation code (OPCODE) (See Gellerich, [0047], disclosing a prefetch instruction definitions including a code that defines the instruction for extracting or prefetching).
Marks, Lee, Feld, Chawla, and Gellerich are analogous art directed to improved data storage management techniques. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to combine the SCSI-capable virtual disk prefetch storage system of Marks, Lee, Feld, and Chawla with the prefetch OPCODE of Gellerich, as system performance can be improved by having the ability to quickly identify and/or decode incoming instructions (See Gellerich, [0047], disclosing the prefetch cache references start with a code that defines the instruction for extracting or prefetching at the very first encountered bit, bit 0).
Allowable Subject Matter
Claims 5-7, 11-15, and 21 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: The closest prior art of record (1) Marks discloses a RAID-based storage system with SCSI prefetch functionality, (2) Sen discloses the use of an API to disclose device capabilities, (3) Feld discloses the prefetching of data requested by SCSI IO commands and caching the prefetched data for host access, (4) Dambal discloses a host system invoking APIs to obtain vital product data (VPD) from the SCSI driver for the applicable file system, (5) Lee discloses the use of a VPD page storing vendor-specific information about a logical unit and a target device and a maximum prefetch length field, (6) Vishnuswaroop Ramesh et al (US 2024/0427633 A1) discloses one or more circuits to perform a prefetch application programming interface on one or more storages, (7) Chawla discloses converting a SCSI protocol value into a DCB protocol priority value, and (8) Dell Shared PowerEdge RAID Controller 8 Cards For Dell PowerEdge VRTX Systems, User's Guide, 2018 discloses a virtual disk with prefetch capability.
However, the prior art alone or in combination fails to teach or fairly suggest the combination
wherein the logic to store an indication that the device cannot successfully perform read ahead caching on the virtual disk comprises: logic to set a divert prefetch control flag in a command data unit virtual disk property table (VDPT) entry corresponding to the virtual disk, as in dependent claim 5.
Nor does the prior art, alone or in combination, teach or fairly suggest the combination of
logic to unset a firmware flag in the ACIO to indicated that the ACIO is hardware generated, as in dependent claim 11.
Nor does the prior art, alone or in combination, teach or fairly suggest the combination of the device further comprising: a virtual device property table (VDPT) comprising a divert prefetch control flag that indicates whether the virtual disk can successfully perform read ahead caching, as in dependent claim 21.
EXAMINER’S NOTE
Examiner has cited particular paragraphs and figures in the references applied to the claims above for the convenience of Applicants. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. Applicants are respectfully requested, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDMUND H KWONG whose telephone number is (571)272-8691. The examiner can normally be reached Monday-Friday 10-6 PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Arpan P. Savla can be reached at 571-272-1077. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/E.H.K/Examiner, Art Unit 2137 /RYAN BERTRAM/Primary Examiner, Art Unit 2137