DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Information Disclosure Statement
The Information Disclosure Statement filed on 27 Dec 2024 has been considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 6-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter.
Claims 6-13 recite a “virtual storage manager”; however, it appears that the virtual storage manager would reasonably be interpreted by one of ordinary skill in the art as software, per se [pg. 33]. No hardware elements are positively recited as being part of the virtual storage manager. A number of modules are claimed; however, they do not appear to be limited to a hardware implementation. As such, the claims are drawn to a form of energy. Energy is not one of the four categories of invention, and therefore these claims are not statutory. Energy is not a series of steps or acts, and thus is not a process. Energy is not a physical article or object, and as such is not a machine or manufacture. Energy is not a combination of substances, and is therefore not a composition of matter. The examiner suggests amending the claims to include at least one hardware element.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “tracking module”, “aggregation module”, “allocation module”, “advertising module”, “over-provisioning module”, and “mapping module” in claims 6-13.
Page 33 of the specification identifies the structure for the modules as:
“The blocks or steps of a method or algorithm and functions described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a tangible, non-transitory computer-readable medium. A software module may reside in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, hard disk, a removable disk, a CD ROM, or any other form of storage medium known in the art.”
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 4, and 5 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Colgrove (U.S. Patent No. 11,163,448).
Claim 1:
Colgrove discloses a Solid State Disk (SSD) [column 4, lines 11-17 – “A ‘storage device’ as the term is used in this specification refers to any device configured to record data persistently. The term ‘persistently’ as used here refers to a device's ability to maintain recorded data after loss of a power source. Examples of storage devices may include mechanical, spinning hard disk drives, Solid-state drives (e.g., “Flash drives”), and the like.”], comprising:
a flash storage medium [column 4, lines 11-17; column 6, lines 42-56 – “A ‘storage device’ as the term is used in this specification refers to any device configured to record data persistently. The term ‘persistently’ as used here refers to a device's ability to maintain recorded data after loss of a power source. Examples of storage devices may include mechanical, spinning hard disk drives, Solid-state drives (e.g., “Flash drives”), and the like.” … “The storage device (308) of FIG. 3 may be characterized by a first storage capacity. The first storage capacity may be expressed in terms of MB, GB, or any other appropriate unit of measure. Readers will appreciate that the ‘storage capacity’ of a storage device (308), as the term is used here, refers to the total storage capacity of the storage device (308), not the amount of free space within a storage device (308) at a given point in time. For example, if the storage device (308) includes a total capacity of 80 GB and the same storage device currently has 40 GB of data stored on the storage device, the storage capacity of the storage device (308) is 80 GB. The first storage capacity of the storage device (308) may be specified by the manufacturer, set during a previous iteration of the method depicted in FIG. 3, or established in other ways.”]; and
a controller to access a data on the flash storage medium [column 8, lines 28-35 – “The storage device (308) may compress (402) the data (306) using control logic within the storage device (308) such as an application-specific integrated circuit (‘ASIC’), microcontroller, microprocessor, or other form of computer hardware. Such control logic may be configured to compress (402) the data (306) using one or more compression algorithms such as, for example, LZ77, LZ78, and many others.”],
wherein the SSD is configured to advertise a physical capacity of the flash storage medium to a Virtual Storage Manager (VSM) [column 9, line 20 – column 10, line 6 – “Consider the example described above where the storage device (308) is initially characterized by a first storage capacity of 80 GB, the computing device (304) issues a request that the storage device (308) store a database that includes 25 GB of data, the storage device (308) is able to apply compression algorithms and deduplication algorithms that reduce (310) the database to a size of 10 GB, and the storage device (308) designates the entire amount of the storage capacity saved by reducing the data as additional capacity. In such an example, the storage device (308) would export (314) an updated storage capacity (316) of 95 GB to the computing device (304) in the absence of the storage device (308) holding a predetermined amount of storage capacity in reserve. In an embodiment where the storage device (308) does hold a predetermined amount of storage capacity in reserve, however, the storage device (308) would export (314) an updated storage capacity (316) that is reduced by predetermined amount of storage capacity in reserve. If the predetermined amount of storage capacity to be held in reserve was 10 GB, for example, the storage device (308) would export (314) an updated storage capacity (316) of 85 GB to the computing device (304). In the example method depicted in FIG. 5, determining (312) an updated storage capacity (316) for the storage device (308) can alternatively include determining (504) the updated storage capacity (316) for the storage device (308) in dependence upon an anticipated reduction level. The anticipated reduction level can represent the extent to which future commitments of data to the storage device are expected to be reduced. 
The anticipated reduction level may be determined, for example, based on the average rate at which all data currently stored on the storage device (308) has been reduced (310).”].
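For illustration only (this sketch does not form part of the rejection, and the function name and parameters are hypothetical), the arithmetic in the passage quoted above — an 80 GB device storing a 25 GB database reduced to 10 GB, optionally holding capacity in reserve — can be expressed as:

```python
def updated_capacity(total_gb, committed_gb, reduced_gb, reserve_gb=0):
    """Compute the capacity a device could export after data reduction,
    per the worked example quoted from Colgrove: capacity saved by
    compression/deduplication is added back as additional exportable
    capacity, minus any predetermined amount held in reserve."""
    saved_gb = committed_gb - reduced_gb      # storage reclaimed by reduction
    return total_gb + saved_gb - reserve_gb

# Colgrove's example: 80 GB device, 25 GB database reduced to 10 GB
assert updated_capacity(80, 25, 10) == 95                  # no reserve: export 95 GB
assert updated_capacity(80, 25, 10, reserve_gb=10) == 85   # 10 GB reserve: export 85 GB
```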
Claim 4 (as applied to claim 1 above):
Colgrove discloses,
wherein the SSD is further configured to support thin provisioning [column 8, lines 52-58 – “Although the examples described above are related to compressing (402) the data (306) and deduplicating (404) the data (306), readers will appreciate that other techniques (e.g., thin provisioning) may be utilized to reduce (310) the data (306). Furthermore, embodiments of the present disclosure are not limited to using a single technique, as multiple techniques may be utilized in combination.”].
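As an editorial aid only (the class and attribute names are hypothetical, not drawn from the reference), the thin-provisioning concept mentioned in the quoted passage — advertising a logical size larger than the physical backing store and consuming physical space only on write — can be sketched as:

```python
class ThinVolume:
    """Sketch of thin provisioning: the volume advertises a logical
    capacity larger than the physical space backing it; physical
    space is consumed only as data is actually written."""
    def __init__(self, logical_gb, physical_gb):
        self.logical_gb = logical_gb      # capacity advertised to the host
        self.physical_gb = physical_gb    # real backing store
        self.used_gb = 0

    def write(self, gb):
        if self.used_gb + gb > self.physical_gb:
            raise RuntimeError("backing store exhausted")  # over-commit risk
        self.used_gb += gb

vol = ThinVolume(logical_gb=100, physical_gb=20)  # 5x over-committed
vol.write(15)
assert vol.used_gb == 15 and vol.logical_gb == 100
```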
Claim 5 (as applied to claim 1 above):
Colgrove discloses the SSD, further comprising
a firmware to advertise the physical capacity of the flash storage medium to the VSM [column 11, lines 24-30 – “Persons skilled in the art will recognize also that, although some of the example embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware, as hardware, or as an aggregation of hardware and software are well within the scope of embodiments of the present disclosure.”].
Claims 6, 7, 9, 10, 14, 15, and 17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Pellegrino et al. (Pub. No. US 2003/0145045).
Claim 6:
Pellegrino et al. disclose a Virtual Storage Manager (VSM), comprising:
a tracking module to track a first physical capacity of a first storage device and a second physical capacity of a second storage device [fig. 3; par. 0033 – “FIG. 3 provides one exemplary arrangement and possible field sizes and contents for the volume pages 310 and volume headers 320. The storage aggregator considers the pool of available pages to be smaller by the number of pages required for the new volume 302. Actual storage device volumes 350 are created by sending a physical volume create command to the storage aggregator 130. The storage aggregator 130 also tracks storage device volume usage as shown at 304, 306 with example storage device volume entries and volume header shown at 330 and 340, respectively.”];
an aggregation module to determine a usable capacity of a virtual storage device based at least in part on the first physical capacity of the first storage device and the second physical capacity of the second storage device [fig. 3; par. 0033 – “An aggregate volume 302 may be created of equal or lesser size than the set of free pages of the storage device volumes 350.”]; and
an allocation module to allocate a first portion of the first storage device and a second portion of the second storage device to an application executing on a processor based at least in part on the usable capacity of the virtual storage device [fig. 3; par. 0032-0033 – “FIG. 3 (with reference to FIG. 1) provides a logical view 300 of the storage aggregator 130 and of data flow and storage virtualization within the data storage network 100. As shown, the aggregate volumes 302 within or created by the storage aggregator 130 are the logical LUNs presented to the outside world, i.e., consumers 112 of host server system 110.” … “The storage aggregator 130 creates an aggregate volume structure when a new volume 302 is created, but the pages of the storage device volumes 350 are not allocated directly to aggregate volume pages. FIG. 3 provides one exemplary arrangement and possible field sizes and contents for the volume pages 310 and volume headers 320. The storage aggregator considers the pool of available pages to be smaller by the number of pages required for the new volume 302. Actual storage device volumes 350 are created by sending a physical volume create command to the storage aggregator 130. The storage aggregator 130 also tracks storage device volume usage as shown at 304, 306 with example storage device volume entries and volume header shown at 330 and 340, respectively.”].
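For illustration only (not part of the record; all names are hypothetical and the proportional split is merely one possible allocation policy), the tracking/aggregation/allocation scheme recited in claim 6 can be sketched as:

```python
class VirtualStorageManager:
    """Sketch of a VSM: track per-device physical capacities, aggregate
    them into a usable capacity, and allocate a portion of each device
    toward an application's storage request."""
    def __init__(self, capacities_gb):
        self.capacities = dict(capacities_gb)   # tracking: device -> physical capacity

    def usable_capacity(self):
        return sum(self.capacities.values())    # aggregation across devices

    def allocate(self, request_gb):
        usable = self.usable_capacity()
        if request_gb > usable:
            raise ValueError("request exceeds usable capacity")
        # allocation: split the request across devices (proportional split shown)
        return {dev: request_gb * cap / usable
                for dev, cap in self.capacities.items()}

vsm = VirtualStorageManager({"dev0": 100, "dev1": 300})
assert vsm.usable_capacity() == 400
assert vsm.allocate(200) == {"dev0": 50.0, "dev1": 150.0}
```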
Claim 7 (as applied to claim 6 above):
Pellegrino et al. disclose the VSM, further comprising
an advertising module to advertise the usable capacity to the application executing on the processor [fig. 3; par. 0032-0033 – The aggregate volumes are the logical LUNs presented to consumers; see the passage quoted in the rejection of claim 6 above.].
Claim 9 (as applied to claim 6 above):
Pellegrino et al. disclose, wherein:
the first portion of the first storage device includes a first size [fig. 3; par. 0032-0033 – Aggregate volumes are exposed to consumers; see the passage quoted in the rejection of claim 6 above.];
the second portion of the second storage device includes a second size [fig. 3; par. 0032-0033 – id.]; and
a combination of the first size and the second size is at least as large as a storage size requested by the application executing on the processor [fig. 3; par. 0032-0033 – id.].
Claim 10 (as applied to claim 6 above):
Pellegrino et al. disclose, wherein the allocation module is configured to:
determine a relative percentage of the usable capacity based at least in part on a storage size requested by the application executing on the processor [fig. 3; par. 0032-0033 – Available space is determined and allocated to a consumer in response to a request; see the passage quoted in the rejection of claim 6 above.];
allocate the first portion of the first storage device to the application executing on the processor based at least in part on the relative percentage of the first physical capacity of the first storage device [fig. 3; par. 0032-0033 – id.]; and
allocate the second portion of the second storage device to the application executing on the processor based at least in part on the relative percentage of the second physical capacity of the second storage device [fig. 3; par. 0032-0033 – id.].
Claim 14:
Pellegrino et al. disclose a method, comprising:
receiving a first physical capacity of a first storage device from the first storage device [par. 0033 – Capacities of storage devices are determined. (“The storage aggregator 130 initially and periodically creates the aggregate volumes 302. A storage aggregator 130 advertises all the available free pages of the storage device volumes 350 for purpose of volume 302 creation and in RAID embodiments, includes the capacity at each RAID level (and a creation command will specify the RAID level desired). An aggregate volume 302 may be created of equal or lesser size than the set of free pages of the storage device volumes 350.”)];
determining a first logical capacity of the first storage device based at least in part on the first physical capacity of the first storage device [par. 0033 – Capacities of storage devices are determined; see the passage quoted for the preceding limitation.];
receiving a second physical capacity of a second storage device from the second storage device [par. 0033 – id.];
determining a second logical capacity of the second storage device based at least in part on the second physical capacity of the second storage device [par. 0033 – id.];
aggregating the first logical capacity of the first storage device and the second logical capacity of the second storage device to produce a usable capacity [fig. 3; par. 0033 – “An aggregate volume 302 may be created of equal or lesser size than the set of free pages of the storage device volumes 350.”]; and
advertising the usable capacity to an application executing on a processor [fig. 3; par. 0032-0033 – The aggregate volumes are the logical LUNs presented to consumers; see the passage quoted in the rejection of claim 6 above.].
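For illustration only (the function name, parameters, and the logical-capacity derivation rule are hypothetical and not drawn from the reference), the flow of the claim 14 method — receive each device's physical capacity, derive a logical capacity from it, aggregate, and advertise — can be sketched as:

```python
def advertise_usable_capacity(physical_capacities_gb, logical_factor=1.0):
    """Sketch of the claim-14 steps: derive a logical capacity from each
    device's physical capacity (the scaling rule shown is hypothetical),
    aggregate the logical capacities, and return the usable capacity
    that would be advertised to an application."""
    logical = [cap * logical_factor for cap in physical_capacities_gb]
    return sum(logical)   # aggregated usable capacity to advertise

assert advertise_usable_capacity([100, 300]) == 400.0
assert advertise_usable_capacity([100, 300], logical_factor=1.5) == 600.0
```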
Claim 15 (as applied to claim 14 above):
Pellegrino et al. disclose,
wherein advertising the usable capacity to the application executing on the processor includes advertising the usable capacity of a virtual storage device to the application executing on the processor [fig. 3; par. 0032-0033 – see the passage quoted in the rejection of claim 6 above.].
Claim 17 (as applied to claim 14 above):
Pellegrino et al. disclose the method, further comprising:
receiving a request to allocate a storage size from the application executing on the processor, wherein the storage size is less than the usable capacity [fig. 3; par. 0032-0033 – Aggregate volumes are exposed to consumers. (“FIG. 3 (with reference to FIG. 1) provides a logical view 300 of the storage aggregator 130 and of data flow and storage virtualization within the data storage network 100. As shown, the aggregate volumes 302 within or created by the storage aggregator 130 are the logical LUNs presented to the outside world, i.e., consumers 112 of host server system 110.” … “The storage aggregator 130 creates an aggregate volume structure when a new volume 302 is created, but the pages of the storage device volumes 350 are not allocated directly to aggregate volume pages. FIG. 3 provides one exemplary arrangement and possible field sizes and contents for the volume pages 310 and volume headers 320. The storage aggregator considers the pool of available pages to be smaller by the number of pages required for the new volume 302. Actual storage device volumes 350 are created by sending a physical volume create command to the storage aggregator 130. The storage aggregator 130 also tracks storage device volume usage as shown at 304, 306 with example storage device volume entries and volume header shown at 330 and 340, respectively.”)];
reserving a first portion of the first storage device to the application executing on the processor, the first portion of the first storage device including a first size [fig. 3; par. 0032-0033 – Aggregate volumes are exposed to consumers. Storage is reserved and allocated. (“FIG. 3 (with reference to FIG. 1) provides a logical view 300 of the storage aggregator 130 and of data flow and storage virtualization within the data storage network 100. As shown, the aggregate volumes 302 within or created by the storage aggregator 130 are the logical LUNs presented to the outside world, i.e., consumers 112 of host server system 110.” … “The storage aggregator 130 creates an aggregate volume structure when a new volume 302 is created, but the pages of the storage device volumes 350 are not allocated directly to aggregate volume pages. FIG. 3 provides one exemplary arrangement and possible field sizes and contents for the volume pages 310 and volume headers 320. The storage aggregator considers the pool of available pages to be smaller by the number of pages required for the new volume 302. Actual storage device volumes 350 are created by sending a physical volume create command to the storage aggregator 130. The storage aggregator 130 also tracks storage device volume usage as shown at 304, 306 with example storage device volume entries and volume header shown at 330 and 340, respectively.”)]; and
reserving a second portion of the second storage device to the application executing on the processor, the second portion of the second storage device including a second size [fig. 3; par. 0032-0033 – Aggregate volumes are exposed to consumers. Storage is reserved and allocated. (“FIG. 3 (with reference to FIG. 1) provides a logical view 300 of the storage aggregator 130 and of data flow and storage virtualization within the data storage network 100. As shown, the aggregate volumes 302 within or created by the storage aggregator 130 are the logical LUNs presented to the outside world, i.e., consumers 112 of host server system 110.” … “The storage aggregator 130 creates an aggregate volume structure when a new volume 302 is created, but the pages of the storage device volumes 350 are not allocated directly to aggregate volume pages. FIG. 3 provides one exemplary arrangement and possible field sizes and contents for the volume pages 310 and volume headers 320. The storage aggregator considers the pool of available pages to be smaller by the number of pages required for the new volume 302. Actual storage device volumes 350 are created by sending a physical volume create command to the storage aggregator 130. The storage aggregator 130 also tracks storage device volume usage as shown at 304, 306 with example storage device volume entries and volume header shown at 330 and 340, respectively.”)],
wherein a combination of the first size of the first portion of the first storage device and the second size of the second portion of the second storage device is at least as large as the storage size [fig. 3; par. 0032-0033 – Aggregate volumes are exposed to consumers. (“FIG. 3 (with reference to FIG. 1) provides a logical view 300 of the storage aggregator 130 and of data flow and storage virtualization within the data storage network 100. As shown, the aggregate volumes 302 within or created by the storage aggregator 130 are the logical LUNs presented to the outside world, i.e., consumers 112 of host server system 110.” … “The storage aggregator 130 creates an aggregate volume structure when a new volume 302 is created, but the pages of the storage device volumes 350 are not allocated directly to aggregate volume pages. FIG. 3 provides one exemplary arrangement and possible field sizes and contents for the volume pages 310 and volume headers 320. The storage aggregator considers the pool of available pages to be smaller by the number of pages required for the new volume 302. Actual storage device volumes 350 are created by sending a physical volume create command to the storage aggregator 130. The storage aggregator 130 also tracks storage device volume usage as shown at 304, 306 with example storage device volume entries and volume header shown at 330 and 340, respectively.”)].
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 2-3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Colgrove (Pub. No. US 2016/0070482) in view of Kruger (U.S. Patent No. 9,519,577).
Claim 2 (as applied to claim 1 above):
Colgrove discloses all the limitations above but does not specifically disclose,
wherein the SSD is configured to update the physical capacity of the flash storage medium to a second physical capacity based at least in part on a block in the flash storage medium failing.
In the same field of endeavor, Kruger discloses,
wherein the SSD is configured to update the physical capacity of the flash storage medium to a second physical capacity based at least in part on a block in the flash storage medium failing [fig. 5; column 15, lines 1-54 – “At step 504, the respective flash memory device reduces its advertised size. In some embodiments, the difference between the current (or reduced) advertised size and the previous advertised size is equal to the number of flash memory blocks comprising a logical chunk. In some embodiments, the advertised size of a flash memory device is an amount of bytes or addresses advertised to storage controller 120 that is equal to the amount of logical chunks that are storing data and are available for storing data. In some embodiments, the advertised size of a flash memory device is equal to the difference between the total amount of flash memory blocks comprising the flash memory device and the number of failed flash memory blocks.”].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Colgrove to include reducing the advertised size of a storage device in response to failed blocks, as taught by Kruger, in order to improve performance and maintain data integrity by gradually migrating data from a failing device.
Claim 3 (as applied to claim 2 above):
Kruger discloses,
wherein the SSD is configured to advertise the updated physical capacity of the flash storage medium to the VSM [fig. 5; column 15, lines 1-54 – “At step 504, the respective flash memory device reduces its advertised size. In some embodiments, the difference between the current (or reduced) advertised size and the previous advertised size is equal to the number of flash memory blocks comprising a logical chunk. In some embodiments, the advertised size of a flash memory device is an amount of bytes or addresses advertised to storage controller 120 that is equal to the amount of logical chunks that are storing data and are available for storing data. In some embodiments, the advertised size of a flash memory device is equal to the difference between the total amount of flash memory blocks comprising the flash memory device and the number of failed flash memory blocks.”].
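For illustration only, the size-reduction behavior Kruger describes (advertised size equals total flash blocks less failed blocks) can be sketched as follows; the function name, block size, and example figures are assumptions for this sketch, not taken from the reference:

```python
def advertised_size(total_blocks: int, failed_blocks: int,
                    block_bytes: int = 4096) -> int:
    """Advertised capacity in bytes: the difference between the total
    number of flash blocks and the number of failed blocks, as in
    Kruger's description of step 504."""
    return (total_blocks - failed_blocks) * block_bytes

# A hypothetical 64 MiB device (16384 blocks) with 2 failed blocks
# advertises a correspondingly smaller capacity.
print(advertised_size(total_blocks=16384, failed_blocks=2))
```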
Claim(s) 8 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pellegrino et al. (Pub. No. US 2003/0145045) as applied to claim 6 above, and further in view of Bahirat et al. (Pub. No. US 2013/0346812).
Claim 8 (as applied to claim 6 above):
Pellegrino et al. disclose all the limitations above but do not specifically disclose, wherein:
the aggregation module includes an over-provisioning module to determine a first over-provisioning of the first storage device and to determine a second over-provisioning of the second storage device; and
the aggregation module is configured to determine the usable capacity of the virtual storage device based at least in part on the first physical capacity of the first storage device, the first over-provisioning of the first storage device, the second physical capacity of the second storage device, and the second over-provisioning of the second storage device.
In the same field of endeavor, Bahirat et al. disclose,
the aggregation module includes an over-provisioning module to determine a first over-provisioning of the first storage device and to determine a second over-provisioning of the second storage device [par. 0033 – “Some memory systems employ over-provisioning (OP) to prolong the lifetime of an SSD, for instance. OP can limit the accessible amount of memory allowed by the controller (e.g., controller 108 shown in FIG. 1) to less than the physical amount of memory present in a device. For instance, an SSD with 64 GB of physical memory can be over-provisioned to only allow 80% of its memory space to be used such that the memory space of the SSD appears (e.g., to a host) to be 51 GB. The over-provisioned 13 GB of memory can may not be accessible directly by a host, but can be treated as reserve and used by the controller in association with wear leveling, garbage collection, etc. For instance, the over-provisioned memory can be used to replace bad blocks (e.g., blocks determined to be unreliable) within the portion of the SSD accessible by the host. A block may be determined to be a bad block (e.g., via an error detection/correction component such as 118 shown in FIG. 1) based on a determined error rate corresponding thereto, for example. As another example, a block may also be determined to be a bad block once a process cycle count corresponding thereto reaches or exceeds a threshold cycle count.”]; and
the aggregation module is configured to determine the usable capacity of the virtual storage device based at least in part on the first physical capacity of the first storage device, the first over-provisioning of the first storage device, the second physical capacity of the second storage device, and the second over-provisioning of the second storage device [par. 0033 – “Some memory systems employ over-provisioning (OP) to prolong the lifetime of an SSD, for instance. OP can limit the accessible amount of memory allowed by the controller (e.g., controller 108 shown in FIG. 1) to less than the physical amount of memory present in a device. For instance, an SSD with 64 GB of physical memory can be over-provisioned to only allow 80% of its memory space to be used such that the memory space of the SSD appears (e.g., to a host) to be 51 GB. The over-provisioned 13 GB of memory can may not be accessible directly by a host, but can be treated as reserve and used by the controller in association with wear leveling, garbage collection, etc. For instance, the over-provisioned memory can be used to replace bad blocks (e.g., blocks determined to be unreliable) within the portion of the SSD accessible by the host. A block may be determined to be a bad block (e.g., via an error detection/correction component such as 118 shown in FIG. 1) based on a determined error rate corresponding thereto, for example. As another example, a block may also be determined to be a bad block once a process cycle count corresponding thereto reaches or exceeds a threshold cycle count.”].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Pellegrino et al. to include over-provisioning, as taught by Bahirat et al., in order to improve performance by providing a reserve area for housekeeping operations.
Claim 16 (as applied to claim 14 above):
Pellegrino et al. disclose all the limitations above but do not specifically disclose, wherein:
determining the first logical capacity of the first storage device based at least in part on the first physical capacity of the first storage device includes:
determining a first over-provisioning of the first storage device; and
determining the first logical capacity of the first storage device based on a difference between the first physical capacity of the first storage device and the first over-provisioning of the first storage device; and
determining the second logical capacity of the second storage device based at least in part on the second physical capacity of the second storage device includes:
determining a second over-provisioning of the second storage device [par. 0033 – “Some memory systems employ over-provisioning (OP) to prolong the lifetime of an SSD, for instance. OP can limit the accessible amount of memory allowed by the controller (e.g., controller 108 shown in FIG. 1) to less than the physical amount of memory present in a device. For instance, an SSD with 64 GB of physical memory can be over-provisioned to only allow 80% of its memory space to be used such that the memory space of the SSD appears (e.g., to a host) to be 51 GB. The over-provisioned 13 GB of memory can may not be accessible directly by a host, but can be treated as reserve and used by the controller in association with wear leveling, garbage collection, etc. For instance, the over-provisioned memory can be used to replace bad blocks (e.g., blocks determined to be unreliable) within the portion of the SSD accessible by the host. A block may be determined to be a bad block (e.g., via an error detection/correction component such as 118 shown in FIG. 1) based on a determined error rate corresponding thereto, for example. As another example, a block may also be determined to be a bad block once a process cycle count corresponding thereto reaches or exceeds a threshold cycle count.”]; and
determining the second logical capacity of the second storage device based on a difference between the second physical capacity of the second storage device and the second over-provisioning of the second storage device [par. 0033 – “Some memory systems employ over-provisioning (OP) to prolong the lifetime of an SSD, for instance. OP can limit the accessible amount of memory allowed by the controller (e.g., controller 108 shown in FIG. 1) to less than the physical amount of memory present in a device. For instance, an SSD with 64 GB of physical memory can be over-provisioned to only allow 80% of its memory space to be used such that the memory space of the SSD appears (e.g., to a host) to be 51 GB. The over-provisioned 13 GB of memory can may not be accessible directly by a host, but can be treated as reserve and used by the controller in association with wear leveling, garbage collection, etc. For instance, the over-provisioned memory can be used to replace bad blocks (e.g., blocks determined to be unreliable) within the portion of the SSD accessible by the host. A block may be determined to be a bad block (e.g., via an error detection/correction component such as 118 shown in FIG. 1) based on a determined error rate corresponding thereto, for example. As another example, a block may also be determined to be a bad block once a process cycle count corresponding thereto reaches or exceeds a threshold cycle count.”].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Pellegrino et al. to include over-provisioning, as taught by Bahirat et al., in order to improve performance by providing a reserve area for housekeeping operations.
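For illustration only, the capacity arithmetic recited in claims 8 and 16 (logical capacity as physical capacity less over-provisioning, aggregated across the backing devices) can be sketched as follows; the function names and the numeric figures (which loosely mirror Bahirat's 64 GB / ~20% example) are assumptions for this sketch:

```python
def logical_capacity(physical_gb: float, over_provision_gb: float) -> float:
    """Logical capacity of one device: physical capacity minus the
    space reserved for over-provisioning."""
    return physical_gb - over_provision_gb

def usable_capacity(devices: list[tuple[float, float]]) -> float:
    """Usable capacity of the virtual storage device: the sum of the
    logical capacities of each (physical, over-provisioning) pair."""
    return sum(logical_capacity(p, op) for p, op in devices)

# Two hypothetical 64 GB devices, each reserving 12.8 GB (20%) for OP.
print(f"{usable_capacity([(64.0, 12.8), (64.0, 12.8)]):.1f} GB usable")
```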
Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pellegrino et al. (Pub. No. US 2003/0145045) as applied to claim 6 above, and further in view of Adler et al. (Pub. No. US 2017/0364347).
Claim 11 (as applied to claim 6 above):
Pellegrino et al. disclose, wherein the allocation module is configured to:
allocate a first section of the first portion of the first storage device based at least in part on receiving a first write request from the application executing on the processor [fig. 3; par. 0032-0033 – Available space is determined and allocated to a consumer in response to a request. (“FIG. 3 (with reference to FIG. 1) provides a logical view 300 of the storage aggregator 130 and of data flow and storage virtualization within the data storage network 100. As shown, the aggregate volumes 302 within or created by the storage aggregator 130 are the logical LUNs presented to the outside world, i.e., consumers 112 of host server system 110.” … “The storage aggregator 130 creates an aggregate volume structure when a new volume 302 is created, but the pages of the storage device volumes 350 are not allocated directly to aggregate volume pages. FIG. 3 provides one exemplary arrangement and possible field sizes and contents for the volume pages 310 and volume headers 320. The storage aggregator considers the pool of available pages to be smaller by the number of pages required for the new volume 302. Actual storage device volumes 350 are created by sending a physical volume create command to the storage aggregator 130. The storage aggregator 130 also tracks storage device volume usage as shown at 304, 306 with example storage device volume entries and volume header shown at 330 and 340, respectively.”)]; and
allocate a second section of the second portion of the second storage device based at least in part on receiving a second write request from the application executing on the processor [fig. 3; par. 0032-0033 – Available space is determined and allocated to a consumer in response to a request. (“FIG. 3 (with reference to FIG. 1) provides a logical view 300 of the storage aggregator 130 and of data flow and storage virtualization within the data storage network 100. As shown, the aggregate volumes 302 within or created by the storage aggregator 130 are the logical LUNs presented to the outside world, i.e., consumers 112 of host server system 110.” … “The storage aggregator 130 creates an aggregate volume structure when a new volume 302 is created, but the pages of the storage device volumes 350 are not allocated directly to aggregate volume pages. FIG. 3 provides one exemplary arrangement and possible field sizes and contents for the volume pages 310 and volume headers 320. The storage aggregator considers the pool of available pages to be smaller by the number of pages required for the new volume 302. Actual storage device volumes 350 are created by sending a physical volume create command to the storage aggregator 130. The storage aggregator 130 also tracks storage device volume usage as shown at 304, 306 with example storage device volume entries and volume header shown at 330 and 340, respectively.”)].
However, Pellegrino et al. do not specifically disclose,
reserve the first portion of the first storage device to the application executing on the processor;
reserve the second portion of the second storage device to the application executing on the processor;
In the same field of endeavor, Adler et al. disclose,
reserve the first portion of the first storage device to the application executing on the processor [pars. 0014-0019 – “Systems and methods in accordance with various embodiments of the present disclosure overcome at least some of the above mentioned shortcomings and deficiencies by enabling techniques for segregating a monolithic computing device that contains many conventionally installed applications into separate dedicated containers or application storage volumes for each of those applications, where the containers can subsequently be attached or detached from the computing device as needed. Once the monolithic device has been segregated in such a manner, the application storage volumes can be managed remotely from a management server and can be enabled or disabled on the device based on instructions from an administrator. Additionally, the embodiments described herein enable the administrator to select which application storage volumes are migrated during an operating system (OS) upgrade on the computing device.”];
reserve the second portion of the second storage device to the application executing on the processor [pars. 0014-0019 – “Systems and methods in accordance with various embodiments of the present disclosure overcome at least some of the above mentioned shortcomings and deficiencies by enabling techniques for segregating a monolithic computing device that contains many conventionally installed applications into separate dedicated containers or application storage volumes for each of those applications, where the containers can subsequently be attached or detached from the computing device as needed. Once the monolithic device has been segregated in such a manner, the application storage volumes can be managed remotely from a management server and can be enabled or disabled on the device based on instructions from an administrator. Additionally, the embodiments described herein enable the administrator to select which application storage volumes are migrated during an operating system (OS) upgrade on the computing device.”];
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Pellegrino et al. to include dedicated containers or application storage volumes, as taught by Adler et al., in order to improve security by providing isolation.
Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pellegrino et al. (Pub. No. US 2003/0145045) as applied to claim 6 above, and further in view of Hashimoto (Pub. No. US 2014/0223088).
Claim 12 (as applied to claim 6 above):
Pellegrino et al. disclose all the limitations above but do not specifically disclose,
wherein the VSM is configured to set the first storage device in a read-only mode based at least in part on an updated physical capacity of the first storage device or an error count of the first storage device.
In the same field of endeavor, Hashimoto discloses,
wherein the VSM is configured to set the first storage device in a read-only mode based at least in part on an updated physical capacity of the first storage device or an error count of the first storage device [par. 0044; claim 17 – “RO mode (Read Only Mode): mode of prohibiting all processes involving write to the NAND type flash memory. The data already written from the host is guaranteed as much as possible when the SSD comes close to the end of its lifespan by returning an error respect to all the write requests from the host so as not to perform write. The SSD transitions to the RO mode when the number of remaining entries of the bad cluster table or the bad block table becomes smaller than or equal to a predetermined number or when the free block is insufficient.” … “The host device according to claim 16, wherein the controller is further configured to determine that an operation mode of the storage device is to be recognized as a read only mode when the error count exceeds the threshold value.”].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Pellegrino et al. to include placing the device in the read-only mode, as taught by Hashimoto, in order to maintain system integrity by preserving stored data when a device is nearing end of life.
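For illustration only, the read-only transition Hashimoto describes (write requests rejected once an error count crosses a threshold) can be sketched as follows; the class name, threshold value, and bookkeeping are assumptions for this sketch:

```python
class DeviceState:
    """Tracks a device's error count and transitions it to a
    read-only mode once the count exceeds a threshold, after which
    all write requests are rejected."""
    ERROR_THRESHOLD = 100  # illustrative value only

    def __init__(self) -> None:
        self.error_count = 0
        self.read_only = False
        self.bytes_written = 0

    def record_error(self) -> None:
        self.error_count += 1
        if self.error_count > self.ERROR_THRESHOLD:
            self.read_only = True  # further writes return an error

    def write(self, data: bytes) -> None:
        if self.read_only:
            raise PermissionError("device is in read-only mode")
        self.bytes_written += len(data)
```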
Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pellegrino et al. (Pub. No. US 2003/0145045) as applied to claim 6 above, and further in view of DeKoning et al. (Pub. No. US 2012/0185643).
Claim 13 (as applied to claim 6 above):
Pellegrino et al. disclose all the limitations above but do not specifically disclose the VSM, further comprising
a mapping module to map a logical address used by the application executing on the processor to an address on one of the first storage device or the second storage device.
In the same field of endeavor, DeKoning et al. disclose,
a mapping module to map a logical address used by the application executing on the processor to an address on one of the first storage device or the second storage device [par. 0006 – “Newer storage paradigms provide still further enhancements by distributing data over all disk drives of the entire storage system (e.g., "declustered" storage architecture). In one current embodiment of such a paradigm, the aggregate storage capacity of all storage devices in the system is treated as a pool of available physical storage and logical volumes defined by the RAID controller may be distributed in any useful manner over any of the pool of physical storage. Each logical volume is defined, in essence, by a mapping structure that identifies where blocks of data corresponding to logical block addresses of the logical volume are stored in the storage pool that is the physical disk drives of the system. These newer data distribution techniques may serve to provide, for example, faster recovery from drive failures, greater uniformity of performance across logical volumes, or lower power requirements. For example, a method known as Controlled Replication Under Scalable Hashing (CRUSH) may distribute data blocks of any single RAID level 5 stripe over any of the storage capacity of any of the storage devices of the system. CRUSH methods and structures utilize a hierarchical cluster map representing available storage devices in order to map logical to physical addresses and to permit migration of data all transparently with respect to attached host systems. CRUSH provides for a layer of virtualization above and beyond RAID logical volumes, wherein stored data may be migrated to any subset of the hundreds or even thousands of storage devices of the system. Furthermore, using CRUSH techniques, migration may occur as an online process, without interruption of processing of host I/O requests. 
In general, the storage controller in a storage system using the CRUSH architecture is coupled with all of the disk drives of the system to allow the controller complete flexibility to store and migrate physical storage anywhere it deems appropriate. Mapping features map all logical addresses and logical volumes to corresponding portions of physical storage. Other declustered, distributed storage management techniques are known to those of ordinary skill in the art where data is distributed over any of the storage devices of the storage system without regard to predefined, static groupings or clustering of the storage devices.”].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Pellegrino et al. to include mapping addresses, as taught by DeKoning et al., in order to improve the user experience by providing greater uniformity of performance across logical volumes.
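For illustration only, a mapping module of the kind DeKoning describes (logical addresses resolved to a physical address on one of several backing devices) can be sketched as a simple lookup table; the class and method names below are assumptions for this sketch:

```python
class MappingModule:
    """Maps an application's logical addresses to a (device,
    physical address) pair on one of the backing storage devices."""
    def __init__(self) -> None:
        self._table: dict[int, tuple[str, int]] = {}

    def map(self, logical: int, device: str, physical: int) -> None:
        """Record that a logical address is backed at the given
        physical address on the named device."""
        self._table[logical] = (device, physical)

    def resolve(self, logical: int) -> tuple[str, int]:
        """Look up the backing location; a real implementation would
        allocate on demand, but here an unmapped address is an error."""
        return self._table[logical]

mm = MappingModule()
mm.map(0x100, "ssd0", 0x7F000)
mm.map(0x200, "ssd1", 0x01000)
print(mm.resolve(0x100))  # ('ssd0', 520192)
```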
Claim(s) 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pellegrino et al. (Pub. No. US 2003/0145045) as applied to claim 17 above, and further in view of Roussos et al. (U.S. Patent No. 8,024,442).
Claim 18 (as applied to claim 17 above):
Pellegrino et al. disclose all the limitations above but do not specifically disclose, wherein:
reserving the first portion of the first storage device to the application executing on the processor includes reserving the first portion of the storage device to the application executing on the processor using thin provisioning; and
reserving the second portion of the second storage device to the application executing on the processor includes reserving the second portion of the second storage device to the application executing on the processor using thin provisioning.
In the same field of endeavor, Roussos et al. disclose,
reserving the first portion of the first storage device to the application executing on the processor includes reserving the first portion of the storage device to the application executing on the processor using thin provisioning [column 2, lines 42-56 – “Thin provisioning is a provisioning operation that allows space to be allocated to servers, on a just-enough and just-in-time basis, as compared to pre-allocating a large amount of space to account for possible data growth. Thin provisioning may be used in many applications where access to the same storage resource pool is used, allowing administrators to maintain space in the resource pool to service the data growth requirements of the many applications on an ongoing basis. Thin provisioning may allow organizations to purchase less storage capacity up front, and defer storage capacity upgrades with actual increase in usage. In contrast, in fat provisioning, typically large amounts of storage capacity are pre-allocated to individual applications. Most of the storage resources in fat provisioning remain unused (e.g., not written to), resulting in poor utilization rates.”]; and
reserving the second portion of the second storage device to the application executing on the processor includes reserving the second portion of the second storage device to the application executing on the processor using thin provisioning [column 2, lines 42-56 – “Thin provisioning is a provisioning operation that allows space to be allocated to servers, on a just-enough and just-in-time basis, as compared to pre-allocating a large amount of space to account for possible data growth. Thin provisioning may be used in many applications where access to the same storage resource pool is used, allowing administrators to maintain space in the resource pool to service the data growth requirements of the many applications on an ongoing basis. Thin provisioning may allow organizations to purchase less storage capacity up front, and defer storage capacity upgrades with actual increase in usage. In contrast, in fat provisioning, typically large amounts of storage capacity are pre-allocated to individual applications. Most of the storage resources in fat provisioning remain unused (e.g., not written to), resulting in poor utilization rates.”].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Pellegrino et al. to include thin provisioning, as taught by Roussos et al., in order to reduce costs by providing more effective storage utilization.
Allowable Subject Matter
Claims 19-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
The prior art does not disclose the limitations of the listed claims in conjunction with the limitations of the base claim and intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Mizrachi et al. (Pub. No. US 2008/0091868) disclose, “In an exemplary embodiment of the invention, the MSI-X interrupts may be edge triggered since the interrupt may be signaled with a posted write command by the device targeting a pre-allocated area of memory on the host bridge. However, some host bridges may have the ability to latch the acceptance of an MSI-X message and may effectively treat it as a level signaled interrupt. The MSI-X interrupts may enable writing to a segment of memory instead of asserting a given IRQ pin. Each device may have one or more unique memory locations to which MSI-X messages may be written. The MSI interrupts may enable data to be pushed along with the MSI event, allowing for greater functionality. The MSI-X interrupt mechanism may enable the system software to configure each vector with an independent message address and message data that may be specified by a table that may reside in host memory. The MSI-X mechanism may enable the device functions to support two or more vectors, which may be configured to target different CPUs to increase scalability.” [par. 0048].
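For illustration only (this sketch is not part of the record and all names and addresses in it are hypothetical), the MSI-X mechanism quoted from Mizrachi et al. can be modeled as a per-vector table of independent message addresses and data, where a device signals an interrupt by performing a posted write of the vector's data to its configured address instead of asserting an IRQ pin:

```python
# Toy model of the quoted MSI-X scheme: system software configures each
# vector with its own message address and data; the device "raises" the
# interrupt by a posted write to that address. Addresses are hypothetical.

class MsixTable:
    """Per-device MSI-X vector table (message address + data per vector)."""

    def __init__(self, num_vectors):
        self.vectors = [{"addr": None, "data": None} for _ in range(num_vectors)]

    def configure(self, vec, addr, data):
        # Each vector gets an independent address/data pair; different
        # vectors may target different CPUs for scalability.
        self.vectors[vec] = {"addr": addr, "data": data}


class Host:
    """Host memory that receives posted writes from the device."""

    def __init__(self):
        self.memory = {}

    def posted_write(self, addr, data):
        self.memory[addr] = data


def signal_interrupt(host, table, vec):
    # Signaling = posted write of the vector's message data to its address,
    # rather than asserting a dedicated IRQ pin.
    entry = table.vectors[vec]
    host.posted_write(entry["addr"], entry["data"])


host = Host()
table = MsixTable(num_vectors=2)
table.configure(0, addr=0xFEE00000, data=0x41)  # hypothetical: vector 0 -> CPU0
table.configure(1, addr=0xFEE01000, data=0x42)  # hypothetical: vector 1 -> CPU1
signal_interrupt(host, table, 1)                # only vector 1 fires
```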
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LARRY T MACKALL whose telephone number is (571)270-1172. The examiner can normally be reached Monday - Friday, 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald G Bragdon can be reached at (571) 272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
LARRY T. MACKALL
Primary Examiner
Art Unit 2131
27 December 2025
/LARRY T MACKALL/Primary Examiner, Art Unit 2139