DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are presented for examination.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2, 3, 11, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (hereinafter CHEN) (US 2017/0031825 A1) in view of Srinivasan et al. (hereinafter SRINIVASAN) (US 9,778,865 B1).
As to claim 2, CHEN teaches a method for cache management in a hyperconverged infrastructure (HCI) (multi-host and common storage pool architecture, etc.) comprising a plurality of physical nodes (PNs) (Nodes 310-1, 310-2,…310-N), the method comprising (Abstract; [0003]; [0016]-[0018]; Fig. 1):
receiving, utilizing one or more processors (CPU(s) 111), a primary plurality of input/output (I/O) requests (requests or queries) at a plurality of virtual machines (VMs) (virtual machines (VMs) 130), each of the plurality of VMs running on a respective corresponding PN of the plurality of PNs (Abstract; [0024]; [0027]; Fig 1);
allocating, utilizing the one or more processors, a plurality of local caches (LCs) (local cache or Cache 115-1, 115-2, … in Host 100-1) and a plurality of remote caches (RCs) (cache of the target host; Cache 115-n in Host 100-n through H2H 410) to the plurality of VMs based on the primary plurality of I/O requests (virtual machines (VMs) 130) (Abstract; [0027]; [0016]; [0020]; [0034]; [0046]; Fig. 1);
receiving, utilizing the one or more processors, a secondary (at a later time) plurality of I/O requests at the plurality of VMs (update metadata that enables it or another host to retrieve the correct data at a later time in response to a read request from an application or VM) ([0044]; [0023]-[0024]); and
serving, utilizing the one or more processors (CPU(s) 111), the secondary plurality of I/O requests (if there is a cache hit, the processor may read the identical data faster – typically much faster – from the cache) based on the plurality of LCs (local cache or Cache 115-1, 115-2, … in Host 100-1) and the plurality of RCs (cache of the target host; Cache 115-n in Host 100-n through H2H 410) (Abstract; [0005]; [0027]; [0016]; [0034]; [0044]-[0046]; Fig. 1).
Under the broadest reasonable interpretation, CHEN’s multi-host, plurality of Nodes 310, and a common storage pool architecture represents a “hyperconverged infrastructure” or HCI (Abstract; [0003]; [0016]-[0018]; Fig. 1). It is also noted that the recitation of the HCI is in the preamble.
Nonetheless, SRINIVASAN is introduced to show an explicit teaching of a hyper-converged infrastructure (HCI) system that includes HCI Units 240 and Physical Computing Servers 250 that run an Input/Output (IO) stack 260 to process IO requests 254 from host application instances 252. SRINIVASAN also teaches that VOEs 540 are respective virtual machines managed by the hypervisor, and that each host application instance may run in its own virtual machine. Furthermore, data can be stored to a local memory cache 522 on a physical computing server 250 and to a remote cache on the other physical computing server (Abstract; col. 11, lines 1-27; Figs. 2-3, 5, 8). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify CHEN to employ SRINIVASAN's HCI architecture, as described above. The suggestion/motivation for doing so would have been to provide the predicted result of providing integration abilities at the sub-assembly level. For example, a hyper-converged system may include servers that perform multiple roles, such as any combination of compute, storage, and networking (SRINIVASAN: col. 1, lines 15-20), and may do so in a more efficient way that reduces network latency, for example (SRINIVASAN: col. 3, lines 1-35).
As to claim 3, CHEN ([0027]; [0016]; [0032]) in view of SRINIVASAN (Abstract; col. 11, lines 1-27; Figs 2-3, 5, 8) teaches the method of claim 2, wherein allocating the plurality of LCs and the plurality of RCs comprises allocating an (i, j)th LC of the plurality of LCs and an (i, j)th RC of the plurality of RCs to an (i, j)th VM of the plurality of VMs where 1≤i≤N, 1≤j≤Ni, N is a number of the plurality of PNs, Ni is a number of the plurality of VMs running on an ith PN of the plurality of PNs, wherein: the (i, j)th VM runs on the ith PN; the (i, j)th LC comprises a portion of a cache space of the ith PN; and the (i, j)th RC comprises a portion of a cache space of an lth PN of the plurality of PNs where 1≤l≤N and l≠i.
It is noted that the (i, j), N, and Ni, etc., are merely mathematical indexing and are inherent in a multi-host, multi-VM system.
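For illustration only (not part of the record), the indexing scheme recited in claim 3 can be sketched as follows. The function name, data layout, and the choice of the "next" PN as the remote-cache host are hypothetical; the sketch assumes at least two PNs so that a PN l with l ≠ i always exists.

```python
# Illustrative sketch of the claim 3 indexing: the (i, j)th VM runs on the
# ith of N physical nodes (PNs), receives an LC carved from PN i's cache
# space, and an RC carved from the cache space of some PN l with l != i.
def allocate_caches(vm_counts):
    """vm_counts[i-1] = Ni, the number of VMs on the ith of N PNs (N >= 2).
    Returns a mapping (i, j) -> {LC on PN i, RC on PN l != i}."""
    N = len(vm_counts)
    allocation = {}
    for i in range(1, N + 1):
        for j in range(1, vm_counts[i - 1] + 1):
            l = i % N + 1  # any PN other than PN i; here simply the next PN
            allocation[(i, j)] = {"LC_on_PN": i, "RC_on_PN": l}
    return allocation

# Example: N = 3 PNs with N1 = 2, N2 = 1, N3 = 3 VMs.
alloc = allocate_caches([2, 1, 3])
```

The sketch merely enumerates index pairs; it does not reflect any particular allocation policy of CHEN or SRINIVASAN.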
As to claim 11, CHEN teaches the method of claim 3, further comprising updating the plurality of LCs and the plurality of RCs by one of: minimizing a traffic of a network connecting the plurality of PNs (minimize chatter across hosts) ([0027]; [0016]; [0035]); and balancing the secondary plurality of I/O requests between the plurality of PNs (spreading the load of cache misses; balance CPU and memory load across the hosts) ([0016]; [0028]).
As to claim 14, CHEN (flash caches) ([0027]; [0016]; [0032]; [0044]) in view of SRINIVASAN (memory 520 can include solid state drives) (Abstract; col. 11, lines 1-27; Figs 2-3, 5, 8) teaches the method of claim 2, wherein allocating the plurality of LCs and the plurality of RCs to the plurality of VMs comprises allocating a plurality of solid-state drives to the plurality of VMs.
As to claim 15, it is rejected for the same reasons as stated in the rejections of claims 2 and 3.
Allowable Subject Matter
Claim 1 is allowed over the prior art.
Claims 4-10, 12-13, and 16-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KENNETH TANG whose telephone number is (571)272-3772. The examiner can normally be reached Monday-Friday 7AM-3PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets can be reached at 571-272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KENNETH TANG/Primary Examiner, Art Unit 2197