DETAILED ACTION
This Action is responsive to the application filed on 11/18/2024.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 1-20 are pending and have been examined.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,164,424. Although the claims at issue are not identical, they are not patentably distinct from each other, as shown in the following analysis:
Instant Application 18/950,386 vs. U.S. Patent No. 12,164,424 (each claim of the instant application is immediately followed by the corresponding claim of the patent):
1. An apparatus comprising: storage comprising a range of stored data, wherein a first cache level of the storage is segmented into a plurality of first data sub-ranges of the range of stored data, a second cache level of the storage is segmented into a plurality of second data sub-ranges of the range of stored data, and the plurality of first data sub-ranges and the plurality of second data sub-ranges are associated with a plurality of node devices; and a processing device to process an access request associated with data within the range of stored data by accessing one or more node devices of the plurality of node devices.
1. A distributed cache apparatus comprising: storage comprising a range of stored data, wherein a first cache level of the storage is segmented into a plurality of first data sub-ranges of the range of stored data, and each of the plurality of first data sub-ranges is associated with one of a first subset of a plurality of node devices, wherein a second cache level of the storage is segmented into a plurality of second data sub-ranges of the range of stored data, and each of the plurality of second data sub-ranges is associated with one of a second subset of the plurality of node devices, and wherein each of the plurality of second data sub-ranges is smaller than each of the plurality of first data sub-ranges; and a processing device to process a read request for data within the range of stored data by accessing one of the second subset of the plurality of node devices.
2. The apparatus of claim 1, wherein contents of the first data sub-ranges associated with respective ones of a first subset of the plurality of node devices are loaded from one or more of a second subset of the plurality of node devices.
2. The apparatus of claim 1, wherein contents of the first data sub-ranges associated with respective ones of first subset of the plurality of node devices are loaded from one or more of the second subset of the plurality of node devices.
3. The apparatus of claim 1, wherein a third cache level is segmented into a plurality of third data sub-ranges of the range of stored data, and each of the plurality of third data sub-ranges is associated with one of a third subset of the plurality of node devices, and wherein each of the plurality of second data sub-ranges is smaller than each of the plurality of third data sub-ranges.
3. The apparatus of claim 1, wherein a third cache level is segmented into a plurality of third data sub-ranges of the range of stored data, and each of the plurality of third data sub-ranges is associated with one of a third subset of the plurality of node devices, and wherein each of the plurality of second data sub-ranges is smaller than each of the plurality of third data sub-ranges.
4. The apparatus of claim 1, wherein, responsive to a failure of a failed node device of a second subset of the plurality of node devices, a replacement node device of a first subset of the plurality of node devices is moved from the first subset of the plurality of node devices to the second subset of the plurality of node devices by associating, with the replacement node device, a second data sub-range of the plurality of second data sub-ranges that was associated with the failed node device.
4. The apparatus of claim 1, wherein, responsive to a failure of a failed node device of the second subset of the plurality of node devices, a replacement node device of the first subset of the plurality of node devices is moved from the first subset of the plurality of node devices to the second subset of the plurality of node devices by associating, with the replacement node device, a second data sub-range of the plurality of second data sub-ranges that was associated with the failed node device.
5. The apparatus of claim 1, wherein a first data sub-range associated with a first node device of a first subset of the plurality of node devices overlaps with two of the plurality of second data sub-ranges.
5. The apparatus of claim 1, wherein a first data sub-range associated with a first node device of the first subset of the plurality of node devices overlaps with two of the plurality of second data sub-ranges.
6. The apparatus of claim 1, wherein the range of stored data comprises a plurality of keys, and wherein the processing device is further to: receive the access request to access a stored key of the plurality of keys; generate a hash of the key, wherein the hash is associated with a target second data sub-range of the plurality of second data sub-ranges; and access a second node device of a second subset of the plurality of node devices that is associated with the target second data sub-range to retrieve the stored key.
6. The apparatus of claim 1, wherein the range of stored data comprises a plurality of keys, and wherein the processing device is further to: receive the read request to read a stored key of the plurality of keys; generate a hash of the key, wherein the hash is associated with a target second data sub-range of the plurality of second data sub-ranges; and access a second node device of the second subset of the plurality of node devices that is associated with the target second data sub-range to retrieve the stored key.
7. The apparatus of claim 6, wherein, responsive to determining that the stored key is not present in the target second data sub-range, returning an error indicating that the stored key is not available.
7. The apparatus of claim 6, wherein, responsive to determining that the stored key is not present in the target second data sub-range, returning an error indicating that the stored key is not available.
8. The apparatus of claim 1, wherein the range of stored data comprises a plurality of keys, and wherein a processing device of a second node device of a second subset of the plurality of node devices is to: receive a request to write a stored key of the plurality of keys, the stored key associated with a second data sub-range of the plurality of second data sub-ranges that is associated with the second node device and with a first data sub-range of the plurality of first data sub-ranges that is associated with a first node device of a first subset of the plurality of node devices; update a value of the stored key within the second data sub-range; and access the first node device that is associated with the first data sub-range to update the stored key within the first data sub-range.
8. The apparatus of claim 1, wherein the range of stored data comprises a plurality of keys, and wherein a processing device of a second node device of the second subset of the plurality of node devices is to: receive a request to write a stored key of the plurality of keys, the stored key associated with a second data sub-range of the plurality of second data sub-ranges that is associated with the second node device and with a first data sub-range of the plurality of first data sub-ranges that is associated with a first node device of the first subset of the plurality of node devices; update a value of the stored key within the second data sub-range; and access the first node device that is associated with the first data sub-range to update the stored key within the first data sub-range.
9. A method of operating a distributed cache of storage comprising a range of stored data, the method comprising: segmenting a first cache level of the storage into a plurality of first data sub-ranges of the range of stored data; segmenting a second cache level of the storage into a plurality of second data sub-ranges of the range of stored data, the plurality of first data sub-ranges and the plurality of second data sub-ranges are associated with a plurality of node devices; and processing an access request associated with data within the range of stored data by accessing one or more node devices of the plurality of node devices.
9. A method of operating a distributed cache of storage comprising a range of stored data, the method comprising: assigning a first subset of a plurality of node devices to a first cache level of the storage, wherein the first cache level is segmented into a plurality of first data sub-ranges of the range of stored data and each of the first subset of the plurality of node devices is associated with one of the plurality of first data sub-ranges; assigning, by a processing device, a second subset of the plurality of node devices to a second cache level of the storage, wherein the second cache level is segmented into a plurality of second data sub-ranges of the range of stored data and each of the second subset of the plurality of node devices is associated with one of the plurality of second data sub-ranges, and wherein each of the plurality of second data sub-ranges is smaller than each of the plurality of first data sub-ranges; and processing a read request for data within the range of stored data by accessing one of the second subset of the plurality of node devices.
10. The method of claim 9, wherein further comprising loading contents of the first data sub-ranges associated with respective ones of a first subset of the plurality of node devices from one or more of a second subset of the plurality of node devices.
10. The method of claim 9, wherein further comprising loading contents of the first data sub-ranges associated with respective ones of first subset of the plurality of node devices from one or more of the second subset of the plurality of node devices.
11. The method of claim 9, further comprising: assigning a third subset of the plurality of node devices to a third cache level of the storage, wherein the third cache level is segmented into a plurality of third data sub-ranges of the range of stored data and each of the third subset of the plurality of node devices is associated with one of the plurality of third data sub-ranges, wherein each of the plurality of second data sub-ranges is smaller than each of the plurality of third data sub-ranges.
11. The method of claim 9, further comprising: assigning a third subset of the plurality of node devices to a third cache level of the storage, wherein the third cache level is segmented into a plurality of third data sub-ranges of the range of stored data and each of the third subset of the plurality of node devices is associated with one of the plurality of third data sub-ranges, wherein each of the plurality of second data sub-ranges is smaller than each of the plurality of third data sub-ranges.
12. The method of claim 9, further comprising: responsive to a failure of a failed node device of a second subset of the plurality of node devices, moving a replacement node device of a first subset of the plurality of node devices from the first subset of the plurality of node devices to the second subset of the plurality of node devices by associating, with the replacement node device, the second data sub-range that was associated with the failed node device.
12. The method of claim 9, further comprising: responsive to a failure of a failed node device of the second subset of the plurality of node devices, moving a replacement node device of the first subset of the plurality of node devices from the first subset of the plurality of node devices to the second subset of the plurality of node devices by associating, with the replacement node device, the second data sub-range that was associated with the failed node device.
13. The method of claim 9, wherein a first data sub-range associated with a first node device of a first subset of the plurality of node devices overlaps with two of the plurality of second data sub-ranges.
13. The method of claim 9, wherein a first data sub-range associated with a first node device of the first subset of the plurality of node devices overlaps with two of the plurality of second data sub-ranges.
14. The method of claim 9, wherein the range of stored data comprises a plurality of keys, and wherein the method further comprises: receiving a request to write a stored key of the plurality of keys, the stored key associated with the second data sub-range that is associated with a second node device of a second subset of the plurality of node devices and with a first data sub-range of the plurality of first data sub-ranges that is associated with a first node device of a first subset of the plurality of node devices; updating a value of the stored key within the second data sub-range; and accessing the first node device that is associated with the first data sub-range to update the stored key within the first data sub-range.
14. The method of claim 9, wherein the range of stored data comprises a plurality of keys, and wherein the method further comprises: receiving a request to write a stored key of the plurality of keys, the stored key associated with the second data sub-range that is associated with a second node device of the second subset of the plurality of node devices and with a first data sub-range of the plurality of first data sub-ranges that is associated with a first node device of the first subset of the plurality of node devices; updating a value of the stored key within the second data sub-range; and accessing the first node device that is associated with the first data sub-range to update the stored key within the first data sub-range.
15. A non-transitory computer-readable storage medium including instructions that, when executed by a processing device that is coupled to a storage comprising a range of stored data, cause the processing device to: segment a first cache level of the storage into a plurality of first data sub-ranges of the range of stored data; segment a second cache level of the storage into a plurality of second data sub-ranges of the range of stored data, the plurality of first data sub-ranges and the plurality of second data sub-ranges are associated with a plurality of node devices; and process, by the processing device, an access request associated with data within the range of stored data by accessing one or more node devices of the plurality of node devices.
15. A non-transitory computer-readable storage medium including instructions that, when executed by a processing device, cause the processing device to: assign a first subset of a plurality of node devices to a first cache level of storage comprising a range of stored data, wherein the first cache level is segmented into a plurality of first data sub-ranges of the range of stored data and each of the first subset of the plurality of node devices is associated with one of the plurality of first data sub-ranges; assign, by the processing device, a second subset of the plurality of node devices to a second cache level of the storage, wherein the second cache level is segmented into a plurality of second data sub-ranges of the range of stored data and each of the second subset of the plurality of node devices is associated with one of the plurality of second data sub-ranges, and wherein each of the plurality of second data sub-ranges is smaller than each of the plurality of first data sub-ranges; and process a read request for data within the range of stored data by accessing one of the second subset of the plurality of node devices.
16. The non-transitory computer-readable storage medium of claim 15, wherein the processing device is further to load contents of the first data sub-ranges associated with respective ones of a first subset of the plurality of node devices from one or more of a second subset of the plurality of node devices.
16. The non-transitory computer-readable storage medium of claim 15, wherein the processing device is further to load contents of the first data sub-ranges associated with respective ones of first subset of the plurality of node devices from one or more of the second subset of the plurality of node devices.
17. The non-transitory computer-readable storage medium of claim 15, wherein the processing device is further to: assign a third subset of the plurality of node devices to a third cache level of the storage, wherein the third cache level is segmented into a plurality of third data sub-ranges of the range of stored data and each of the third subset of the plurality of node devices is associated with one of the plurality of third data sub-ranges, wherein each of the plurality of second data sub-ranges is smaller than each of the plurality of third data sub-ranges.
17. The non-transitory computer-readable storage medium of claim 15, wherein the processing device is further to: assign a third subset of the plurality of node devices to a third cache level of the storage, wherein the third cache level is segmented into a plurality of third data sub-ranges of the range of stored data and each of the third subset of the plurality of node devices is associated with one of the plurality of third data sub-ranges, wherein each of the plurality of second data sub-ranges is smaller than each of the plurality of third data sub-ranges.
18. The non-transitory computer-readable storage medium of claim 15, wherein the processing device is further to: responsive to a failure of a failed node device of a second subset of the plurality of node devices, move a replacement node device of a first subset of the plurality of node devices from the first subset of the plurality of node devices to the second subset of the plurality of node devices by associating, with the replacement node device, the second data sub-range that was associated with the failed node device.
18. The non-transitory computer-readable storage medium of claim 15, wherein the processing device is further to: responsive to a failure of a failed node device of the second subset of the plurality of node devices, move a replacement node device of the first subset of the plurality of node devices from the first subset of the plurality of node devices to the second subset of the plurality of node devices by associating, with the replacement node device, the second data sub-range that was associated with the failed node device.
19. The non-transitory computer-readable storage medium of claim 15, wherein a first data sub-range associated with a first node device of a first subset of the plurality of node devices overlaps with two of the plurality of second data sub-ranges.
19. The non-transitory computer-readable storage medium of claim 15, wherein a first data sub-range associated with a first node device of the first subset of the plurality of node devices overlaps with two of the plurality of second data sub-ranges.
20. The non-transitory computer-readable storage medium of claim 15, wherein the range of stored data comprises a plurality of keys, and wherein the processing device is further to: receive a request to write a stored key of the plurality of keys, the stored key associated with the second data sub-range that is associated with a second node device of a second subset of the plurality of node devices and with a first data sub-range of the plurality of first data sub-ranges that is associated with a first node device of a first subset of the plurality of node devices; update a value of the stored key within the second data sub-range; and access the first node device that is associated with the first data sub-range to update the stored key within the first data sub-range.
20. The non-transitory computer-readable storage medium of claim 15, wherein the range of stored data comprises a plurality of keys, and wherein the processing device is further to: receive a request to write a stored key of the plurality of keys, the stored key associated with the second data sub-range that is associated with a second node device of the second subset of the plurality of node devices and with a first data sub-range of the plurality of first data sub-ranges that is associated with a first node device of the first subset of the plurality of node devices; update a value of the stored key within the second data sub-range; and access the first node device that is associated with the first data sub-range to update the stored key within the first data sub-range.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 14 and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding Claim 14,
Claim 14 recites “the second data sub-range” in the 4th line, the scope of which cannot be determined due to numerous reasonable interpretations. In particular, examiner cannot determine to which of the positively-recited “plurality of second data sub-ranges” of Claim 9 the aforementioned limitation refers. Therefore, the scope of Claim 14 is indefinite, and the claim is rejected under 35 U.S.C. 112(b). Examiner recommends applicant amend Claim 14 to instead read “a second data sub-range” in order to overcome this rejection.
Regarding Claim 20,
Claim 20 recites “the second data sub-range” in the 5th line, the scope of which cannot be determined due to numerous reasonable interpretations. In particular, examiner cannot determine to which of the positively-recited “plurality of second data sub-ranges” of Claim 15 the aforementioned limitation refers. Therefore, the scope of Claim 20 is indefinite, and the claim is rejected under 35 U.S.C. 112(b). Examiner recommends applicant amend Claim 20 to instead read “a second data sub-range” in order to overcome this rejection.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 5, 9-10, 13, 15-16, and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chu et al. (US 20110153737 A1) (hereinafter referred to as Chu).
Regarding Claim 1,
Chu anticipates the following limitations:
An apparatus (Decomposed Chord network 200, Fig. 2) comprising:
storage comprising a range of stored data (“The nodes 110 each store files which may be shared among the nodes 110” [0024] // “a Chord network with key space… In Chord, a file is stored at a node that has a node ID that matches the file ID of the file” [0029] // ¶¶0055-56 // Figs. 2 + 10) – Decomposed Chord Network 200 comprises a plurality of nodes storing files (i.e., storing “a range of stored data”)--,
wherein a first cache level (Sub-Network 2041, Fig. 2) of the storage is segmented into a plurality of first data sub-ranges of the range of stored data (Fig. 2 // “In a decomposed network, each of the sub-networks uses the same key space (M bits) and the same key space hash function” [0057] // “a P2P network may be decomposed into any number of sub-networks at any number hierarchical levels” [0080] // ¶¶0026; 0029; 0086-87) – As shown in Fig. 2 and taught in ¶0080, a decomposed network is comprised of plural sub-networks corresponding to plural “hierarchical levels” of the network. Each sub-network includes an M-bit sized key space (¶0057); comprises a plurality of nodes (¶0087); and assigns particular files to particular nodes within the sub-network by hashing the file name into the sub-network key space (¶¶0026;0029); effectively creating a respective range of files stored on each node of a sub-network. In the context of Fig. 2, examiner considers the ranges of files stored in the key space of Sub-Network 2041 as “a plurality of first data sub-ranges”--,
a second cache level (Sub-Network 2043, Fig. 2) of the storage is segmented into a plurality of second data sub-ranges of the range of stored data (¶¶0057;0087 // “each sub-network stores all files of the decomposed P2P network” [0206]) – As taught in ¶0206, each sub-network stores all files within the decomposed network 200 (i.e., each sub-network stores respective sub-ranges of “the range of stored data”). In the context of Fig. 2, examiner considers the ranges of files stored in the key space of Sub-Network 2043 as “a plurality of second data sub-ranges”--, and
the plurality of first data sub-ranges and the plurality of second data sub-ranges are associated with a plurality of node devices (Fig. 2 // “subsets of the thirty-six active nodes of the Chord network 100 are members of sub-networks … respectively” [0086] // ¶0087 // Fig. 10) – As shown in Fig. 2, each sub-network includes a plurality of active node devices; and
a processing device (Node 1000, Fig. 10 // ¶0091) to process (¶¶0223-224; Fig. 3) an access request (“causing a file to be stored” [0091]) associated with data within the range of stored data by accessing (Fig. 3, steps 308 + 310) one or more node devices (“one or more active nodes of the sub-network” [0095]) of the plurality of node devices (Fig. 3 // “FIG. 3 depicts one embodiment of a method for use by a node in causing a file to be stored within a decomposed P2P network” [0091] // “At step 308, an active node(s) associated with the identified sub-network(s) is identified … At step 310, a process is initiated for causing the file to be stored in each of the identified sub-networks.” [0095-99]) – As shown in Fig. 3, storing files in the decomposed network 200 includes identifying one or more active nodes (step 308) of a particular sub-network (see steps 304 + 306) and subsequently storing files to the identified active nodes (step 310) (i.e., at least “accessing” the identified active nodes of the particular sub-network).
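The file-placement flow the examiner maps above (Fig. 3, steps 304-310 of Chu) can be illustrated with a short sketch. This is an illustrative reading of the cited passages, not code from Chu; the key-space size (M = 6), the use of SHA-1 as the key-space hash, and all function names are assumptions made for the example:

```python
import hashlib

M = 6  # illustrative key-space size (2**M IDs); Chu specifies only "M bits"

def file_id(name: str, m: int = M) -> int:
    """Hash a file name into the shared M-bit key space (cf. ¶¶0026, 0029, 0057)."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** m)

def successor(fid: int, active_nodes: list[int]) -> int:
    """First active node with node ID >= file ID, wrapping around the ring (cf. ¶0053)."""
    nodes = sorted(active_nodes)
    for nid in nodes:
        if nid >= fid:
            return nid
    return nodes[0]  # no node ID >= file ID: wrap to the smallest node

def store(name: str, sub_networks: dict[str, list[int]]) -> dict[str, int]:
    """Sketch of Fig. 3: for each identified sub-network, identify the
    target active node (step 308) where the file would be stored (step 310);
    every sub-network holds all files per ¶0206."""
    fid = file_id(name)
    return {sn: successor(fid, nodes) for sn, nodes in sub_networks.items()}
```

Because every sub-network hashes into the same key space, the same file ID resolves to a (generally different) active node in each sub-network.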
Regarding Claim 2,
Chu anticipates the following limitations:
The apparatus of claim 1, wherein contents of the first data sub-ranges associated with respective ones of a first subset (“active nodes” [0027] // Fig. 2) of the plurality of node devices are loaded from one or more of a second subset (“seed nodes” [0053]) of the plurality of node devices (“A file is originally introduced into the Chord network by a member of the Chord network (where the member node is referred to as a seed node of the file) … The seed node will locate the first active node in the Chord network that has a node ID that is equal or greater than the file ID. The seed node then sends the file to the located node.” [0053] // Fig. 2 // ¶0027) – As taught in Chu, the decomposed network 200 includes several types of nodes, including “active” nodes (¶0027) which currently participate in the network; as well as “seed” nodes (¶0050) which store files into the network. In this context, examiner considers the concepts of active nodes and seed nodes as the claimed “first subset” and “second subset” of nodes in the decomposed network 200, respectively. One of ordinary skill in the art would accordingly understand that each file of sub-network 2041 (i.e., “contents of the first data sub-ranges”) would be received by an active node of the sub-network from a seed node. In such a context, data stored by active nodes are received from (i.e., are “loaded from”) seed nodes.
Regarding Claim 5,
Chu anticipates the following limitations:
The apparatus of claim 1, wherein a first data sub-range associated with a first node device of a first subset of the plurality of node devices overlaps with two of the plurality of second data sub-ranges (“The seed node will locate the first active node in the Chord network that has a node ID that is equal to or greater than the file ID” [0053] // “In a decomposed network, each of the sub-networks uses the same key space (M bits) and the same key space hash function” [0057] // “In decomposed network 200, nodes 0 … and 58 are active nodes in sub-network 2041, … and nodes 2 … and 58 are active nodes in sub-network 2043” [0087]) – As previously discussed and as taught in Chu, each sub-network uses the same hash function for the same M-bit-sized key space (¶0057). Sub-networks have respective active member nodes (¶0086); and sub-networks include differing numbers of active nodes (e.g., sub-network 2042 includes 15 active nodes whereas sub-network 2043 includes 18 active nodes). As clarified in ¶0053, file IDs are used to distribute files across the key space of a sub-network, such that a file is stored at the first active node in a sub-network that has a node ID greater than or equal to the file ID. Applied to the example sub-networks of Fig. 2 (see ¶0087, where sub-network 2041 includes active nodes 6, 7 and 13; and sub-network 2043 includes active nodes 5, 9, and 12), node 13 of sub-network 2041 would store file IDs from (7-13] (i.e., “a first data sub-range”). Nodes 9 and 12 of sub-network 2043 store file IDs from (5-9] and (9-12], respectively (i.e., “two of the plurality of second data sub-ranges” which “overlap with” the range of the aforementioned node 13).
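The overlap asserted in this mapping can be checked mechanically. A minimal sketch using the node IDs from the example above; the helper names are illustrative, and the smallest node's wrap-around range is kept as-is rather than split:

```python
def ranges(active_nodes):
    """Map each active node to the half-open range (lo, hi] of file IDs it
    stores: node n holds (predecessor, n] under the successor rule of ¶0053.
    The smallest node's range (largest node, smallest node] wraps the ring."""
    nodes = sorted(active_nodes)
    return {n: (nodes[i - 1], n) for i, n in enumerate(nodes)}

def overlaps(a, b):
    """Two half-open ranges (lo, hi] overlap iff each starts before the other ends."""
    return a[0] < b[1] and b[0] < a[1]

sub1 = ranges([6, 7, 13])   # sub-network 2041
sub3 = ranges([5, 9, 12])   # sub-network 2043
overlaps(sub1[13], sub3[9])   # (7, 13] vs (5, 9]  -> True
overlaps(sub1[13], sub3[12])  # (7, 13] vs (9, 12] -> True
```

Node 13's range (7-13] thus intersects both (5-9] and (9-12], which is the examiner's reading of the claimed overlap.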
Regarding Claim 9,
Chu anticipates the following limitations:
A method of operating a distributed cache (Decomposed Chord network 200, Fig. 2) of storage comprising a range of stored data (“The nodes 110 each store files which may be shared among the nodes 110” [0024] // “a Chord network with key space… In Chord, a file is stored at a node that has a node ID that matches the file ID of the file” [0029] // ¶¶0055-56 // Figs. 2 + 10) – Decomposed Chord Network 200 comprises a plurality of nodes storing files (i.e., storing “a range of stored data”)--, the method comprising:
segmenting a first cache level (Sub-Network 2041, Fig. 2) of the storage into a plurality of first data sub-ranges of the range of stored data (Fig. 2 // “In a decomposed network, each of the sub-networks uses the same key space (M bits) and the same key space hash function” [0057] // “a P2P network may be decomposed into any number of sub-networks at any number hierarchical levels” [0080] // ¶¶0026; 0029; 0086-87) – As shown in Fig. 2 and taught in ¶0080, a decomposed network is comprised of plural sub-networks corresponding to plural “hierarchical levels” of the network. Each sub-network includes an M-bit sized key space (¶0057); comprises a plurality of nodes (¶0087); and assigns particular files to particular nodes within the sub-network by hashing the file name into the sub-network key space (¶¶0026;0029); effectively creating a respective range of files stored on each node of a sub-network. In the context of Fig. 2, examiner considers the ranges of files stored in the key space of Sub-Network 2041 as “a plurality of first data sub-ranges”--;
segmenting a second cache level (Sub-Network 2043, Fig. 2) of the storage into a plurality of second data sub-ranges of the range of stored data (¶¶0057;0087 // “each sub-network stores all files of the decomposed P2P network” [0206]) – As taught in ¶0206, each sub-network stores all files within the decomposed network 200 (i.e., each sub-network stores respective sub-ranges of “the range of stored data”). In the context of Fig. 2, examiner considers the ranges of files stored in the key space of Sub-Network 2043 as “a plurality of second data sub-ranges”--,
the plurality of first data sub-ranges and the plurality of second data sub-ranges are associated with a plurality of node devices (Fig. 2 // “subsets of the thirty-six active nodes of the Chord network 100 are members of sub-networks … respectively” [0086] // ¶0087 // Fig. 10) – As shown in Fig. 2, each sub-network includes a plurality of active node devices; and
processing (¶¶0223-224; Fig. 3) an access request (“causing a file to be stored” [0091]) associated with data within the range of stored data by accessing (Fig. 3, steps 308 + 310) one or more node devices (“one or more active nodes of the sub-network” [0095]) of the plurality of node devices (Fig. 3 // “FIG. 3 depicts one embodiment of a method for use by a node in causing a file to be stored within a decomposed P2P network” [0091] // “At step 308, an active node(s) associated with the identified sub-network(s) is identified … At step 310, a process is initiated for causing the file to be stored in each of the identified sub-networks.” [0095-99]) – As shown in Fig. 3, storing files in the decomposed network 200 includes identifying one or more active nodes (step 308) of a particular sub-network (see steps 304 + 306) and subsequently storing files to the identified active nodes (step 310) (i.e., at least “accessing” the identified active nodes of the particular sub-network).
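Since each sub-network stores all files of the decomposed network (Chu ¶0206), the Fig. 3 flow effectively places the same file independently in every identified sub-network. A minimal Python sketch of steps 304-310, using hypothetical sub-network names and node IDs:

```python
def store_file(file_id, sub_networks):
    """Sketch of Chu Fig. 3, steps 304-310: for each identified
    sub-network, identify the responsible active node (the first with
    node ID >= file ID, wrapping around the ring) and record it as
    the storage target for the file."""
    placement = {}
    for name, active_ids in sub_networks.items():
        ring = sorted(active_ids)
        placement[name] = next((n for n in ring if n >= file_id), ring[0])
    return placement

# One file (hypothetical file ID 8) lands on a different node in each sub-network:
print(store_file(8, {"2041": [6, 7, 13], "2043": [5, 9, 12]}))
# -> {'2041': 13, '2043': 9}
```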
Regarding Claim 10,
Chu anticipates the following limitations:
The method of claim 9, further comprising loading contents of the first data sub-ranges associated with respective ones of a first subset (“active nodes” [0027] // Fig. 2) of the plurality of node devices from one or more of a second subset (“seed nodes” [0053]) of the plurality of node devices (“A file is originally introduced into the Chord network by a member of the Chord network (where the member node is referred to as a seed node of the file) … The seed node will locate the first active node in the Chord network that has a node ID that is equal or greater than the file ID. The seed node then sends the file to the located node.” [0053] // Fig. 2 // ¶0027) – As taught in Chu, the decomposed network 200 includes several types of nodes, including “active” nodes (¶0027) which currently participate in the network; as well as “seed” nodes (¶0050) which store files into the network. In this context, examiner considers the concepts of active nodes and seed nodes as the claimed “first subset” and “second subset” of nodes in the decomposed network 200, respectively. One of ordinary skill in the art would accordingly understand that each file of sub-network 2041 (i.e., “contents of the first data sub-ranges”) would be received by an active node of the sub-network from a seed node. In such a context, data stored by active nodes are received from (i.e., are “loaded from”) seed nodes.
Regarding Claim 13,
Chu anticipates the following limitations:
The method of claim 9, wherein a first data sub-range associated with a first node device of a first subset of the plurality of node devices overlaps with two of the plurality of second data sub-ranges (“The seed node will locate the first active node in the Chord network that has a node ID that is equal to or greater than the file ID” [0053] // “In a decomposed network, each of the sub-networks uses the same key space (M bits) and the same key space hash function” [0057] // “In decomposed network 200, nodes 0 … and 58 are active nodes in sub-network 2041, … and nodes 2 … and 58 are active nodes in sub-network 2043” [0087]) – As previously discussed and as taught in Chu, each sub-network uses the same hash function for the same M-bit-sized key space (¶0057). Sub-networks have respective active member nodes (¶0086); and sub-networks include differing numbers of active nodes (e.g., sub-network 2042 includes 15 active nodes whereas sub-network 2043 includes 18 active nodes). As clarified in ¶0053, file IDs are used to distribute files across the key space of a sub-network, such that the file is stored to the first active node in a sub-network which has a node ID greater than or equal to the file ID. Applied to the example sub-networks of Fig. 2 (see ¶0087, where sub-network 2041 includes active nodes 6, 7 and 13; and sub-network 2043 includes active nodes 5, 9, and 12), node 13 of sub-network 2041 would store file IDs from (7, 13] (i.e., “a first data sub-range”). Nodes 9 and 12 of sub-network 2043 store file IDs from (5, 9] and (9, 12], respectively (i.e., “two of the plurality of second data sub-ranges” which “overlap with” the range of the aforementioned node 13).
Regarding Claim 15,
Chu anticipates the following limitations:
A non-transitory computer-readable storage medium including instructions (¶¶0223-225) that, when executed by a processing device (Node 1000, Fig. 10) that is coupled to a storage comprising a range of stored data (“The nodes 110 each store files which may be shared among the nodes 110” [0024] // “a Chord network with key space… In Chord, a file is stored at a node that has a node ID that matches the file ID of the file” [0029] // ¶¶0055-56 // Figs. 2 + 10) – Decomposed Chord Network 200 comprises a plurality of nodes storing files (i.e., storing “a range of stored data”)--, cause the processing device to:
segment a first cache level (Sub-Network 2041, Fig. 2) of the storage into a plurality of first data sub-ranges of the range of stored data (Fig. 2 // “In a decomposed network, each of the sub-networks uses the same key space (M bits) and the same key space hash function” [0057] // “a P2P network may be decomposed into any number of sub-networks at any number hierarchical levels” [0080] // ¶¶0026; 0029; 0086-87) – As shown in Fig. 2 and taught in ¶0080, a decomposed network is comprised of plural sub-networks corresponding to plural “hierarchical levels” of the network. Each sub-network includes an M-bit sized key space (¶0057); comprises a plurality of nodes (¶0087); and assigns particular files to particular nodes within the sub-network by hashing the file name into the sub-network key space (¶¶0026;0029); effectively creating a respective range of files stored on each node of a sub-network. In the context of Fig. 2, examiner considers the ranges of files stored in the key space of Sub-Network 2041 as “a plurality of first data sub-ranges”--;
segment a second cache level (Sub-Network 2043, Fig. 2) of the storage into a plurality of second data sub-ranges of the range of stored data (¶¶0057;0087 // “each sub-network stores all files of the decomposed P2P network” [0206]) – As taught in ¶0206, each sub-network stores all files within the decomposed network 200 (i.e., each sub-network stores respective sub-ranges of “the range of stored data”). In the context of Fig. 2, examiner considers the ranges of files stored in the key space of Sub-Network 2043 as “a plurality of second data sub-ranges”--,
the plurality of first data sub-ranges and the plurality of second data sub-ranges are associated with a plurality of node devices (Fig. 2 // “subsets of the thirty-six active nodes of the Chord network 100 are members of sub-networks … respectively” [0086] // ¶0087 // Fig. 10) – As shown in Fig. 2, each sub-network includes a plurality of active node devices; and
process, by the processing device (¶¶0223-224; Fig. 3), an access request (“causing a file to be stored” [0091]) associated with data within the range of stored data by accessing (Fig. 3, steps 308 + 310) one or more node devices (“one or more active nodes of the sub-network” [0095]) of the plurality of node devices (Fig. 3 // “FIG. 3 depicts one embodiment of a method for use by a node in causing a file to be stored within a decomposed P2P network” [0091] // “At step 308, an active node(s) associated with the identified sub-network(s) is identified … At step 310, a process is initiated for causing the file to be stored in each of the identified sub-networks.” [0095-99]) – As shown in Fig. 3, storing files in the decomposed network 200 includes identifying one or more active nodes (step 308) of a particular sub-network (see steps 304 + 306) and subsequently storing files to the identified active nodes (step 310) (i.e., at least “accessing” the identified active nodes of the particular sub-network).
Regarding Claim 16,
Chu anticipates the following limitations:
The non-transitory computer-readable storage medium of claim 15, wherein the processing device is further to load contents of the first data sub-ranges associated with respective ones of a first subset (“active nodes” [0027] // Fig. 2) of the plurality of node devices from one or more of a second subset (“seed nodes” [0053]) of the plurality of node devices (“A file is originally introduced into the Chord network by a member of the Chord network (where the member node is referred to as a seed node of the file) … The seed node will locate the first active node in the Chord network that has a node ID that is equal or greater than the file ID. The seed node then sends the file to the located node.” [0053] // Fig. 2 // ¶0027) – As taught in Chu, the decomposed network 200 includes several types of nodes, including “active” nodes (¶0027) which currently participate in the network; as well as “seed” nodes (¶0050) which store files into the network. In this context, examiner considers the concepts of active nodes and seed nodes as the claimed “first subset” and “second subset” of nodes in the decomposed network 200, respectively. One of ordinary skill in the art would accordingly understand that each file of sub-network 2041 (i.e., “contents of the first data sub-ranges”) would be received by an active node of the sub-network from a seed node. In such a context, data stored by active nodes are received from (i.e., are “loaded from”) seed nodes.
Regarding Claim 19,
Chu anticipates the following limitations:
The non-transitory computer-readable storage medium of claim 15, wherein a first data sub-range associated with a first node device of a first subset of the plurality of node devices overlaps with two of the plurality of second data sub-ranges (“The seed node will locate the first active node in the Chord network that has a node ID that is equal to or greater than the file ID” [0053] // “In a decomposed network, each of the sub-networks uses the same key space (M bits) and the same key space hash function” [0057] // “In decomposed network 200, nodes 0 … and 58 are active nodes in sub-network 2041, … and nodes 2 … and 58 are active nodes in sub-network 2043” [0087]) – As previously discussed and as taught in Chu, each sub-network uses the same hash function for the same M-bit-sized key space (¶0057). Sub-networks have respective active member nodes (¶0086); and sub-networks include differing numbers of active nodes (e.g., sub-network 2042 includes 15 active nodes whereas sub-network 2043 includes 18 active nodes). As clarified in ¶0053, file IDs are used to distribute files across the key space of a sub-network, such that the file is stored to the first active node in a sub-network which has a node ID greater than or equal to the file ID. Applied to the example sub-networks of Fig. 2 (see ¶0087, where sub-network 2041 includes active nodes 6, 7 and 13; and sub-network 2043 includes active nodes 5, 9, and 12), node 13 of sub-network 2041 would store file IDs from (7, 13] (i.e., “a first data sub-range”). Nodes 9 and 12 of sub-network 2043 store file IDs from (5, 9] and (9, 12], respectively (i.e., “two of the plurality of second data sub-ranges” which “overlap with” the range of the aforementioned node 13).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 3, 11, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Chu further in view of a 2007 IEEE publication authored by Cai et al. (M. Cai and K. Hwang, "Distributed Aggregation Algorithms with Load-Balancing for Scalable Grid Resource Monitoring," 2007 IEEE International Parallel and Distributed Processing Symposium, Long Beach, CA, USA, 2007, pp. 1-10) (hereafter referred to as Cai).
Regarding Claim 3,
Chu discloses the following limitations:
The apparatus of claim 1 (see Claim 1 limitation mappings above), wherein a third cache level (Sub-Network 2042, Fig. 2) is segmented into a plurality of third data sub-ranges of the range of stored data (Fig. 2 // ¶¶0026; 0029; 0057; 0080; 0086-87) – As previously discussed (see Claim 1 limitation mappings above) and as shown in Fig. 2, decomposed network 200 additionally includes a third sub-network (e.g., Sub-Network 2042). In the context of Fig. 2, examiner considers the ranges of files stored in the key space of Sub-Network 2042 as “a plurality of third data sub-ranges”, and
each of the plurality of third data sub-ranges is associated with one of a third subset of the plurality of node devices (¶¶0086-87 // “In a decomposed network, each of the sub-networks uses the same key space (M bits) and the same key space hash function” [0057] // “In decomposed network 200 … nodes 3 … and 62 are active nodes in sub-network 2042, and nodes 2 … and 58 are active nodes in sub-network 2043” [0087]) – As previously discussed and as taught in Chu, each sub-network uses the same hash function for the same M-bit-sized key space (¶0057). Sub-networks have respective active member nodes (¶0086); and sub-networks include differing numbers of active nodes (e.g., sub-network 2042 includes 15 active nodes whereas sub-network 2043 includes 18 active nodes).
wherein … the plurality of second data sub-ranges is smaller than … the plurality of third data sub-ranges (Fig. 2 // ¶¶0057; 0087) – As discussed above, sub-networks each use the same M bit-sized key space but include differing numbers of active nodes. In the context of Chu, an active node in sub-network 2042, stores, on average, 1/15th of the entire key space; whereas an active node in sub-network 2043 stores, on average, 1/18th of the entire key space because sub-network 2042 includes 15 active nodes whereas sub-network 2043 includes 18 active nodes. Thus, the average range of a key space stored by an active node in sub-network 2043 (i.e., an average size of “the plurality of second data sub-ranges”) is smaller than an average range of a key space stored by an active node in sub-network 2042 (i.e., is smaller than an average size of “the plurality of third data sub-ranges”).
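The average-share arithmetic above can be written out directly; the key-space size `M = 6` below is illustrative, since Chu only requires that all sub-networks share one M-bit key space (¶0057):

```python
# Average per-node share of the shared M-bit key space, per Chu ¶¶0057, 0087.
M = 6
keyspace = 2 ** M            # 64 possible IDs (illustrative size)
avg_2042 = keyspace / 15     # sub-network 2042: 15 active nodes
avg_2043 = keyspace / 18     # sub-network 2043: 18 active nodes
assert avg_2043 < avg_2042   # more nodes -> smaller average sub-range
```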
Chu does not disclose an even balance of active nodes around the key space of a given sub-network, and thus does not disclose the following limitations:
each of the plurality of second data sub-ranges is smaller than each of the plurality of third data sub-ranges
However, Cai discloses the following limitations:
each of the plurality of second data sub-ranges is smaller than each of the plurality of third data sub-ranges (“distributed aggregation trees (DAT) on a structured P2P network like Chord … To balance the DAT trees, we propose a balanced routing algorithm … enables the construction of almost completely balanced DATs, when nodes are evenly distributed in the Chord identifier space” [Abstract, pg. 1] // Section 3.4-3.5) – As taught in Cai, a “distributed aggregation tree” (DAT) comprises a plurality of nodes organized in a ring implemented in Chord, similar to the sub-networks of Chu. Examiner accordingly considers a DAT of Cai as analogous to a sub-network of Chu. As taught in Cai, balanced routing algorithms enable construction of DATs such that nodes are “evenly distributed in the Chord identifier space”. Applying the concept of a balanced DAT to Chu Fig. 2, one of ordinary skill in the art would understand that each sub-range of data stored by an active node in the Chu sub-network 2043 (e.g., 18 active nodes) would be smaller than each sub-range of data stored by an active node in sub-network 2042 (e.g., 15 active nodes).
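Under Cai's balanced placement, the "each is smaller than each" comparison (rather than merely on average) follows because evenly distributed node IDs give every node an identical share of the key space. A minimal Python sketch, with an illustrative 64-ID key space:

```python
def balanced_range_sizes(keyspace, n_nodes):
    """Under Cai's balanced placement (nodes evenly distributed in the
    Chord identifier space), every node covers an identical share of
    the key space."""
    return [keyspace / n_nodes] * n_nodes

second = balanced_range_sizes(64, 18)  # e.g., Chu sub-network 2043
third = balanced_range_sizes(64, 15)   # e.g., Chu sub-network 2042
# Every second sub-range is smaller than every third sub-range:
assert all(s < t for s in second for t in third)
```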
Chu and Cai are considered analogous to the claimed invention because they both relate to the same field of storing and locating data in a distributed cache node ring. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chu with the teachings of Cai to realize a storage device including second and third cache hierarchies implemented in a balanced node ring. Balancing nodes around a key space enables improved estimation of global resource monitoring by providing a scalable, adaptive, and fair load balancing mechanism for distributed aggregation, as disclosed in Cai Section 1: “Distributed aggregation is an essential building block for global resource monitoring in large-scale Grids. By employing a distributed aggregation tree (DAT), the global resource status can be calculated recursively by applying an aggregate function on a subset of local status … Load balancing is thus essential for both workload fairness and system scalability.” [Section 1, pgs. 1-2]
Regarding Claim 11,
Chu discloses the following limitations:
The method of claim 9 (see Claim 9 limitation mappings above), further comprising:
assigning a third subset of the plurality of node devices to a third cache level (Sub-Network 2042, Fig. 2) of the storage (¶¶0086-87 // “In a decomposed network, each of the sub-networks uses the same key space (M bits) and the same key space hash function” [0057] // “In decomposed network 200 … nodes 3 … and 62 are active nodes in sub-network 2042, and nodes 2 … and 58 are active nodes in sub-network 2043” [0087]) – As previously discussed and as taught in Chu, each sub-network uses the same hash function for the same M-bit-sized key space (¶0057). Sub-networks have respective active member nodes (¶0086); and sub-networks include differing numbers of active nodes (e.g., sub-network 2042 includes 15 active nodes whereas sub-network 2043 includes 18 active nodes),
wherein the third cache level is segmented into a plurality of third data sub-ranges of the range of stored data and each of the third subset of the plurality of node devices is associated with one of the plurality of third data sub-ranges (Fig. 2 // ¶¶0026; 0029; 0057; 0080; 0086-87) – As previously discussed (see Claim 9 limitation mappings above) and as shown in Fig. 2, decomposed network 200 additionally includes a third sub-network (e.g., Sub-Network 2042). In the context of Fig. 2, examiner considers the ranges of files stored in the key space of Sub-Network 2042 as “a plurality of third data sub-ranges”,
wherein … the plurality of second data sub-ranges is smaller than … the plurality of third data sub-ranges (Fig. 2 // ¶¶0057; 0087) – As discussed above, sub-networks each use the same M bit-sized key space but include differing numbers of active nodes. In the context of Chu, an active node in sub-network 2042, stores, on average, 1/15th of the entire key space; whereas an active node in sub-network 2043 stores, on average, 1/18th of the entire key space because sub-network 2042 includes 15 active nodes whereas sub-network 2043 includes 18 active nodes. Thus, the average range of a key space stored by an active node in sub-network 2043 (i.e., an average size of “the plurality of second data sub-ranges”) is smaller than an average range of a key space stored by an active node in sub-network 2042 (i.e., is smaller than an average size of “the plurality of third data sub-ranges”).
Chu does not disclose an even balance of active nodes around the key space of a given sub-network, and thus does not disclose the following limitations:
each of the plurality of second data sub-ranges is smaller than each of the plurality of third data sub-ranges
However, Cai discloses the following limitations:
each of the plurality of second data sub-ranges is smaller than each of the plurality of third data sub-ranges (“distributed aggregation trees (DAT) on a structured P2P network like Chord … To balance the DAT trees, we propose a balanced routing algorithm … enables the construction of almost completely balanced DATs, when nodes are evenly distributed in the Chord identifier space” [Abstract, pg. 1] // Section 3.4-3.5) – As taught in Cai, a “distributed aggregation tree” (DAT) comprises a plurality of nodes organized in a ring implemented in Chord, similar to the sub-networks of Chu. Examiner accordingly considers a DAT of Cai as analogous to a sub-network of Chu. As taught in Cai, balanced routing algorithms enable construction of DATs such that nodes are “evenly distributed in the Chord identifier space”. Applying the concept of a balanced DAT to Chu Fig. 2, one of ordinary skill in the art would understand that each sub-range of data stored by an active node in the Chu sub-network 2043 (e.g., 18 active nodes) would be smaller than each sub-range of data stored by an active node in sub-network 2042 (e.g., 15 active nodes).
Chu and Cai are considered analogous to the claimed invention because they both relate to the same field of storing and locating data in a distributed cache node ring. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chu with the teachings of Cai to realize a storage device including second and third cache hierarchies implemented in a balanced node ring. Balancing nodes around a key space enables improved estimation of global resource monitoring by providing a scalable, adaptive, and fair load balancing mechanism for distributed aggregation, as disclosed in Cai Section 1: “Distributed aggregation is an essential building block for global resource monitoring in large-scale Grids. By employing a distributed aggregation tree (DAT), the global resource status can be calculated recursively by applying an aggregate function on a subset of local status … Load balancing is thus essential for both workload fairness and system scalability.” [Section 1, pgs. 1-2]
Regarding Claim 17,
Chu discloses the following limitations:
The non-transitory computer-readable storage medium of claim 15 (see Claim 15 limitation mappings above), wherein the processing device is further to:
assign a third subset of the plurality of node devices to a third cache level (Sub-Network 2042, Fig. 2) of the storage (¶¶0086-87 // “In a decomposed network, each of the sub-networks uses the same key space (M bits) and the same key space hash function” [0057] // “In decomposed network 200 … nodes 3 … and 62 are active nodes in sub-network 2042, and nodes 2 … and 58 are active nodes in sub-network 2043” [0087]) – As previously discussed and as taught in Chu, each sub-network uses the same hash function for the same M-bit-sized key space (¶0057). Sub-networks have respective active member nodes (¶0086); and sub-networks include differing numbers of active nodes (e.g., sub-network 2042 includes 15 active nodes whereas sub-network 2043 includes 18 active nodes),
wherein the third cache level is segmented into a plurality of third data sub-ranges of the range of stored data and each of the third subset of the plurality of node devices is associated with one of the plurality of third data sub-ranges (Fig. 2 // ¶¶0026; 0029; 0057; 0080; 0086-87) – As previously discussed (see Claim 15 limitation mappings above) and as shown in Fig. 2, decomposed network 200 additionally includes a third sub-network (e.g., Sub-Network 2042). In the context of Fig. 2, examiner considers the ranges of files stored in the key space of Sub-Network 2042 as “a plurality of third data sub-ranges”,--
wherein … the plurality of second data sub-ranges is smaller than … the plurality of third data sub-ranges (Fig. 2 // ¶¶0057; 0087) – As discussed above, sub-networks each use the same M bit-sized key space but include differing numbers of active nodes. In the context of Chu, an active node in sub-network 2042, stores, on average, 1/15th of the entire key space; whereas an active node in sub-network 2043 stores, on average, 1/18th of the entire key space because sub-network 2042 includes 15 active nodes whereas sub-network 2043 includes 18 active nodes. Thus, the average range of a key space stored by an active node in sub-network 2043 (i.e., an average size of “the plurality of second data sub-ranges”) is smaller than an average range of a key space stored by an active node in sub-network 2042 (i.e., is smaller than an average size of “the plurality of third data sub-ranges”).
Chu does not disclose an even balance of active nodes around the key space of a given sub-network, and thus does not disclose the following limitations:
each of the plurality of second data sub-ranges is smaller than each of the plurality of third data sub-ranges
However, Cai discloses the following limitations:
each of the plurality of second data sub-ranges is smaller than each of the plurality of third data sub-ranges (“distributed aggregation trees (DAT) on a structured P2P network like Chord … To balance the DAT trees, we propose a balanced routing algorithm … enables the construction of almost completely balanced DATs, when nodes are evenly distributed in the Chord identifier space” [Abstract, pg. 1] // Section 3.4-3.5) – As taught in Cai, a “distributed aggregation tree” (DAT) comprises a plurality of nodes organized in a ring implemented in Chord, similar to the sub-networks of Chu. Examiner accordingly considers a DAT of Cai as analogous to a sub-network of Chu. As taught in Cai, balanced routing algorithms enable construction of DATs such that nodes are “evenly distributed in the Chord identifier space”. Applying the concept of a balanced DAT to Chu Fig. 2, one of ordinary skill in the art would understand that each sub-range of data stored by an active node in the Chu sub-network 2043 (e.g., 18 active nodes) would be smaller than each sub-range of data stored by an active node in sub-network 2042 (e.g., 15 active nodes).
Chu and Cai are considered analogous to the claimed invention because they both relate to the same field of storing and locating data in a distributed cache node ring. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chu with the teachings of Cai to realize a storage device including second and third cache hierarchies implemented in a balanced node ring. Balancing nodes around a key space enables improved estimation of global resource monitoring by providing a scalable, adaptive, and fair load balancing mechanism for distributed aggregation, as disclosed in Cai Section 1: “Distributed aggregation is an essential building block for global resource monitoring in large-scale Grids. By employing a distributed aggregation tree (DAT), the global resource status can be calculated recursively by applying an aggregate function on a subset of local status … Load balancing is thus essential for both workload fairness and system scalability.” [Section 1, pgs. 1-2]
Claims 4, 12, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Chu further in view of Cen (US 20100299553 A1)(hereafter referred to as Cen).
Regarding Claim 4,
Chu discloses the following limitations:
The apparatus of claim 1, wherein, responsive to a failure of a failed node device (“the successor of a node fails” [0050]) of a second subset of the plurality of node devices (“active nodes” [0027] // Fig. 2) – As taught in ¶0027 and shown in Fig. 2, decomposed network 200 includes both “active nodes” (i.e., “a second subset of the plurality of node devices”) and inactive nodes. Nodes are arranged in an ordered ring by node ID; and each active node has a successor node. Nodes also experience failure (¶0050). One of ordinary skill in the art would accordingly understand that the failed successor node of ¶0050 would be an active node at the time of failure.--,
a replacement node device (“the successor node of its successor node” [0050]) … by associating, with the replacement node device, a second data sub-range of the plurality of second data sub-ranges that was associated with the failed node device (“Chord also supports a Chord recovery procedure for enabling recovery from node failures … when a node receives a heartbeat message from its successor node it knows the identity of the successor node and its successor node and, thus, when the successor of a node fails, the node can initiate a connection to the successor node of its successor node and the Chord ring is maintained” [0050] // ¶¶0027; 0053) – As taught in Chu ¶0050, a given node (e.g., A; for the purposes of this limitation mapping) has a successor node (e.g., B) which corresponds to the next node ID in the ring which is greater than the node ID of node A (¶0027). Node B also has a respective successor node (e.g., C). When node B fails, node A initiates connection to node C, and the Chord ring is accordingly maintained. In the context of Chu, node C corresponds to “a replacement node device” for node B after node A initiates the connection to node C. As taught in ¶0053, files are stored in the network by hashing a filename into a file ID, subsequently identifying and storing the file to the first node ID equal to or greater than the file ID. Thus, in the context of Chu ¶0050, all files which would have previously been stored at node B instead would be stored in its successor node C after node A establishes the connection to node C, thereby “associating” the portion of the key space (i.e., “a second data sub-range of the plurality of second data sub-ranges”) previously assigned to node B instead with node C (i.e., “that was associated with the failed node device”).
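The A/B/C recovery scenario discussed above can be sketched in a few lines of Python; the ring contents below are illustrative, not from the reference:

```python
def recover_from_failure(ring, failed):
    """Sketch of the Chord recovery in Chu ¶0050: when node B fails,
    its predecessor A connects to B's successor C, so the key range
    (A, B] that B held is absorbed into C's range, giving C (A, C]."""
    order = sorted(ring)
    i = order.index(failed)
    predecessor = order[i - 1]               # node A
    successor = order[(i + 1) % len(order)]  # node C, the "replacement"
    surviving = [n for n in order if n != failed]
    return predecessor, successor, surviving

# Illustrative ring: node 13 fails; node 25 takes over the range (7, 13].
print(recover_from_failure([6, 7, 13, 25], 13))  # -> (7, 25, [6, 7, 25])
```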
Chu does not appear to explicitly disclose the following limitations:
a replacement node device of a first subset of the plurality of node devices is moved from the first subset of the plurality of node devices to the second subset of the plurality of node devices
However, Cen discloses the following limitations:
a replacement node device of a first subset (Backup Cache Service Nodes, Fig. 12) of the plurality of node devices (Fig. 12) is moved from the first subset of the plurality of node devices to the second subset (Master Cache Service Nodes, Fig. 12) of the plurality of node devices (Fig. 4 // “If the master cache service node has failed, at 410, the cache client selects a backup cache service node as the new master cache service node … and sends future cache processing requests to the backup cache service node” [0036]) – As taught in Cen, when a “master cache” type of service node fails, a “backup cache” type of service node is selected in order to replace the failed master cache node. In this context, once selected, the backup cache type of service node becomes a master cache type of service node and receives future requests (i.e., “a replacement node device” which is moved from a “backup cache” type of node to an active node).
Chu and Cen are considered analogous to the claimed invention because they all relate to the same field of managing and replacing failed nodes in a distributed cache environment. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chu with the teachings of Cen and realize an apparatus whereby a replacement node is moved from a first subset of nodes into a second subset of nodes in order to replace a failed node. Maintaining a subset of nodes, such as backup nodes, to be used as replacements in case of failure improves performance by reducing cache service node unavailability and reducing data loss as a result of cache node failure, as disclosed in Cen ¶0026: “In the embodiments discussed herein, when a master cache service node failed to perform cache data processing … chooses a backup cache service node and sends the operation request to the backup cache service node. The application … solves the problems of cache service node unavailability and data loss as a result of cache service node failures.” [0026]
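For illustration only, Cen's failover step (Fig. 4, ¶0036), in which a backup cache service node is promoted into the master set upon a master's failure, may be sketched as follows. The set names and the `promote_backup` helper are hypothetical and are not drawn from Cen.

```python
# Illustrative sketch of Cen Fig. 4 / ¶0036: on master failure, a node
# is moved from the backup subset to the master (active) subset.
# Names are hypothetical, not taken from Cen.

def promote_backup(masters, backups, failed):
    """Move one node from the backup subset to the master subset."""
    masters.discard(failed)      # remove the failed master node
    replacement = backups.pop()  # select a backup cache service node
    masters.add(replacement)     # the backup becomes the new master
    return replacement           # future requests go to this node

masters = {"m1", "m2"}
backups = {"b1"}
new_master = promote_backup(masters, backups, "m1")
assert new_master == "b1"
assert masters == {"m2", "b1"}  # replacement moved into the second subset
assert not backups              # and out of the first subset
```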
Regarding Claim 12,
Chu discloses the following limitations:
The method of claim 9 (see Claim 9 limitation mappings above), further comprising: responsive to a failure of a failed node device (“the successor of a node fails” [0050]) of a second subset of the plurality of node devices (“active nodes” [0027] // Fig. 2) – As taught in ¶0027 and shown in Fig. 2, decomposed network 200 includes both “active nodes” (i.e., “a second subset of the plurality of node devices”) and inactive nodes. Nodes are arranged in an ordered ring by node ID; and each active node has a successor node. Nodes also experience failure (¶0050). One of ordinary skill in the art would accordingly understand that the failed successor node of ¶0050 would be an active node at the time of failure.--,
… a replacement node device (“the successor node of its successor node” [0050])… by associating, with the replacement node device, the second data sub-range that was associated with the failed node device (“Chord also supports a Chord recovery procedure for enabling recovery from node failures … when a node receives a heartbeat message from its successor node it knows the identity of the successor node and its successor node and, thus, when the successor of a node fails, the node can initiate a connection to the successor node of its successor node and the Chord ring is maintained” [0050] // ¶¶0027; 0053) – As taught in Chu ¶0050, a given node (e.g., A; for the purposes of this limitation mapping) has a successor node (e.g., B) which corresponds to the next node ID in the ring which is greater than the node ID of node A (¶0027). Node B also has a respective successor node (e.g., C). When node B fails, node A initiates connection to node C, and the Chord ring is accordingly maintained. In the context of Chu, node C corresponds to “a replacement node device” for node B after node A initiates the connection to node C. As taught in ¶0053, files are stored in the network by hashing a filename into a file ID, subsequently identifying and storing the file to the first node ID equal to or greater than the file ID. Thus, in the context of Chu ¶0050, all files which would have previously been stored at node B instead would be stored in its successor node C after node A establishes the connection to node C, thereby “associating” the portion of the key space (i.e., “a second data sub-range of the plurality of second data sub-ranges”) previously assigned to node B instead with node C (i.e., “that was associated with the failed node device”).
Chu does not appear to explicitly disclose the following limitations:
moving a replacement node device of a first subset of the plurality of node devices from the first subset of the plurality of node devices to the second subset of the plurality of node devices
However, Cen discloses the following limitations:
moving a replacement node device of a first subset (Backup Cache Service Nodes, Fig. 12) of the plurality of node devices from the first subset of the plurality of node devices to the second subset (Master Cache Service Nodes, Fig. 12) of the plurality of node devices (Fig. 4 // “If the master cache service node has failed, at 410, the cache client selects a backup cache service node as the new master cache service node … and sends future cache processing requests to the backup cache service node” [0036]) – As taught in Cen, when a “master cache” type of service node fails, a “backup cache” type of service node is selected in order to replace the failed master cache node. In this context, once selected, the backup cache type of service node becomes a master cache type of service node and receives future requests (i.e., “a replacement node device” which is moved from a “backup cache” type of node to an active node).
Chu and Cen are considered analogous to the claimed invention because they all relate to the same field of managing and replacing failed nodes in a distributed cache environment. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chu with the teachings of Cen and realize a method whereby a replacement node is moved from a first subset of nodes into a second subset of nodes in order to replace a failed node. Maintaining a subset of nodes, such as backup nodes, to be used as replacements in case of failure improves performance by reducing cache service node unavailability and reducing data loss as a result of cache node failure, as disclosed in Cen ¶0026: “In the embodiments discussed herein, when a master cache service node failed to perform cache data processing … chooses a backup cache service node and sends the operation request to the backup cache service node. The application … solves the problems of cache service node unavailability and data loss as a result of cache service node failures.” [0026]
Regarding Claim 18,
Chu discloses the following limitations:
The non-transitory computer-readable storage medium of claim 15 (see Claim 15 limitation mappings above), wherein the processing device is further to:
responsive to a failure of a failed node device (“the successor of a node fails” [0050]) of a second subset of the plurality of node devices (“active nodes” [0027] // Fig. 2) – As taught in ¶0027 and shown in Fig. 2, decomposed network 200 includes both “active nodes” (i.e., “a second subset of the plurality of node devices”) and inactive nodes. Nodes are arranged in an ordered ring by node ID; and each active node has a successor node. Nodes also experience failure (¶0050). One of ordinary skill in the art would accordingly understand that the failed successor node of ¶0050 would be an active node at the time of failure.--,
… a replacement node device (“the successor node of its successor node” [0050])… by associating, with the replacement node device, the second data sub-range that was associated with the failed node device (“Chord also supports a Chord recovery procedure for enabling recovery from node failures … when a node receives a heartbeat message from its successor node it knows the identity of the successor node and its successor node and, thus, when the successor of a node fails, the node can initiate a connection to the successor node of its successor node and the Chord ring is maintained” [0050] // ¶¶0027; 0053) – As taught in Chu ¶0050, a given node (e.g., A; for the purposes of this limitation mapping) has a successor node (e.g., B) which corresponds to the next node ID in the ring which is greater than the node ID of node A (¶0027). Node B also has a respective successor node (e.g., C). When node B fails, node A initiates connection to node C, and the Chord ring is accordingly maintained. In the context of Chu, node C corresponds to “a replacement node device” for node B after node A initiates the connection to node C. As taught in ¶0053, files are stored in the network by hashing a filename into a file ID, subsequently identifying and storing the file to the first node ID equal to or greater than the file ID. Thus, in the context of Chu ¶0050, all files which would have previously been stored at node B instead would be stored in its successor node C after node A establishes the connection to node C, thereby “associating” the portion of the key space (i.e., “a second data sub-range of the plurality of second data sub-ranges”) previously assigned to node B instead with node C (i.e., “that was associated with the failed node device”).
Chu does not appear to explicitly disclose the following limitations:
move a replacement node device of a first subset of the plurality of node devices from the first subset of the plurality of node devices to the second subset of the plurality of node devices
However, Cen discloses the following limitations:
move a replacement node device of a first subset (Backup Cache Service Nodes, Fig. 12) of the plurality of node devices from the first subset of the plurality of node devices to the second subset (Master Cache Service Nodes, Fig. 12) of the plurality of node devices (Fig. 4 // “If the master cache service node has failed, at 410, the cache client selects a backup cache service node as the new master cache service node … and sends future cache processing requests to the backup cache service node” [0036]) – As taught in Cen, when a “master cache” type of service node fails, a “backup cache” type of service node is selected in order to replace the failed master cache node. In this context, once selected, the backup cache type of service node becomes a master cache type of service node and receives future requests (i.e., “a replacement node device” which is moved from a “backup cache” type of node to an active node).
Chu and Cen are considered analogous to the claimed invention because they all relate to the same field of managing and replacing failed nodes in a distributed cache environment. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chu with the teachings of Cen and realize a non-transitory computer-readable storage medium whereby a replacement node is moved from a first subset of nodes into a second subset of nodes in order to replace a failed node. Maintaining a subset of nodes, such as backup nodes, to be used as replacements in case of failure improves performance by reducing cache service node unavailability and reducing data loss as a result of cache node failure, as disclosed in Cen ¶0026: “In the embodiments discussed herein, when a master cache service node failed to perform cache data processing … chooses a backup cache service node and sends the operation request to the backup cache service node. The application … solves the problems of cache service node unavailability and data loss as a result of cache service node failures.” [0026]
Claims 6-8, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Chu further in view of Narendra et al. (US 11741078 B1)(hereafter referred to as Narendra).
Regarding Claim 6,
Chu discloses the following limitations:
The apparatus of claim 1 (see Claim 1 limitation mappings above),
While Chu describes the data stored within decomposed network 200 as “files”, Chu does not provide specific detail regarding the type of data stored within the files of the decomposed network. Specifically, Chu does not explicitly disclose the following limitations:
wherein the range of stored data comprises a plurality of keys,
However, Narendra discloses the following limitations:
wherein the range of stored data comprises a plurality of keys (Index Service 112, Fig. 1 // “The index service 112 may include a set of persistent storage nodes 120 to store the key map data that represents mappings of data object keys to storage locations at which the data objects are stored in the object storage service 114 … the index service 112 may be employed to determine the specific storage location of the data object identified by key.” [Col. 12, final ¶ -- Col. 13, 1st ¶] // Col. 8, lines 20-45) – As shown in Narendra Fig. 1, an index service 112 comprises a plurality of key cache nodes 122 organized as a ring and storing data objects in a key space (Col. 8). Examiner accordingly considers index service 112 comprising key cache nodes 122 of Narendra Fig. 1 as analogous to a sub-network of Chu comprising respective active nodes. As taught in Narendra, data stored within key cache nodes 122 includes mappings indexed by a “data object key” (i.e., at least “a plurality of keys”)--
Chu and Narendra are considered analogous to the claimed invention because they all relate to the same field of indexing and locating data within a cache hierarchy comprised of a plurality of nodes storing data within a key space. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chu with the teachings of Narendra and realize a distributed cache storage which stores a plurality of object keys. Caching object keys reduces the amount of time required to retrieve a data object by enabling object keys to be accessed via a high-speed data store as opposed to a primary storage location, as disclosed in Narendra Col. 2: “A cache is a high-speed data store which stores a subset of data, typically transient in nature, so that future requests for that data are served up faster than is possible by accessing the data’s primary storage location.” [Col. 2, 1st ¶]
The combined teachings of Chu and Narendra disclose the following limitations:
and wherein the processing device (Chu, Node 1000, Fig. 10) is further to:
receive the access request to access a stored key of the plurality of keys (Chu, “A file is originally introduced into the Chord network by … a seed node of the file” [0053]) – As taught in Chu, seed nodes receive files (i.e., “the access request”) to be introduced into the decomposed network. As discussed above with respect to Narendra, one of ordinary skill in the art would understand that introducing a file into the decomposed network is analogous to introducing a key into the network (i.e., “to access a stored key of the plurality of keys”)--;
generate a hash of the key (Chu, “The seed node obtains the hashed value of the filename (i.e., the file ID), and searches for the node having the same node ID as the file ID” [0053] // ¶0057) – As taught in Chu, the seed node performs a hash of a filename to obtain a file ID (i.e., “a hash of the key”)--,
wherein the hash is associated with a target second data sub-range of the plurality of second data sub-ranges (Chu, ¶0057) and access a second node device of a second subset of the plurality of node devices that is associated with the target second data sub-range to retrieve the stored key (Chu, “The seed node will locate the first active node in the Chord network that has a node ID that is equal to or greater than the file ID. The seed node sends the file to the located node.” [0053] // “each of the sub-networks uses the same key space (M bits) and the same key space hash function” [0057] // Fig. 4 // ¶0118) – As taught in Chu ¶0053, a seed node sends a file to a particular node within the network by comparing the hash to node IDs of active nodes of the network. As clarified in Chu ¶0057, each sub-network of a decomposed network employs the same hash function in the same-sized key space. In the context of Chu, a file is sent to the active node (e.g., of the “second” cache level; i.e., Sub-Network 2043) which is associated with “a target second data sub-range” that encompasses the obtained file ID. As shown in Chu Fig. 4 and taught in ¶0118, using the file ID and node ID paradigm to identify an active node for file storage enables future retrieval of the file using the same file ID and node ID used to introduce the file into the network (i.e., “to retrieve the stored key”).
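For illustration only, the file-placement scheme mapped above (Chu ¶¶0053, 0057: hash a filename into a shared M-bit key space, then store or retrieve at the first active node whose node ID is equal to or greater than the file ID) may be sketched as follows. The 16-bit key space, the SHA-1 hash, and the function names are hypothetical illustrative choices, not drawn from Chu.

```python
# Illustrative sketch of Chu ¶¶0053, 0057: filename -> file ID -> node.
# The hash function, key-space size, and names are hypothetical.
import bisect
import hashlib

M = 16  # key-space size in bits (illustrative)

def file_id(filename):
    # Hash the filename into the M-bit key space shared by all
    # sub-networks (cf. ¶0057).
    digest = hashlib.sha1(filename.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** M)

def locate(active_node_ids, fid):
    # First active node with node ID >= file ID, wrapping around the
    # ring when the file ID exceeds every node ID (cf. ¶0053).
    nodes = sorted(active_node_ids)
    i = bisect.bisect_left(nodes, fid)
    return nodes[i % len(nodes)]

fid = file_id("example.txt")
node = locate([4096, 16384, 40000, 60000], fid)
assert 0 <= fid < 2 ** M
assert node in {4096, 16384, 40000, 60000}
```

Because each sub-network uses the same key space and hash function (¶0057), the same `locate` call identifies the storing node at every cache level, which is what enables later retrieval using the same file ID.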
Regarding Claim 7,
The same motivation to combine provided in Claim 6 is equally applicable to Claim 7. The combined teachings of Chu and Narendra disclose the following limitations:
The apparatus of claim 6, wherein, responsive to determining that the stored key is not present in the target second data sub-range, returning an error indicating that the stored key is not available (Chu, “files that are stored at the failed nodes are lost” [0051] // “when the successor node of a node fails, the node can initiate a connection to the successor node of its successor node and the Chord ring is maintained” [0050] // ¶0180) – As taught in Chu, “heartbeat messages” are exchanged between active nodes and enable identification of failed nodes (¶0050) and accordingly lost files on failed nodes (¶0051). When an active node identifies that an old successor node has failed, a connection to a new successor node is initiated (¶0050), effectively signalling to the new successor node that its predecessor has failed (i.e., “an error indicating” that files are no longer available on the old successor node). As additionally disclosed in Chu ¶0180, the heartbeat messages enable nodes to broadcast and construct an “updated network map” whenever nodes and associated heartbeats fail, signalling to each node in the network any time a node has failed and thus corresponding data is unavailable.
Regarding Claim 8,
Chu discloses the following limitations:
The apparatus of claim 1 (see Claim 1 limitation mappings above),
While Chu describes the data stored within decomposed network 200 as “files”, Chu does not provide specific detail regarding the type of data stored within the files of the decomposed network. Specifically, Chu does not explicitly disclose the following limitations:
wherein the range of stored data comprises a plurality of keys,
However, Narendra discloses the following limitations:
wherein the range of stored data comprises a plurality of keys (Index Service 112, Fig. 1 // “The index service 112 may include a set of persistent storage nodes 120 to store the key map data that represents mappings of data object keys to storage locations at which the data objects are stored in the object storage service 114 … the index service 112 may be employed to determine the specific storage location of the data object identified by key.” [Col. 12, final ¶ -- Col. 13, 1st ¶] // Col. 8, lines 20-45) – As shown in Narendra Fig. 1, an index service 112 comprises a plurality of key cache nodes 122 organized as a ring and storing data objects in a key space (Col. 8). Examiner accordingly considers index service 112 comprising key cache nodes 122 of Narendra Fig. 1 as analogous to a sub-network of Chu comprising respective active nodes. As taught in Narendra, data stored within key cache nodes 122 includes mappings indexed by a “data object key” (i.e., at least “a plurality of keys”)--
Chu and Narendra are considered analogous to the claimed invention because they all relate to the same field of indexing and locating data within a cache hierarchy comprised of a plurality of nodes storing data within a key space. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chu with the teachings of Narendra and realize a distributed cache storage which stores a plurality of object keys. Caching object keys reduces the amount of time required to retrieve a data object by enabling object keys to be accessed via a high-speed data store as opposed to a primary storage location, as disclosed in Narendra Col. 2: “A cache is a high-speed data store which stores a subset of data, typically transient in nature, so that future requests for that data are served up faster than is possible by accessing the data’s primary storage location.” [Col. 2, 1st ¶]
The combined teachings of Chu and Narendra disclose the following limitations:
wherein a processing device (Chu, Node 1000, Fig. 10 // ¶0091) of a second node device of a second subset (Chu, “seed node” [0053]) of the plurality of node devices is to:
receive a request to write a stored key of the plurality of keys (Chu, “A file is originally introduced into the Chord network by … a seed node of the file” [0053]) – As taught in Chu, “seed” type of nodes (i.e., “a second subset” of nodes) receive files to be introduced into the decomposed network (i.e., “a request to write a stored key”)--. As discussed above with respect to Narendra, one of ordinary skill in the art would understand that introducing a file into the decomposed network is analogous to introducing a key into the network--;
the stored key associated with a second data sub-range of the plurality of second data sub-ranges that is associated with the second node device and with a first data sub-range of the plurality of first data sub-ranges that is associated with a first node device of a first subset (Chu, “active nodes” [0027]) of the plurality of node devices (Chu, “The seed node obtains the hashed value of the filename (i.e., the file ID), and searches for the node having the same node ID as the file ID” [0053] // ¶¶0027; 0057; 0206 // Fig. 2) – As taught in Chu ¶0027, “active” type nodes (i.e., “a first subset” of nodes) participate in the network while inactive nodes have not yet joined the network. When a file is introduced into the network, a seed node identifies an active node to store the file using a file ID and a node ID (¶0053). Each sub-network stores each file (¶0206) and uses the same-sized key space and the same hash function to locate files (¶0057). In the context of Chu, a file is written into each sub-network, including the “second” sub-network (i.e., Sub-Network 2043) and the “first” sub-network (i.e., Sub-Network 2041), by a seed node. An active node in each sub-network is identified by the seed node using a relation between a file ID and a node ID. The active node in the first sub-network (i.e., “a first node device of a first subset”) where the file is written is thus associated with “a first data sub-range” which includes the obtained file ID. The active node in the second sub-network where the file is written is associated with “a second data sub-range” which includes the file ID; and the second data sub-range was identified by (i.e., is “associated with”) a seed node (i.e., “the second node device”).--;
update a value of the stored key (Chu, “file management information” [0104]) within the second data sub-range (Chu, “The file management information may be included within the file … file management information for a file includes one or more of: … (3) a time-stamp for enabling expiration of the file within individual sub-networks” [0104]) – As taught in Chu, each time a file is introduced into an individual sub-network, a time-stamp is added to “file management information” included within the file. One of ordinary skill in the art would accordingly understand that a file written into the second sub-network would include a time-stamp which enables expiration of the file within the second sub-network--; and
access the first node device that is associated with the first data sub-range to update the stored key within the first data sub-range (Chu, “The seed node sends the file to the located node” [0053]) – As discussed above, a seed node sends the file to the identified active node in each sub-network, including the first sub-network (i.e., the active node in the first sub-network receiving the file is at least “access[ed]”).
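For illustration only, the write path mapped above (Chu ¶¶0053, 0104: a seed node sends the file to the located node in each sub-network, and file management information carries a time-stamp enabling per-sub-network expiration) may be sketched as follows. The class names, field names, and TTL structure are hypothetical and are not drawn from Chu.

```python
# Illustrative sketch of Chu ¶¶0053, 0104: a seed node writes a file to
# the located node in every sub-network, stamping file management
# information with a time-stamp. All names are hypothetical.
import time

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.store = {}  # file ID -> file and management information

class SubNetwork:
    def __init__(self, node_ids):
        self.nodes = sorted((Node(i) for i in node_ids),
                            key=lambda n: n.node_id)

    def locate(self, fid):
        # First node with node ID >= file ID, wrapping around (cf. ¶0053).
        for n in self.nodes:
            if n.node_id >= fid:
                return n
        return self.nodes[0]

def write_file(sub_networks, fid, payload, ttl_seconds):
    # Each sub-network (cache level) stores the file (cf. ¶0206), with a
    # time-stamp enabling expiration within that sub-network (cf. ¶0104).
    for ring in sub_networks:
        node = ring.locate(fid)
        node.store[fid] = {"data": payload,
                           "expires_at": time.time() + ttl_seconds}

nets = [SubNetwork([100, 200]), SubNetwork([150, 250])]
write_file(nets, 120, b"file-bytes", ttl_seconds=60)
assert nets[0].locate(120).store[120]["data"] == b"file-bytes"
assert nets[1].locate(120).store[120]["data"] == b"file-bytes"
```

The two assertions illustrate that the same file ID lands on a (generally different) node in each sub-network, consistent with the shared key space of ¶0057.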
Regarding Claim 14,
Chu discloses the following limitations:
The method of claim 9 (see Claim 9 limitation mappings above),
While Chu describes the data stored within decomposed network 200 as “files”, Chu does not provide specific detail regarding the type of data stored within the files of the decomposed network. Specifically, Chu does not explicitly disclose the following limitations:
wherein the range of stored data comprises a plurality of keys,
However, Narendra discloses the following limitations:
wherein the range of stored data comprises a plurality of keys (Index Service 112, Fig. 1 // “The index service 112 may include a set of persistent storage nodes 120 to store the key map data that represents mappings of data object keys to storage locations at which the data objects are stored in the object storage service 114 … the index service 112 may be employed to determine the specific storage location of the data object identified by key.” [Col. 12, final ¶ -- Col. 13, 1st ¶] // Col. 8, lines 20-45) – As shown in Narendra Fig. 1, an index service 112 comprises a plurality of key cache nodes 122 organized as a ring and storing data objects in a key space (Col. 8). Examiner accordingly considers index service 112 comprising key cache nodes 122 of Narendra Fig. 1 as analogous to a sub-network of Chu comprising respective active nodes. As taught in Narendra, data stored within key cache nodes 122 includes mappings indexed by a “data object key” (i.e., at least “a plurality of keys”)--
Chu and Narendra are considered analogous to the claimed invention because they all relate to the same field of indexing and locating data within a cache hierarchy comprised of a plurality of nodes storing data within a key space. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chu with the teachings of Narendra and realize a distributed cache storage which stores a plurality of object keys. Caching object keys reduces the amount of time required to retrieve a data object by enabling object keys to be accessed via a high-speed data store as opposed to a primary storage location, as disclosed in Narendra Col. 2: “A cache is a high-speed data store which stores a subset of data, typically transient in nature, so that future requests for that data are served up faster than is possible by accessing the data’s primary storage location.” [Col. 2, 1st ¶]
The combined teachings of Chu and Narendra disclose the following limitations:
wherein the method further comprises:
receiving a request to write a stored key of the plurality of keys (Chu, “A file is originally introduced into the Chord network by … a seed node of the file” [0053]) – As taught in Chu, “seed” type of nodes (i.e., “a second subset” of nodes) receive files to be introduced into the decomposed network (i.e., “a request to write a stored key”)--. As discussed above with respect to Narendra, one of ordinary skill in the art would understand that introducing a file into the decomposed network is analogous to introducing a key into the network--,
the stored key associated with the second data sub-range that is associated with a second node device of a second subset (Chu, “seed node” [0053]) of the plurality of node devices and with a first data sub-range of the plurality of first data sub-ranges that is associated with a first node device of a first subset (Chu, “active nodes” [0027]) of the plurality of node devices (Chu, “The seed node obtains the hashed value of the filename (i.e., the file ID), and searches for the node having the same node ID as the file ID” [0053] // ¶¶0027; 0057; 0206 // Fig. 2) – As taught in Chu ¶0027, “active” type nodes (i.e., “a first subset” of nodes) participate in the network while inactive nodes have not yet joined the network. When a file is introduced into the network, a seed node identifies an active node to store the file using a file ID and a node ID (¶0053). Each sub-network stores each file (¶0206) and uses the same-sized key space and the same hash function to locate files (¶0057). In the context of Chu, a file is written into each sub-network, including the “second” sub-network (i.e., Sub-Network 2043) and the “first” sub-network (i.e., Sub-Network 2041), by a seed node. An active node in each sub-network is identified by the seed node using a relation between a file ID and a node ID. The active node in the first sub-network (i.e., “a first node device of a first subset”) where the file is written is thus associated with “a first data sub-range” which includes the obtained file ID. The active node in the second sub-network where the file is written is associated with “a second data sub-range” which includes the file ID; and the second data sub-range was identified by (i.e., is “associated with”) a seed node (i.e., “the second node device”).--;
updating a value of the stored key (Chu, “file management information” [0104]) within the second data sub-range (Chu, “The file management information may be included within the file … file management information for a file includes one or more of: … (3) a time-stamp for enabling expiration of the file within individual sub-networks” [0104]) – As taught in Chu, each time a file is introduced into an individual sub-network, a time-stamp is added to “file management information” included within the file. One of ordinary skill in the art would accordingly understand that a file written into the second sub-network would include a time-stamp which enables expiration of the file within the second sub-network--; and
accessing the first node device that is associated with the first data sub-range to update the stored key within the first data sub-range (Chu, “The seed node sends the file to the located node” [0053]) – As discussed above, a seed node sends the file to the identified active node in each sub-network, including the first sub-network (i.e., the active node in the first sub-network receiving the file is at least “access[ed]”).
Regarding Claim 20,
Chu discloses the following limitations:
The non-transitory computer-readable storage medium of claim 15 (see Claim 15 limitation mappings above),
While Chu describes the data stored within decomposed network 200 as “files”, Chu does not provide specific detail regarding the type of data stored within the files of the decomposed network. Specifically, Chu does not explicitly disclose the following limitations:
wherein the range of stored data comprises a plurality of keys,
However, Narendra discloses the following limitations:
wherein the range of stored data comprises a plurality of keys (Index Service 112, Fig. 1 // “The index service 112 may include a set of persistent storage nodes 120 to store the key map data that represents mappings of data object keys to storage locations at which the data objects are stored in the object storage service 114 … the index service 112 may be employed to determine the specific storage location of the data object identified by key.” [Col. 12, final ¶ -- Col. 13, 1st ¶] // Col. 8, lines 20-45) – As shown in Narendra Fig. 1, an index service 112 comprises a plurality of key cache nodes 122 organized as a ring and storing data objects in a key space (Col. 8). Examiner accordingly considers index service 112 comprising key cache nodes 122 of Narendra Fig. 1 as analogous to a sub-network of Chu comprising respective active nodes. As taught in Narendra, data stored within key cache nodes 122 includes mappings indexed by a “data object key” (i.e., at least “a plurality of keys”)--
Chu and Narendra are considered analogous to the claimed invention because they both relate to the same field of indexing and locating data within a cache hierarchy comprised of a plurality of nodes storing data within a key space. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chu with the teachings of Narendra to realize a distributed cache storage which stores a plurality of object keys. Caching object keys reduces the amount of time required to retrieve a data object by enabling object keys to be accessed via a high-speed data store as opposed to a primary storage location, as disclosed in Narendra Col. 2: “A cache is a high-speed data store which stores a subset of data, typically transient in nature, so that future requests for that data are served up faster than is possible by accessing the data’s primary storage location.” [Col. 2, 1st ¶]
The combined teachings of Chu and Narendra disclose the following limitations:
wherein the processing device (Chu, Node 1000, Fig. 10) is further to:
receive a request to write a stored key of the plurality of keys (Chu, “A file is originally introduced into the Chord network by … a seed node of the file” [0053]) – As taught in Chu, “seed” type nodes (i.e., “a second subset” of nodes) receive files to be introduced into the decomposed network (i.e., “a request to write a stored key”). As discussed above with respect to Narendra, one of ordinary skill in the art would understand that introducing a file into the decomposed network is analogous to introducing a key into the network--;
the stored key associated with the second data sub-range that is associated with a second node device of a second subset (Chu, “seed node” [0053]) of the plurality of node devices and with a first data sub-range of the plurality of first data sub-ranges that is associated with a first node device of a first subset (Chu, “active nodes” [0027]) of the plurality of node devices (Chu, “The seed node obtains the hashed value of the filename (i.e., the file ID), and searches for the node having the same node ID as the file ID” [0053] // ¶¶0027; 0057; 0206 // Fig. 2) – As taught in Chu ¶0027, “active” type nodes (i.e., “a first subset” of nodes) participate in the network while inactive nodes have not yet joined the network. When a file is introduced into the network, a seed node identifies an active node to store the file using a file ID and a node ID (¶0053). Each sub-network stores each file (¶0206) and uses the same-sized key space and the same hash function to locate files (¶0057). In the context of Chu, a file is written into each sub-network, including the “second” sub-network (i.e., Sub-Network 2043) and the “first” sub-network (i.e., Sub-Network 2041), by a seed node. An active node in each sub-network is identified by the seed node using a relation between a file ID and a node ID. The active node in the first sub-network (i.e., “a first node device of a first subset”) where the file is written is thus associated with “a first data sub-range” which includes the obtained file ID. The active node in the second sub-network where the file is written is associated with “a second data sub-range” which includes the file ID; and the second data sub-range was identified by (i.e., is “associated with”) a seed node (i.e., “the second node device”).--;
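The lookup performed by Chu's seed node, as mapped above, can be sketched as follows. The key-space size, node IDs, and filename are hypothetical assumptions for illustration; the sketch shows only the cited mechanism of hashing a filename to a file ID and locating the active node responsible for that ID within a sub-network's ring.

```python
import hashlib

KEY_SPACE = 2 ** 8  # hypothetical size; Chu's sub-networks use a same-sized key space

def file_id(filename):
    """Hash the filename into the key space to obtain the file ID."""
    return int(hashlib.sha1(filename.encode()).hexdigest(), 16) % KEY_SPACE

def locate_node(fid, active_node_ids):
    """Chord-style successor rule (an assumption about the lookup detail):
    the first active node whose node ID is >= the file ID, wrapping
    around the ring if no such node exists."""
    candidates = sorted(active_node_ids)
    for nid in candidates:
        if nid >= fid:
            return nid
    return candidates[0]  # wrap around the ring

# A seed node locating the responsible active node in one sub-network.
sub_network_active_ids = [10, 85, 140, 200]  # hypothetical active node IDs
fid = file_id("example.txt")
target_node = locate_node(fid, sub_network_active_ids)
```

Because every sub-network uses the same key space and hash function, repeating this lookup in each sub-network identifies the active node in that sub-network associated with the data sub-range containing the obtained file ID.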
update a value of the stored key (Chu, “file management information” [0104]) within the second data sub-range (Chu, “The file management information may be included within the file … file management information for a file includes one or more of: … (3) a time-stamp for enabling expiration of the file within individual sub-networks” [0104]) – As taught in Chu, each time a file is introduced into an individual sub-network, a time-stamp is added to “file management information” included within the file. One of ordinary skill in the art would accordingly understand that a file written into the second sub-network would include a time-stamp which enables expiration of the file within the second sub-network--; and
access the first node device that is associated with the first data sub-range to update the stored key within the first data sub-range (Chu, “The seed node sends the file to the located node” [0053]) – As discussed above, a seed node sends the file to the identified active node in each sub-network, including the first sub-network (i.e., the active node in the first sub-network receiving the file is at least “access[ed]”).
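The expiration time-stamp relied on in the mappings above can be sketched as follows. The field names, TTL value, and store layout are hypothetical assumptions; the sketch shows only the cited idea that file management information carried with the file includes a time-stamp enabling expiration of the file within an individual sub-network, refreshed when the file is written.

```python
import time

TTL_SECONDS = 3600  # hypothetical expiration window within a sub-network

def write_file(store, key, value):
    """Writing (or updating) an entry also refreshes its time-stamp,
    analogous to adding a time-stamp to the file management information
    each time a file is introduced into a sub-network."""
    store[key] = {"value": value, "stamp": time.time()}

def is_expired(store, key, now=None):
    """A sub-network may treat an entry as expired once its time-stamp
    is older than the TTL."""
    now = time.time() if now is None else now
    return (now - store[key]["stamp"]) > TTL_SECONDS

sub_network_store = {}
write_file(sub_network_store, "file-ID-42", b"payload")
expired = is_expired(sub_network_store, "file-ID-42")  # False just after write
```

Under this reading, updating the value of the stored key within a data sub-range also refreshes the time-stamp that governs expiration within that sub-network.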
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JULIAN SCOTT MENDEL whose telephone number is (703)756-1608. The examiner can normally be reached M-F 10am - 4pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rocío del Mar Pérez-Vélez can be reached at 571-270-5935. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.S.M./Examiner, Art Unit 2133
/SEAN D ROSSITER/Primary Examiner, Art Unit 2133