Prosecution Insights
Last updated: April 19, 2026
Application No. 18/449,666

DISAGGREGATED CACHE MEMORY FOR EFFICIENCY IN DISTRIBUTED DATABASES

Final Rejection — §103, §112
Filed
Aug 14, 2023
Examiner
MENDEL, JULIAN SCOTT
Art Unit
2133
Tech Center
2100 — Computer Architecture & Software
Assignee
Google LLC
OA Round
4 (Final)
79%
Grant Probability
Favorable
5-6
OA Rounds
2y 1m
To Grant
99%
With Interview

Examiner Intelligence

Grants 79% — above average
79%
Career Allow Rate
26 granted / 33 resolved
+23.8% vs TC avg
Strong +56% interview lift
+55.6%
Interview Lift
based on resolved cases with interview
Fast prosecutor
2y 1m
Avg Prosecution
23 currently pending
Career history
56
Total Applications
across all art units

Statute-Specific Performance

§101: 10.1% (-29.9% vs TC avg)
§103: 52.4% (+12.4% vs TC avg)
§102: 15.2% (-24.8% vs TC avg)
§112: 20.8% (-19.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 33 resolved cases

Office Action

§103 §112
DETAILED ACTION

This Action is responsive to the Amendments filed on 07/30/2025.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1, 3-11, and 13-20 are amended. Claims 1, 3-11, and 13-20 are pending and have been examined.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 11 and 13-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding Claim 11, Claim 11 recites “the respective sections of the distributed database” in the 19-20th lines without providing proper antecedent basis. Therefore, the scope of Claim 11 is indefinite, and the claim is rejected under 35 U.S.C. 112(b). Examiner recommends applicant amend Claim 11, 10-11th lines instead to read “a respective section of the distributed database”, such as recited in Claim 1, in order to overcome this rejection. Claims 13-20 depend on Claim 11 and are therefore similarly rejected under 35 U.S.C. 112(b).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-6, 11, and 13-16 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 20250068469 A1)(hereafter referred to as Wang) further in view of Annamalai et al. (US 20170054802 A1)(hereafter referred to as Annamalai).

Regarding Claim 1, Wang discloses the following limitations: A method comprising: receiving, by a computing system (Fig. 2) and from a user (“Application”, Fig.
2 // “a client” [0073]) …, a first query (“a data write request” [0073]) requesting first data be written to a distributed database (“an index” [0073] // ¶0069)(“The client node is mainly responsible for … processing a data index request and a data read request of an application (also referred to as a client) … The data index request is used to write data into an index, and may also be referred to as a data write request” [0073]) – As shown in Fig. 2 and detailed in ¶0073, clients transmit data write requests to write data (i.e., “first data”) into an index (“a distributed database”; see ¶0069)--, the distributed database comprising: a first plurality of nodes (Writer Cluster node Nw, Fig. 2 // ¶0074), each respective node of the first plurality of nodes assigned to a respective section (“one or more shards” [0074] // ¶0070) of the distributed database (“One or more shards of an index may be distributed in each node in the writer cluster” [0074]) – As detailed in ¶¶0070 and 0074, a distributed database is divided into shards; each shard is distributed to a node in the writer cluster; and each node of the writer cluster receives at least one shard (i.e., each node of the writer cluster is “assigned to” respective shards of the database)-- and having authoritative control over writes to the respective section of the distributed database (“One or more shards of an index may be distributed in each node in the writer cluster, and the writer cluster may be used to process a data write request. In other words, when data needs to be written to an index, the data may be written to only a shard distributed in a writer cluster” [0074] // ¶0189) – As detailed in ¶0074, data being written into the index is only written to corresponding shards (“a target shard”’ see ¶0189) which are distributed on nodes of the writer cluster. In this context, examiner considers a node of a writer cluster as having “authoritative control over writes” to each shard which is distributed on the node--; and a distributed cache pool (Reader cluster, Fig. 2 // ¶0074), the distributed cache pool providing a cache memory (memory of Reader Cluster Nodes Nr, Fig. 2 // “basic physical resources such as … a memory, and a disk” [0068] // ¶0072) disaggregated from (¶0071) the first plurality of nodes and caching a subset of the distributed database independently (Fig. 2 // ¶0192) from the first plurality of nodes, the distributed cache pool comprising distributed memory of a second plurality of nodes (Reader Cluster Nodes Nr, Fig. 2), each node in the second plurality of nodes different from each node in the first plurality of nodes (“the plurality of fourth nodes may be independent of the plurality of first nodes. For example, as shown in FIG. 2 … the plurality of first nodes may be nodes in the writer cluster, and the plurality of fourth nodes may be nodes in the reader cluster” [0192]) and caching a respective portion (“a replica” [0074]) of the subset of the distributed database (“Each shard may have one or more replicas, and each replica and the shard corresponding to the replica are distributed in different nodes” [0071] // Fig. 1 // “One or more replicas of an index may be distributed in each node of the reader cluster … When data is read (data is queried) from an index, the data may be read from only a replica distributed in the reader cluster.” [0074]) – As shown in Fig. 
2 and detailed in ¶¶0071 and 0074, each shard of the index has one or more replicas (i.e., “a subset of the distributed database”) which are distributed in nodes of a reader cluster and which are used to service read requests for the index (i.e., replicas are distributed “independently from” the write cluster nodes). As clarified in ¶0071 and shown in Fig. 1, both shards and replicas undergo some form of distribution to respective nodes of read and write clusters. Examiner considers distributing both shards and replicas across different clusters of nodes as an example of “disaggregat[ion]” between the clusters of nodes (i.e., distribution of a shard to a node in a writer cluster does not necessarily dictate distribution of the corresponding replica in a reader cluster)., …; writing, by the computing system and using one of the first plurality of nodes, the first data to the distributed database (“the writer cluster may be used to process a data write request. In other words, when data needs to be written to an index, the data may be written only to a shard distributed in a writer cluster” [0074] // ¶0082) – As discussed above and as detailed in ¶0074, data write requests (e.g.., requests to write “first data”) are processed by nodes of the writer cluster.; receiving, by the computing system and from the user …, a second query (“a data read request” [0074]) requesting second data be read from the distributed database (“One or more replicas may be distributed to each node in the reader cluster, and the reader cluster may be used to process a data read request” [0074] // Fig. 1) – As previously discussed and as detailed in ¶0074, reader cluster nodes process data read requests received from the client. As shown in Fig. 1, plural shards (and thus plural corresponding replicas) exist in an index. One of ordinary skill in the art would accordingly understand that a data read request would target data different from that targeted by a data write request (i.e., requesting “second data” distinct from first data written into the index)--; retrieving, by the computing system and from the distributed cache pool, the second data (“the reader cluster may be used to process a data read request … When data is read (data is queried) from an index, the data may be read from only a replica distributed in a reader cluster” [0074]) – One of ordinary skill in the art would understand that processing a data read request would at least require reading the data associated with the data read request--; and … Wang does not explicitly disclose a client device hosting the application of Fig. 2, and further does not explicitly disclose a final step of providing second data retrieved from the reader cluster to the client as part of processing a data read request. Additionally, although Wang ¶¶0120 and 0179 generally disclose that a “deployment policy” is used to distribute both shards and replicas across the writer and reader cluster nodes, Wang does not explicitly disclose an embodiment whereby replicas associated with shards located on different writer cluster nodes are distributed onto the same reader cluster node. 
Specifically, Wang does not explicitly disclose the following limitations: receiving … from a user device, a first query … wherein at least one of the respective portions of the subset of the distributed database includes data from a first section and a second section of the respective sections of the distributed database providing, by the computing system and to the user device, the second data retrieved from the distributed cache pool. However, Annamalai discloses the following limitations: receiving … from a user device (Client 125, Fig. 1), a first query (“the primary server 115 receives a read request for a specified data from the client 125” [0035]) … wherein at least one of the respective portions (“a replica of the specified shard” [0018]) of the subset of the distributed database includes data from a first section and a second section of the respective sections of the distributed database (Fig. 3 // “The primary server processes any read and/or any write requests for data associated with the specified shard. The secondary servers store replicas of the specified shard and can, optionally, service read requests from the clients for data associated with the specified shard. However, the secondary servers may not process any write requests for the specified shard” [0017] // ¶¶0017-19; 0040-44) – As detailed in Annamalai ¶¶0017-19, data of a database is partitioned into “primary” and “secondary” shards, where primary and secondary shards are placed across various servers (“node[s]”) and further where write requests are only serviced by shards on primary servers and where secondary servers only service read requests, similar to how data of the index of Wang is partitioned into “shards” and “replicas”, where write requests are only processed using shards and read requests are only processed using replicas. Examiner accordingly considers the concept of “primary” and “secondary” shards, as taught in Annamalai, as analogous to the concept of “shards” and “replicas” discussed in Wang (i.e., “respective sections” and “respective portions” of the distributed database, respectively). As shown in Annamalai Fig. 3 and clarified in ¶¶0040-44, primary and secondary shard placement is determined by “a placement policy” (see ¶0043), where an example placement policy shows (Fig. 3) primary shards A and B placed respectively on Servers 1 and 2, and secondary shards A and B both placed on a single Server 5. In this example placement policy, secondary shards which are assigned to Server 5 (i.e., “at least one of the respective portions”) includes data (e.g., A and B) from both a primary shard assigned to Server 1 (i.e., “a first section … of the respective sections”; e.g., data A) and from a primary shard assigned to Server 2 (i.e., “a second section of the respective sections”; e.g., data B). Such a configuration results in write requests to shard A being serviced by Server 1, write requests to shard B being serviced by Server 2, and read requests to both shards A and B being serviced by Server 5. providing, by the computing system and to the user device, the second data retrieved from the distributed cache pool (Fig. 6 // “the request handler component 415 obtains the specified data from the storage system 135 and returns the specified data to the client 125. This way, the client can be assured of the read-after-wrote consistency” [0075]) – As shown in Fig. 6 and clarified in ¶0075, specified data is returned to the client after servicing a read request. 
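For orientation only, the following is a minimal Python sketch of the arrangement the rejection maps onto the combination: writer nodes that are authoritative for their assigned shards, and a disaggregated cache pool on separate nodes in which a single cache node holds portions drawn from two different shards (the reading applied to Annamalai's Fig. 3 placement). All names and structures here are hypothetical and are not taken from Wang, Annamalai, or the application.

```python
# Illustrative sketch only: writer nodes own shards (authoritative for writes);
# a disaggregated cache pool on separate reader nodes serves reads; one cache
# node may hold portions drawn from two different shards. Hypothetical names.

from dataclasses import dataclass, field


@dataclass
class WriterNode:
    """One of the 'first plurality of nodes'; authoritative for its shards."""
    name: str
    shards: dict = field(default_factory=dict)  # shard_id -> {key: value}

    def write(self, shard_id: str, key: str, value: str) -> None:
        self.shards.setdefault(shard_id, {})[key] = value


@dataclass
class CacheNode:
    """One of the 'second plurality of nodes'; caches replica portions only."""
    name: str
    cached: dict = field(default_factory=dict)  # shard_id -> cached subset

    def refresh(self, shard_id: str, data: dict) -> None:
        self.cached[shard_id] = dict(data)  # cache a portion independently

    def read(self, shard_id: str, key: str):
        return self.cached.get(shard_id, {}).get(key)


# Two writer nodes, each assigned a different section (shard) of the database.
w1, w2 = WriterNode("writer-1"), WriterNode("writer-2")
w1.write("shard-A", "k1", "v1")  # writer-1 is authoritative for shard-A
w2.write("shard-B", "k2", "v2")  # writer-2 is authoritative for shard-B

# A single node in the disaggregated cache pool holds portions of BOTH shards,
# mirroring the reading of Annamalai Fig. 3 (secondary copies of shards A and B
# co-located on one server).
c1 = CacheNode("cache-5")
c1.refresh("shard-A", w1.shards["shard-A"])
c1.refresh("shard-B", w2.shards["shard-B"])

# Reads are served from the cache pool, not from the writer nodes.
assert c1.read("shard-A", "k1") == "v1"
assert c1.read("shard-B", "k2") == "v2"
```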
Wang and Annamalai are considered analogous to the claimed invention because they all relate to the same field of partitioning and distributing data across plural nodes in a sharded, distributed database environment with read/write separation. Therefore, it would have been obvious for someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wang with the teachings of Annamalai and realize a method of data distribution whereby a respective portion cached by a given node of a distributed cache pool includes data from a first shard assigned to a first node of a first plurality of nodes and includes data from a second shard assigned to a second node of the first plurality of nodes. Doing so would enable reassignment of roles and moving replicas in the face of server failure, resulting in an efficient failover mechanism, as disclosed in Annamalai ¶0046: “The shard management server 440 can consume the replication policy and place/assign different shards across the servers as per the requirements. As new databases are created and/or deleted and the shard associated with a database changes, the replication policy can be updated. The shard management server 440 can provide an efficient failover mechanism, e.g., by reassigning roles and moving replicas around in the face of failures of one or more servers in the system.” [0046] Regarding Claim 3, The same motivation to combine provided in Claim 1 is equally applicable to Claim 3. The combined teachings of Wang and Annamalai disclose the following limitations: The method of claim 1, wherein the distributed cache pool comprises: a first portion distributed across random access memory (RAM) (Wang, ¶0241) of the second plurality of nodes (Wang, Reader Cluster Nodes Nr, Fig. 2); and a second portion distributed across solid state drives (SSDs) (Wang, ¶0241) of the second plurality of nodes (Wang, Memory 2106, Fig. 21 // “The memory 2106 may include a volatile memory, for example, a random access memory (RAM). The processor 2104 may further include a non-volatile memory, for example, … a solid state drive (SSD)” [0241] // ¶¶0068 ; 0238-241) – As shown in Wang Fig. 21 and clarified in ¶¶0068, 238-241, each node (i.e., including the Reader Cluster Nodes of Fig. 2) includes a memory which can comprise both a volatile RAM (i.e., “a first portion”) and a non-volatile SSD (i.e., “a second portion”). Regarding Claim 4, The same motivation to combine provided in Claim 1 is equally applicable to Claim 4. 
The combined teachings of Wang and Annamalai disclose the following limitations: The method of claim 1, further comprising: generating (Annamalai, ¶0044), by the computing system, an access map (Annamalai, “assignment decisions” [0044]) mapping locations of data in the distributed cache pool (Annamalai, “The shard management server can make these assignment decisions based on various factors, e.g., … a placement policy” [0044]); distributing (Annamalai, ¶0055), by the computing system, the access map to each node of the first plurality of nodes (Annamalai, “Referring back to the first server 450 or the set of servers 460, a server includes a shard manager client component 405 that works with the shared management server 440 to implement the shard assignments determined by the shard management server 440 … The shard management server 440 conveys any shard placement decisions to the shard manager client component 405” [0055]) -- As taught in Annamalai ¶¶0044 and 0055, a shard management server assigns primary and secondary severs for each shard (i.e., “generat[es] … an access map”) and subsequently conveys the shard assignments to shard manager client components 405 located on each server (see Fig. 4 // ¶0055) Regarding Claim 5, The same motivation to combine provided in Claim 1 is equally applicable to Claim 5. The combined teachings of Wang and Annamalai disclose the following limitations: The method of claim 4, further comprising, after receiving the first query, determining (Annamalai, ¶0030), by at least one of the first plurality of nodes, using the access map, the location of the first data in the distributed cache pool (Annamalai, “When a server, e.g., the primary server 115, receives a write request for writing data from a client, e.g., the first data 155 from the client 125, replicates the data to the servers in a sync replica set associated with a shard the first data 155 belongs to” [0030] // ¶¶0043-44) – As disclosed in Annamalai ¶0030, after receiving a write request (“the first query”) from a client, the primary server (“at least one of the first plurality of nodes”) associated with the write request data replicates the data to each other server in the associated “sync replica set”. As previously discussed (see Claim 4 limitation mappings above) and as discussed in Annamalai ¶0043-44, shard assignment decisions (“the access map”) effectively define which servers comprise a sync replica set for a given shard. One of ordinary skill in the art would accordingly understand that a primary server replicating data to other servers in a sync replica set would use the shard assignment decisions defining the sync replica set. Regarding Claim 6, The same motivation to combine provided in Claim 1 is equally applicable to Claim 6. The combined teachings of Wang and Annamalai disclose the following limitations: The method of claim 1, further comprising: generating (Annamalai, ¶0044), by the computing system, an access map (Annamalai, “assignment decisions” [0044]) mapping locations of data in the distributed cache pool; and distributing (Annamalai, ¶0044), by the computing system, the access map to the user device. 
(Annamalai, “The shard management server can make these assignment decisions based on various factors, e.g., … a placement policy … The shard assignments can be published e.g., for use by the client 125” [0044] // “A client … can query the directory service 435 to obtain the shard assignments, e.g., a primary and/or secondary servers assigned to the shard” [0047]) – As taught in Annamalai ¶¶0044 and 0055, a shard management server assigns primary and secondary severs for each shard (i.e., “generat[es] … an access map”) and subsequently publishes the shard assignments to a directory so that the client 125 can determine primary and secondary servers (i.e., “locations”) for each data shard in a database. Examiner considers the process of publishing shard assignments to a directory for consumption by a client as effectively “distributing” the shard assignments to the client. Regarding Claim 11, Wang discloses the following limitations: A system comprising: data processing hardware (“a processor” [0037]); and memory hardware (“a memory” [0037]) in communication with the data processing hardware, the memory hardware storing instructions that when executed on the data processing hardware cause the data processing hardware to (¶0037): receive, from a user (“Application”, Fig. 2 // “a client” [0073]) …, a first query (“a data write request” [0073]) requesting first data be written to a distributed database (“an index” [0073] // ¶0069)(“The client node is mainly responsible for … processing a data index request and a data read request of an application (also referred to as a client) … The data index request is used to write data into an index, and may also be referred to as a data write request” [0073]) – As shown in Fig. 2 and detailed in ¶0073, clients transmit data write requests to write data (i.e., “first data”) into an index (“a distributed database”; see ¶0069)--, the distributed database comprising: a first plurality of nodes (Writer Cluster node Nw, Fig. 2 // ¶0074), each respective node of the first plurality of nodes having authoritative control over writes to a respective section (“one or more shards” [0074] // ¶0070) of the distributed database (“One or more shards of an index may be distributed in each node in the writer cluster, and the writer cluster may be used to process a data write request. In other words, when data needs to be written to an index, the data may be written to only a shard distributed in a writer cluster” [0074] // ¶0189) – As detailed in ¶0074, data being written into the index is only written to corresponding shards (“a target shard”’ see ¶0189) which are distributed on nodes of the writer cluster. In this context, examiner considers a node of a writer cluster as having “authoritative control over writes” to each shard which is distributed on the node-; and a distributed cache pool (Reader cluster, Fig. 2 // ¶0074), the distributed cache pool providing a cache memory (memory of Reader Cluster Nodes Nr, Fig. 2 // “basic physical resources such as … a memory, and a disk” [0068] // ¶0072) disaggregated from (¶0071) the first plurality of nodes and caching a subset of the distributed database independently (Fig. 2 // ¶0192) from the first plurality of nodes, the distributed cache pool comprising distributed memory of a second plurality of nodes (Reader Cluster Nodes Nr, Fig. 2), each node in the second plurality of nodes different from each node in the first plurality of nodes (“the plurality of fourth nodes may be independent of the plurality of first nodes. 
For example, as shown in FIG. 2 … the plurality of first nodes may be nodes in the writer cluster, and the plurality of fourth nodes may be nodes in the reader cluster” [0192]) and caching a respective portion (“a replica” [0074]) of the subset of the distributed database (“Each shard may have one or more replicas, and each replica and the shard corresponding to the replica are distributed in different nodes” [0071] // Fig. 1 // “One or more replicas of an index may be distributed in each node of the reader cluster … When data is read (data is queried) from an index, the data may be read from only a replica distributed in the reader cluster.” [0074]) – As shown in Fig. 2 and detailed in ¶¶0071 and 0074, each shard of the index has one or more replicas (i.e., “a subset of the distributed database”) which are distributed in nodes of a reader cluster and which are used to service read requests for the index (i.e., replicas are distributed “independently from” the write cluster nodes). As clarified in ¶0071 and shown in Fig. 1, both shards and replicas undergo some form of distribution to respective nodes of read and write clusters. Examiner considers distributing both shards and replicas across different clusters of nodes as an example of “disaggregat[ion]” between the clusters of nodes (i.e., distribution of a shard to a node in a writer cluster does not necessarily dictate distribution of the corresponding replica in a reader cluster)., … write, using one of the first plurality of nodes, the first data to the distributed database (“the writer cluster may be used to process a data write request. In other words, when data needs to be written to an index, the data may be written only to a shard distributed in a writer cluster” [0074] // ¶0082) – As discussed above and as detailed in ¶0074, data write requests (e.g.., requests to write “first data”) are processed by nodes of the writer cluster.; receive, from the user …, a second query (“a data read request” [0074]) requesting second data be read from the distributed database (“One or more replicas may be distributed to each node in the reader cluster, and the reader cluster may be used to process a data read request” [0074] // Fig. 1) – As previously discussed and as detailed in ¶0074, reader cluster nodes process data read requests received from the client. As shown in Fig. 1, plural shards (and thus plural corresponding replicas) exist in an index. One of ordinary skill in the art would accordingly understand that a data read request would target data different from that targeted by a data write request (i.e., requesting “second data” distinct from first data written into the index)--; retrieve, from the distributed cache pool, the second data (“the reader cluster may be used to process a data read request … When data is read (data is queried) from an index, the data may be read from only a replica distributed in a reader cluster” [0074]) – One of ordinary skill in the art would understand that processing a data read request would at least require reading the data associated with the data read request--; Wang does not explicitly disclose a client device hosting the application of Fig. 2, and further does not explicitly disclose a final step of providing second data retrieved from the reader cluster to the client as part of processing a data read request. 
Additionally, although Wang ¶¶0120 and 0179 generally disclose that a “deployment policy” is used to distribute both shards and replicas across the writer and reader cluster nodes, Wang does not explicitly disclose an embodiment whereby replicas associated with shards located on different writer cluster nodes are distributed onto the same reader cluster node. Specifically, Wang does not explicitly disclose the following limitations: receive … from a user device, a first query … wherein at least one of the respective portions of the subset of the distributed database includes data from a first section and a second section of the respective sections of the distributed database provide to the user device, the second data retrieved from the distributed cache pool. However, Annamalai discloses the following limitations: receive … from a user device (Client 125, Fig. 1), a first query (“the primary server 115 receives a read request for a specified data from the client 125” [0035]) … wherein at least one of the respective portions (“a replica of the specified shard” [0018]) of the subset of the distributed database includes data from a first section and a second section of the respective sections of the distributed database (Fig. 3 // “The primary server processes any read and/or any write requests for data associated with the specified shard. The secondary servers store replicas of the specified shard and can, optionally, service read requests from the clients for data associated with the specified shard. However, the secondary servers may not process any write requests for the specified shard” [0017] // ¶¶0017-19; 0040-44) – As detailed in Annamalai ¶¶0017-19, data of a database is partitioned into “primary” and “secondary” shards, where primary and secondary shards are placed across various servers (“node[s]”) and further where write requests are only serviced by shards on primary servers and where secondary servers only service read requests, similar to how data of the index of Wang is partitioned into “shards” and “replicas”, where write requests are only processed using shards and read requests are only processed using replicas. Examiner accordingly considers the concept of “primary” and “secondary” shards, as taught in Annamalai, as analogous to the concept of “shards” and “replicas” discussed in Wang (i.e., “respective sections” and “respective portions” of the distributed database, respectively). As shown in Annamalai Fig. 3 and clarified in ¶¶0040-44, primary and secondary shard placement is determined by “a placement policy” (see ¶0043), where an example placement policy shows (Fig. 3) primary shards A and B placed respectively on Servers 1 and 2, and secondary shards A and B both placed on a single Server 5. In this example placement policy, secondary shards which are assigned to Server 5 (i.e., “at least one of the respective portions”) includes data (e.g., A and B) from both a primary shard assigned to Server 1 (i.e., “a first section … of the respective sections”; e.g., data A) and from a primary shard assigned to Server 2 (i.e., “a second section of the respective sections”; e.g., data B). Such a configuration results in write requests to shard A being serviced by Server 1, write requests to shard B being serviced by Server 2, and read requests to both shards A and B being serviced by Server 5. provide to the user device, the second data retrieved from the distributed cache pool (Fig. 
6 // “the request handler component 415 obtains the specified data from the storage system 135 and returns the specified data to the client 125. This way, the client can be assured of the read-after-wrote consistency” [0075]) – As shown in Fig. 6 and clarified in ¶0075, specified data is returned to the client after servicing a read request. Wang and Annamalai are considered analogous to the claimed invention because they all relate to the same field of partitioning and distributing data across plural nodes in a sharded, distributed database environment with read/write separation. Therefore, it would have been obvious for someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wang with the teachings of Annamalai and realize a method of data distribution whereby a respective portion cached by a given node of a distributed cache pool includes data from a first shard assigned to a first node of a first plurality of nodes and includes data from a second shard assigned to a second node of the first plurality of nodes. Doing so would enable reassignment of roles and moving replicas in the face of server failure, resulting in an efficient failover mechanism, as disclosed in Annamalai ¶0046: “The shard management server 440 can consume the replication policy and place/assign different shards across the servers as per the requirements. As new databases are created and/or deleted and the shard associated with a database changes, the replication policy can be updated. The shard management server 440 can provide an efficient failover mechanism, e.g., by reassigning roles and moving replicas around in the face of failures of one or more servers in the system.” [0046] Regarding Claim 13, The same motivation to combine provided in Claim 11 is equally applicable to Claim 13. The combined teachings of Wang and Annamalai disclose the following limitations: The system of claim 11, wherein the distributed cache pool comprises: a first portion distributed across random access memory (RAM) (Wang, ¶0241) of the second plurality of nodes (Wang, Reader Cluster Nodes Nr, Fig. 2); and a second portion distributed across solid state drives (SSDs) (Wang, ¶0241) of the second plurality of nodes (Wang, Memory 2106, Fig. 21 // “The memory 2106 may include a volatile memory, for example, a random access memory (RAM). The processor 2104 may further include a non-volatile memory, for example, … a solid state drive (SSD)” [0241] // ¶¶0068 ; 0238-241) – As shown in Wang Fig. 21 and clarified in ¶¶0068, 238-241, each node (i.e., including the Reader Cluster Nodes of Fig. 2) includes a memory which can comprise both a volatile RAM (i.e., “a first portion”) and a non-volatile SSD (i.e., “a second portion”). Regarding Claim 14, The same motivation to combine provided in Claim 11 is equally applicable to Claim 14. 
The combined teachings of Wang and Annamalai disclose the following limitations: The system of claim 11, wherein the instructions when executed on the data processing hardware cause the data processing hardware to: generate (Annamalai, ¶0044) an access map (Annamalai, “assignment decisions” [0044]) mapping locations of data in the distributed cache pool (Annamalai, “The shard management server can make these assignment decisions based on various factors, e.g., … a placement policy” [0044]); distributing (Annamalai, ¶0055), by the computing system, the access map to each node of the first plurality of nodes (Annamalai, “Referring back to the first server 450 or the set of servers 460, a server includes a shard manager client component 405 that works with the shared management server 440 to implement the shard assignments determined by the shard management server 440 … The shard management server 440 conveys any shard placement decisions to the shard manager client component 405” [0055]) -- As taught in Annamalai ¶¶0044 and 0055, a shard management server assigns primary and secondary severs for each shard (i.e., “generat[es] … an access map”) and subsequently conveys the shard assignments to shard manager client components 405 located on each server (see Fig. 4 // ¶0055) Regarding Claim 15, The same motivation to combine provided in Claim 11 is equally applicable to Claim 15. The combined teachings of Wang and Annamalai disclose the following limitations: The system of claim 14, wherein the instructions when executed on the data processing hardware cause the data processing hardware to, after receiving the first query, determine (Annamalai, ¶0030), by at least one of the first plurality of nodes, using the access map, the location of the first data in the distributed cache pool (Annamalai, “When a server, e.g., the primary server 115, receives a write request for writing data from a client, e.g., the first data 155 from the client 125, replicates the data to the servers in a sync replica set associated with a shard the first data 155 belongs to” [0030] // ¶¶0043-44) – As disclosed in Annamalai ¶0030, after receiving a write request (“the first query”) from a client, the primary server (“at least one of the first plurality of nodes”) associated with the write request data replicates the data to each other server in the associated “sync replica set”. As previously discussed (see Claim 14 limitation mappings above) and as discussed in Annamalai ¶0043-44, shard assignment decisions (“the access map”) effectively define which servers comprise a sync replica set for a given shard. One of ordinary skill in the art would accordingly understand that a primary server replicating data to other servers in a sync replica set would use the shard assignment decisions defining the sync replica set. Regarding Claim 16, The same motivation to combine provided in Claim 11 is equally applicable to Claim 16. The combined teachings of Wang and Annamalai disclose the following limitations: The method of claim 11, wherein the instructions when executed on the data processing hardware cause the data processing hardware to: generate (Annamalai, ¶0044) an access map (Annamalai, “assignment decisions” [0044]) mapping locations of data in the distributed cache pool; and distribute (Annamalai, ¶0044), by the computing system, the access map to the user device. 
(Annamalai, “The shard management server can make these assignment decisions based on various factors, e.g., … a placement policy … The shard assignments can be published e.g., for use by the client 125” [0044] // “A client … can query the directory service 435 to obtain the shard assignments, e.g., a primary and/or secondary servers assigned to the shard” [0047]) – As taught in Annamalai ¶¶0044 and 0047, a shard management server assigns primary and secondary severs for each shard (i.e., “generates an access map”) and subsequently publishes the shard assignments to a directory so that the client 125 can determine primary and secondary servers (i.e., “locations”) for each data shard in a database. Examiner considers the process of publishing shard assignments to a directory for consumption by a client as effectively “distribut[ing]” the shard assignments to the client. Claims 7-8 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Wang further in view of Annamalai and Bisson et al. (US 20190243906 A1)(cited by examiner in previous action)(hereafter referred to as Bisson). Regarding Claim 7, The same motivation to combine provided in Claim 1 is equally applicable to Claim 7. The combined teachings of Wang and Annamalai disclose the following limitations: The method of claim 6 (see Claim 6 limitation mappings), Although Annamalai ¶0044 discloses that shard assignment decisions are published to a directory service for use by the client, the combined teachings of Wang and Annamalai do not explicitly disclose the following limitations: wherein the second query comprises a location of the second data in the distributed cache pool based on the access map. However, Bisson discloses within the context of locating and retrieving client file data from a distributed file system environment that “mapping information” is transmit to a client device after the client devices requests a particular file from the distributed file system. Bisson discloses the following limitations: wherein the second query (“a block read command” [0061]) comprises a location (“blockID” [0061]) of the second data in the distributed cache pool based on the access map (Fig. 8, step 865 // “ FIG. 8 shows an example process of reading a file stored in a KV SSD of a distributed file system … To read a file stored in the data node 740, the client 710 sends a read file request 861 (openFile(fileID)) with a file ID (fileID) to the name node 720. Using the file ID, the name node 720 sends a retrieve command … to the KV SSD 730 … The name node 720 forwards mapping information 864 to the client 710. … The name node 720 forwards the mapping information 864 to the client 710. Using the block ID included in the block-datanode mapping information, the client 710 sends a block read command 865 (readBlock(blockID)) to the data node 740.” [0061]) – As detailed in Bisson, a client 710 uses the “blockID” received from mapping information as an input parameter for a read command. Wang, Annamalai, and Bisson are considered analogous to the claimed invention because they all relate to the same field of distributing and locating client file data stored in a distributed file system based on mapping information generated and distributed for each file. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wang and Annamalai with the teachings of Bisson and realize a method of distributing access map information to a user device. 
Doing so would enable a client device to use map information to specify a particular location of data in a distributed file system as part of a query for the data, reducing both memory and processor overhead on the distributed file system when servicing queries, as disclosed in Bisson ¶¶0003 // 0061: “In a traditional data storage node, key-value mappings, such as a block identifier (ID) to data content, are typically stored using an existing file system on the data storage node. This occurs because the underlying storage device does not natively support a key-value interface required by the data storage node. As a result, an additional layer of software, typically a file system, is required to present the key-value interface. The addition of the file system introduces memory and processor overheads.” [0003] // “The fundamental difference between a traditional reading operation is that the name node 720 issues a single direct KV SSD read operation to retrieve the block-datanode map, rather than searching an in-memory hash table for the file to block list and block to datanode list. In addition, the data node 740 sends a request for retrieving data directly to the KV SSD, bypassing any storage software middleware (such as a file system).” [0061] Regarding Claim 8, The same motivation to combine provided in Claim 1 is equally applicable to Claim 8. The combined teachings of Wang and Annamalai disclose the following limitations: The method of claim 1 (see Claim 1 limitation mappings above), wherein retrieving, from the distributed cache pool, the second data is based on a … mapping (Wang, “a mapping relationship between a shard and a node” [0120]) locations of data in the distributed cache pool (Wang, ¶¶0120; 0179) – As previously discussed (see Claim 1 limitation mappings above) and as detailed Wang ¶¶0120; 0179, a distribution policy is used to assign shards and replicas to nodes. As clarified in Wang ¶0120, the distribution policy corresponds to a mapping between shards and nodes of the database. The combined teachings of Wang and Annamalai are silent regarding the following limitations: a hashmap mapping locations of data in the distributed cache pool. However, Bisson discloses within the context of mapping file data to locations in a distributed file system environment that “hash maps” are traditionally used by distributed file systems to identify locations of files. Bisson discloses the following limitations: a hashmap (“a normal hash map” [0057]) mapping locations of data in the distributed cache pool (Fig. 6A // “The process of reading a file stored in the KV SSD is similar to the process of using a normal hash map or a similar data structure. The data structure can be a library that directly links to the KV SSD. For example, a client application issues a file retrieve operation to read a file using a file ID. The metadata node returns a block list of the file in the form of a blob … The block list also contains a node list where each of the blocks in the block list is stored … In this scheme, the metadata node still needs to store the mapping tables in its memory” [0057]) – In this case, examiner considers the “mapping tables” stored in the memory of a metadata node, as detailed in Bisson ¶0057, as analogous to the concept of assignment decisions for primary and secondary data as taught in Annamalai. As clarified in Bisson ¶0057, mappings tables can be “a normal hash map or a similar data structure”. 
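As an illustration of the claim 7 and claim 8 concepts discussed above (a query that carries a location resolved from an access map, and that map kept as a hash map), the following Python sketch uses hypothetical names and is not drawn from Wang, Annamalai, or Bisson.

```python
# Minimal sketch, assuming hypothetical names: an "access map" kept as a plain
# hash map (Python dict). A coordinator publishes key -> cache-node
# assignments; a client resolves the location from the map and embeds it in
# its read query, so the serving side needs no further lookup.

access_map: dict = {}  # key -> cache node hosting that key


def publish_assignments(assignments: dict) -> None:
    """Coordinator side: generate and distribute the access map (in-process here)."""
    access_map.update(assignments)


def build_read_query(key: str) -> dict:
    """Client side: resolve the location from the map and include it in the query."""
    return {"key": key, "location": access_map[key]}


# Example: keys from two different shards end up assigned to one cache node.
publish_assignments({"k1": "cache-5", "k2": "cache-5", "k3": "cache-7"})

query = build_read_query("k2")
print(query)  # {'key': 'k2', 'location': 'cache-5'}
```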
The combined teachings of Wang and Annamalai disclose a distributed database environment (Wang, Fig. 2) comprising a mapping of locations of data (Wang, “deployment policy” [0120]) used to retrieve data from the database, which is considered analogous to the Bisson distributed database environment (Distributed Data Storage System 100A, Fig. 1A) comprising a mapping of locations of data (File Mapping Table, Fig. 6A) used to retrieve data from the database. Bisson discloses a known method of storing a mapping of locations of data in a hashmap type of data structure (see limitation mappings above). It would have been obvious to someone of ordinary skill in the art, as taught by Bisson, to implement the method of storing a mapping of locations of data in a hashmap type of data structure in the distributed database environment of Wang. A person of ordinary skill in the art would have recognized that applying the known technique of storing a mapping of locations of data in a hashmap type of data structure, as taught by Bisson, to a distributed database environment would have yielded the predictable result of retrieving data from the distributed database based on mappings stored in a hashmap type of data structure. Retrieving data from the distributed database based on mappings stored in a hashmap type of data structure would have been expected to improve the scalability of the distributed database by enabling data associated with a particular hash value, such as “key-value (KV)” data described in Bisson ¶0028, to be quickly located in the distributed database. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to apply the known technique of storing a mapping of locations of data in a hashmap type of data structure, as taught by Bisson, to the distributed database environment of Wang and Annamalai. Doing so would predictably result in retrieving data from the distributed database based on mappings stored in a hashmap type of data structure. See MPEP 2143, Rationale D. Regarding Claim 17, The same motivation to combine provided in Claim 11 is equally applicable to Claim 17. The combined teachings of Wang and Annamalai disclose the following limitations: The system of claim 16 (see Claim 16 limitation mappings), Although Annamalai ¶0044 discloses that shard assignment decisions are published to a directory service for use by the client, the combined teachings of Wang and Annamalai do not explicitly disclose the following limitations: wherein the second query comprises a location of the second data in the distributed cache pool based on the access map. However, Bisson discloses within the context of locating and retrieving client file data from a distributed file system environment that “mapping information” is transmit to a client device after the client devices requests a particular file from the distributed file system. Bisson discloses the following limitations: wherein the second query (“a block read command” [0061]) comprises a location (“blockID” [0061]) of the second data in the distributed cache pool based on the access map (Fig. 8, step 865 // “ FIG. 8 shows an example process of reading a file stored in a KV SSD of a distributed file system … To read a file stored in the data node 740, the client 710 sends a read file request 861 (openFile(fileID)) with a file ID (fileID) to the name node 720. 
Using the file ID, the name node 720 sends a retrieve command … to the KV SSD 730 … The name node 720 forwards mapping information 864 to the client 710. … The name node 720 forwards the mapping information 864 to the client 710. Using the block ID included in the block-datanode mapping information, the client 710 sends a block read command 865 (readBlock(blockID)) to the data node 740.” [0061]) – As detailed in Bisson, a client 710 uses the “blockID” received from mapping information as an input parameter for a read command. Wang, Annamalai, and Bisson are considered analogous to the claimed invention because they all relate to the same field of distributing and locating client file data stored in a distributed file system based on mapping information generated and distributed for each file. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wang and Annamalai with the teachings of Bisson and realize a method of distributing access map information to a user device. Doing so would enable a client device to use map information to specify a particular location of data in a distributed file system as part of a query for the data, reducing both memory and processor overhead on the distributed file system when servicing queries, as disclosed in Bisson ¶¶0003 // 0061: “In a traditional data storage node, key-value mappings, such as a block identifier (ID) to data content, are typically stored using an existing file system on the data storage node. This occurs because the underlying storage device does not natively support a key-value interface required by the data storage node. As a result, an additional layer of software, typically a file system, is required to present the key-value interface. The addition of the file system introduces memory and processor overheads.” [0003] // “The fundamental difference between a traditional reading operation is that the name node 720 issues a single direct KV SSD read operation to retrieve the block-datanode map, rather than searching an in-memory hash table for the file to block list and block to datanode list. In addition, the data node 740 sends a request for retrieving data directly to the KV SSD, bypassing any storage software middleware (such as a file system).” [0061] Regarding Claim 18, The same motivation to combine provided in Claim 11 is equally applicable to Claim 18. The combined teachings of Wang and Annamalai disclose the following limitations: The system of claim 11 (see Claim 11 limitation mappings above), wherein to retrieve, from the distributed cache pool, the second data the instructions when executed on the data processing hardware is based on a … mapping (Wang, “a mapping relationship between a shard and a node” [0120]) locations of data in the distributed cache pool (Wang, ¶¶0120; 0179) – As previously discussed (see Claim 11 limitation mappings above) and as detailed Wang ¶¶0120; 0179, a distribution policy is used to assign shards and replicas to nodes. As clarified in Wang ¶0120, the distribution policy corresponds to a mapping between shards and nodes of the database. The combined teachings of Wang and Annamalai are silent regarding the following limitations: a hashmap mapping locations of data in the distributed cache pool. 
However, Bisson discloses within the context of mapping file data to locations in a distributed file system environment that “hash maps” are traditionally used by distributed file systems to identify locations of files. Bisson discloses the following limitations: a hashmap (“a normal hash map” [0057]) mapping locations of data in the distributed cache pool (Fig. 6A // “The process of reading a file

Prosecution Timeline

Aug 14, 2023
Application Filed
Aug 16, 2024
Non-Final Rejection — §103, §112
Oct 29, 2024
Response Filed
Jan 24, 2025
Final Rejection — §103, §112
Mar 24, 2025
Interview Requested
Mar 31, 2025
Applicant Interview (Telephonic)
Mar 31, 2025
Examiner Interview Summary
Apr 16, 2025
Request for Continued Examination
Apr 20, 2025
Response after Non-Final Action
May 01, 2025
Non-Final Rejection — §103, §112
Jul 11, 2025
Interview Requested
Jul 17, 2025
Examiner Interview Summary
Jul 17, 2025
Applicant Interview (Telephonic)
Jul 30, 2025
Response Filed
Nov 05, 2025
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596500
BLOOM FILTER INTEGRATION INTO A CONTROLLER
2y 5m to grant Granted Apr 07, 2026
Patent 12572469
INDEPENDENT FLASH TRANSLATION LAYER TABLES FOR MEMORY
2y 5m to grant Granted Mar 10, 2026
Patent 12572301
PEER-TO-PEER FILE SHARING USING CONSISTENT HASHING FOR DISTRIBUTING DATA AMONG STORAGE NODES
2y 5m to grant Granted Mar 10, 2026
Patent 12561066
DATA STORAGE DURING POWER STATE TRANSITION OF A MEMORY SYSTEM
2y 5m to grant Granted Feb 24, 2026
Patent 12541451
SOLVING SUBMISSION QUEUE ENTRY OVERFLOW WITH AN ADDITIONAL OUT-OF-ORDER SUBMISSION QUEUE ENTRY
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
79%
Grant Probability
99%
With Interview (+55.6%)
2y 1m
Median Time to Grant
High
PTA Risk
Based on 33 resolved cases by this examiner. Grant probability derived from career allow rate.
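A one-line check, using only the figures shown above, of how the grant probability follows from the career allow rate:

```python
# The 79% grant probability above is the examiner's career allow rate
# (26 granted of 33 resolved), rounded.
granted, resolved = 26, 33
print(f"{granted / resolved:.1%}")  # 78.8%, shown above as 79%
```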
