Prosecution Insights
Last updated: April 19, 2026
Application No. 18/749,115

DECOUPLED DATABASE SEARCH SYSTEM ARCHITECTURE

Non-Final OA §103
Filed
Jun 20, 2024
Examiner
RICHARDSON, JAMES E
Art Unit
2169
Tech Center
2100 — Computer Architecture & Software
Assignee
MongoDB, Inc.
OA Round
3 (Non-Final)
81%
Grant Probability
Favorable
3-4
OA Rounds
3y 1m
To Grant
99%
With Interview

Examiner Intelligence

Grants 81% — above average
81%
Career Allow Rate
410 granted / 506 resolved
+26.0% vs TC avg
+31.6%
Interview Lift
resolved cases with vs. without an interview
Typical timeline
3y 1m
Avg Prosecution
14 currently pending
Career history
520
Total Applications
across all art units

Statute-Specific Performance

§101
17.5%
-22.5% vs TC avg
§103
44.8%
+4.8% vs TC avg
§102
14.3%
-25.7% vs TC avg
§112
14.3%
-25.7% vs TC avg
Black line = Tech Center average estimate • Based on career data from 506 resolved cases
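The headline numbers above are simple ratios over the examiner's resolved cases. A minimal sketch of how they could be reproduced from the raw counts follows; the function names and the 55.0% Tech Center baseline are assumptions inferred from the displayed deltas, not an actual API:

```python
# Hypothetical helpers illustrating how the dashboard's examiner metrics
# could be derived from raw counts; names and the TC baseline are assumed.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def delta_vs_tc(rate: float, tc_avg: float) -> float:
    """Signed difference against the Tech Center average estimate."""
    return rate - tc_avg

career = allow_rate(410, 506)                 # 410 granted / 506 resolved
print(f"{career:.1f}% career allow rate")     # 81.0% career allow rate
print(f"{delta_vs_tc(career, 55.0):+.1f}% vs TC avg")  # +26.0% vs TC avg
```

The 55.0% baseline is back-solved from the page's "+26.0% vs TC avg" note rather than taken from USPTO data.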

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 26 February 2026 has been entered. Accordingly, claims 1-20 are pending in this application. Claims 1, 9, 13, 14, 16, and 20 are currently amended; claims 2, 4, 6-8, 12, and 19 are as previously presented; claims 3, 5, 10, 11, 15, 17, and 18 are original.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. 
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hu et al. (previously presented)(US 2021/0081378 A1), hereinafter Hu [which incorporated by reference Hu et al. U.S. patent application Ser. No. 15/268,108 (previously cited)(US 2017/013094 A1), hereinafter Hu’3094], in view of Goel et al. (previously presented)(US 2016/0055248 A1), hereinafter Goel.

As to claim 1, Hu discloses a distributed database system (Fig. 2) comprising: a datastore storing data of the distributed database system (Fig. 2, #112, 152; [0015], [0030], I.e. database shards 112 and 152, and analogously shards #260A-C in Fig. 2 of Hu’3094); and at least one processor configured to (Fig. 6; [0030]; (analogously, Hu’3094, [0029]-[0030]), Each database shard has its own physical resources, such as processors, memory, and/or storage device. [0106], [0107] (analogously, Hu’3094, [0187], [0188]), More broadly, each node in a multi-node database system has a processor, e.g. to obviously include Shard Catalog Server 206, and analogously Shard Director 220, which would obviously have its own hardware since shards are all share-nothing, and because it is only communicatively coupled to the other components.): receive, from a client device, a query requesting data from the datastore (Figs. 2 and 4; [0029], [0077] Lines 3-5; [0108], Queries for database data from a client application 208 are received by Shard Catalog Server 206. Hu’3094, Figs. 2, 15A-15B; [0044], [0142]; where a client query is received by a sharding coordinator for data in the shards.); transmit the query to a database search system located separate from the datastore and the at least one processor for execution (Fig. 2; [0034], [0066], [0074]-[0076], [0080], [0084], I.e. 
transmitting the query to Shard Catalog 204 of Shard Catalog Database 212 where it can be “run on” ([0074]), which as previously indicated with [0030] in addition to [0034], maintains separate physical resources (e.g. processor and storage). Hu’3094, Figs. 2 and 15A-15B; [0043], [0044], I.e. transmitting the query to Shard Catalog 230 executed to determine what shards contain the data being requested.); and after transmitting the query to the database search system: receive, from the database search system, information generated by the database search system from execution of the query (Fig. 4; [0066], [0079]; Hu’3094, Figs. 15A-15B; [0043], [0044], [0091]; Using data from the query, information is returned identifying where to look for the data in the data store.), retrieve, from the datastore, using information received from the database search system, the data identified by the database search system from execution of the query (Fig. 4; [0066], [0079]; Hu’3094, Figs. 15A-15B; [0091], [0144], The rewritten query using information from the search system is routed to the appropriate shard, e.g. as also identified from information returned from the catalog server, and executed to return appropriate data corresponding to the query.); and transmit, to the client device, the data retrieved from the datastore (Fig. 2; [0029]; Hu’3094, Figs. 15A-15B #1510 and 1530; [0147], [0152], The sharding coordinator disclosed via Hu’3094, and, as apparent to one of ordinary skill in the art before the effective filing date of the claimed invention, analogous to the Shard Catalog Server, aggregates results from the shards and returns them to the client in response to the query.). 
Hu does not explicitly disclose receive, from the database search system, information generated by the database search system from execution of the query using a search index stored by the database search system, the information identifying data stored in the datastore that matches one or more criteria specified by the query.

However, Goel discloses transmit the query to a database search system located separate from the datastore and the at least one processor for execution (Fig. 3); and after transmitting the query to the database search system: receive, from the database search system, information generated by the database search system from execution of the query using a search index stored by the database search system, the information identifying data stored in the datastore that matches one or more criteria specified by the query (Figs. 3, 4, 6; [0065], [0066], A query is received by a database search system separate from a datastore of documents 340. Appropriate index(es) are determined to execute the query on so as to determine what documents match the query and return where the matching documents can be retrieved from.).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Hu with the teachings of Goel by modifying Hu such that the database search system comprising the shard catalog used to determine which shards need to be queried is implemented similarly to the data search system of Goel, such that it implements index servers to execute the query on and determine what shards need to be accessed to return the desired data to the user’s query of Hu. Said artisan would have been aware of the advantages of using indexes to efficiently look up the locations of desired data, as is well-known in the art, and would have been motivated to make the changes in order to utilize indexes to more efficiently look up the shards relevant to the received query. 
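The decoupled architecture the rejection maps onto Hu and Goel (a coordinator that forwards the query to a separate, index-holding search system, then fetches the identified records from the datastore and returns them to the client) can be sketched as follows. This is a hypothetical in-memory illustration; the class and function names are not from the application or the cited references:

```python
# Minimal sketch of the claimed decoupled flow, using in-memory stand-ins
# for the datastore and the separate search system. All names are illustrative.

class SearchSystem:
    """Holds the search index separately from the datastore (the Goel-style role)."""
    def __init__(self, index):
        self.index = index  # term -> list of matching record ids

    def execute(self, query):
        # Return identifiers of matching data, not the data itself.
        return self.index.get(query, [])

class Datastore:
    """Stores the actual data of the distributed database system."""
    def __init__(self, records):
        self.records = records  # id -> record

    def retrieve(self, ids):
        return [self.records[i] for i in ids]

def handle_query(query, search_system, datastore):
    # 1. Transmit the query to the separate database search system.
    ids = search_system.execute(query)
    # 2. Retrieve the identified data from the datastore using that information.
    data = datastore.retrieve(ids)
    # 3. Return the retrieved data to the client.
    return data

store = Datastore({1: "doc-1", 2: "doc-2", 3: "doc-3"})
search = SearchSystem({"alpha": [1, 3]})
print(handle_query("alpha", search, store))  # ['doc-1', 'doc-3']
```

The point of the split, as argued in the rejection, is that only identifiers cross the boundary between the search system and the coordinator; the bulk data stays in the datastore until the final retrieval step.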
As to claim 9, Hu discloses a method for processing queries by a distributed database system, the distributed database system comprising a datastore storing data of the distributed database system (Fig. 2, #112, 152; [0015], [0030], I.e. database shards 112 and 152; and analogously shards #260A-C in Fig. 2 of Hu’3094), the method comprising: using at least one processor to (Fig. 6; [0030]; (analogously, Hu’3094, [0029]-[0030]), Each database shard has its own physical resources, such as processors, memory, and/or storage device. [0106], [0107] (analogously, Hu’3094, [0187], [0188]), More broadly, each node in a multi-node database system has a processor, e.g. to obviously include Shard Catalog Server 206, and analogously Shard Director 220, which would obviously have its own hardware since shards are all share-nothing, and because it is only communicatively coupled to the other components.) perform: receiving, from a client device, a query requesting data from the datastore (Figs. 2 and 4; [0029], [0077] Lines 3-5; [0108], Queries for database data from a client application 208 are received by Shard Catalog Server 206. Hu’3094, Figs. 2, 15A-15B; [0044], [0142]; where a client query is received by a sharding coordinator for data in the shards.); transmitting the query to a database search system located separate from the datastore and the at least one processor for execution (Fig. 2; [0034], [0066], [0074]-[0076], [0080], [0084], I.e. transmitting the query to Shard Catalog 204 of Shard Catalog Database 212 where it can be “run on” ([0074]), which as previously indicated with [0030] in addition to [0034], maintains separate physical resources (e.g. processor and storage). Hu’3094, Figs. 2 and 15A-15B; [0043], [0044], I.e. 
transmitting the query to Shard Catalog 230 executed to determine what shards contain the data being requested.); after transmitting the query to the database search system: receiving, from the database search system, information generated by the database search system from execution of the query (Fig. 4; [0066], [0079]; Hu’3094, Figs. 15A-15B; [0043], [0044], [0091]; Using data from the query, information is returned identifying where to look for the data in the data store.), retrieving, from the datastore, using information received from the database search system, the data identified by the database search system from execution of the query (Fig. 4; [0066], [0079]; Hu’3094, Figs. 15A-15B; [0091], [0144], The rewritten query using information from the search system is routed to the appropriate shard, e.g. as also identified from information returned from the catalog server, and executed to return appropriate data corresponding to the query.); and transmitting, to the client device, the data retrieved from the datastore (Fig. 2; [0029]; Hu’3094, Figs. 15A-15B #1510 and 1530; [0147], [0152], The sharding coordinator disclosed via Hu’3094, and, as apparent to one of ordinary skill in the art before the effective filing date of the claimed invention, analogous to the Shard Catalog Server, aggregates results from the shards and returns them to the client in response to the query.).

Hu does not explicitly disclose receive, from the database search system, information generated by the database search system from execution of the query using a search index stored by the database search system, the information identifying data stored in the datastore that matches one or more criteria specified by the query.

However, Goel discloses transmit the query to a database search system located separate from the datastore and the at least one processor for execution (Fig. 
3); and after transmitting the query to the database search system: receive, from the database search system, information generated by the database search system from execution of the query using a search index stored by the database search system, the information identifying data stored in the datastore that matches one or more criteria specified by the query (Figs. 3, 4, 6; [0065], [0066], A query is received by a database search system separate from a datastore of documents 340. Appropriate index(es) are determined to execute the query on so as to determine what documents match the query and return where the matching documents can be retrieved from.).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Hu with the teachings of Goel by modifying Hu such that the database search system comprising the shard catalog used to determine which shards need to be queried is implemented similarly to the data search system of Goel, such that it implements index servers to execute the query on and determine what shards need to be accessed to return the desired data to the user’s query of Hu. Said artisan would have been aware of the advantages of using indexes to efficiently look up the locations of desired data, as is well-known in the art, and would have been motivated to make the changes in order to utilize indexes to more efficiently look up the shards relevant to the received query.

As to claim 16, Hu discloses a non-transitory computer-readable storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform a method for processing queries by a distributed database system, the distributed database system (Fig. 6; [0122]-[0124]) comprising a datastore storing data of the distributed database system, the method comprising: receiving, from a client device, a query requesting data from the datastore (Figs. 
2 and 4; [0029], [0077] Lines 3-5; [0108], Queries for database data from a client application 208 are received by Shard Catalog Server 206. Hu’3094, Figs. 2, 15A-15B; [0044], [0142]; where a client query is received by a sharding coordinator for data in the shards.); transmitting the query to a database search system located separate from the datastore and the at least one processor for execution (Fig. 2; [0034], [0066], [0074]-[0076], [0080], [0084], I.e. transmitting the query to Shard Catalog 204 of Shard Catalog Database 212 where it can be “run on” ([0074]), which as previously indicated with [0030] in addition to [0034], maintains separate physical resources (e.g. processor and storage). Hu’3094, Figs. 2 and 15A-15B; [0043], [0044], I.e. transmitting the query to Shard Catalog 230 executed to determine what shards contain the data being requested.); after transmitting the query to the database search system: receiving, from the database search system, information generated by the database search system from execution of the query (Fig. 4; [0066], [0079]; Hu’3094, Figs. 15A-15B; [0043], [0044], [0091]; Using data from the query, information is returned identifying where to look for the data in the data store.); retrieving, from the datastore, using information received from the database search system, the data identified by the database search system from execution of the query (Fig. 4; [0066], [0079]; Hu’3094, Figs. 15A-15B; [0091], [0144], The rewritten query using information from the search system is routed to the appropriate shard, e.g. as also identified from information returned from the catalog server, and executed to return appropriate data corresponding to the query.); and transmitting, to the client device, the data retrieved from the datastore (Fig. 2; [0029]; Hu’3094, Figs. 
15A-15B #1510 and 1530; [0147], [0152], The sharding coordinator disclosed via Hu’3094, and, as apparent to one of ordinary skill in the art before the effective filing date of the claimed invention, analogous to the Shard Catalog Server, aggregates results from the shards and returns them to the client in response to the query.).

Hu does not explicitly disclose receive, from the database search system, information generated by the database search system from execution of the query using a search index stored by the database search system, the information identifying data stored in the datastore that matches one or more criteria specified by the query.

However, Goel discloses transmit the query to a database search system located separate from the datastore and the at least one processor for execution (Fig. 3); and after transmitting the query to the database search system: receive, from the database search system, information generated by the database search system from execution of the query using a search index stored by the database search system, the information identifying data stored in the datastore that matches one or more criteria specified by the query (Figs. 3, 4, 6; [0065], [0066], A query is received by a database search system separate from a datastore of documents 340. 
Appropriate index(es) are determined to execute the query on so as to determine what documents match the query and return where the matching documents can be retrieved from.).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Hu with the teachings of Goel by modifying Hu such that the database search system comprising the shard catalog used to determine which shards need to be queried is implemented similarly to the data search system of Goel, such that it implements index servers to execute the query on and determine what shards need to be accessed to return the desired data to the user’s query of Hu. Said artisan would have been aware of the advantages of using indexes to efficiently look up the locations of desired data, as is well-known in the art, and would have been motivated to make the changes in order to utilize indexes to more efficiently look up the shards relevant to the received query.

As to claim 2, the claim is rejected for the same reasons as claim 1 above. In addition, Hu discloses the database search system is configured to execute a plurality of query management nodes (Goel, Figs. 3-4, #420; [0124], As previously modified, the database search system comprising the catalog server is modified to include the index based database search system of Goel which includes at least one routing server 420 used to route the queries to one or more index servers, i.e. 
query management nodes, in the database search system.); and the data stored in the datastore comprises a plurality of data partitions each mapped to a respective set of one or more of the plurality of query management nodes (Goel, [0116], [0128], Mapping is used with a determined namespace of where data is stored, analogous to a partition, to determine query management nodes to search.), wherein the respective set of one or more query management nodes is configured to index data of the data partition (Goel, Figs. 4-5; [0090], [0137]).

As to claim 3, the claim is rejected for the same reasons as claim 2 above. In addition, Hu, as previously modified with Goel, discloses wherein the plurality of data partitions comprises a first data partition mapped to a set of multiple query management nodes (Goel, [0060], [0126], e.g. having a shard replicated across multiple servers.).

As to claim 4, the claim is rejected for the same reasons as claim 1 above. In addition, Hu, as previously modified with Goel, discloses wherein: the database search system is configured to execute a plurality of query management nodes (Goel, Figs. 3-4, #420; [0124], As previously modified, the database search system comprising the catalog server is modified to include the index based database search system of Goel which includes at least one routing server 420 used to route the queries to one or more index servers, i.e. query management nodes, in the database search system.); and the data stored in the datastore comprises a plurality of sets of replicated datasets each mapped to a respective one of the plurality of query management nodes of the database search system (Goel, Figs. 4 and 6; [0060], [0124]-[0126], [0177], Each shard 328 is mapped to a respective index server 430. Each index server can be implemented as multiple servers with replicated shards, each mapped to a respective collective index server.).

As to claim 5, the claim is rejected for the same reasons as claim 1 above. 
In addition, Hu, as previously modified with Goel, discloses wherein the distributed database system comprises the database search system (Hu, Fig. 2; Goel, Figs. 3-4; As previously modified, the database search system comprising the catalog server is modified to include the index based database search system, and is part of the distributed database system which receives and responds to user queries using the database shards.).

As to claim 6, the claim is rejected for the same reasons as claim 1 above. In addition, Hu, as previously modified with Goel, discloses the at least one processor is further configured to execute a query router configured to route the query to one or more of a plurality of query management nodes of the database search system (Goel, Figs. 3-4, #420; [0124], As previously modified, the database search system comprising the catalog server is modified to include the index based database search system of Goel which includes at least one routing server 420 used to route the queries to one or more index servers, i.e. query management nodes, in the database search system.).

As to claim 7, the claim is rejected for the same reasons as claim 6 above. In addition, Hu, as previously modified with Goel, discloses wherein the query router is configured to route the query to one or more of the plurality of query management nodes by performing: selecting the one or more query management nodes based on data targeted by the query (Goel, Figs. 3-4; [0120], [0128], The routing server routes the query to query management nodes by consulting a mapping and making a determination based on the features of the query.) and transmitting the query to the one or more query management nodes (Goel, Fig. 4; [0129]).

As to claim 8, the claim is rejected for the same reasons as claim 7 above. 
In addition, Hu, as previously modified with Goel, discloses wherein selecting the one or more query management nodes based on data targeted by the query comprises: determining that data targeted by the query is stored in a first data partition of a plurality of data partitions stored in the datastore, the first data partition mapped to a first set of one or more query management nodes (Goel, [0116], [0128], Mapping is used with a determined namespace of where data is stored, analogous to a partition, to determine query management nodes to search.); and transmitting the query to the first set of one or more query management nodes for execution (Goel, Fig. 4; [0129]).

As to claims 10 and 17, the claims are rejected for the same reasons as claims 9 and 16 above. In addition, Hu, as previously modified with Goel, discloses wherein the database search system comprises a plurality of query management nodes (Goel, Figs. 3-4, #420; [0124], As previously modified, the database search system comprising the catalog server is modified to include the index based database search system of Goel which includes at least one routing server 420 used to route the queries to one or more index servers, i.e. query management nodes, in the database search system.) and the method further comprises: routing, by the database search system, the query to one or more of the plurality of query management nodes for execution (Goel, Figs. 3-4; [0120], [0128], The routing server routes the query to query management nodes by consulting a mapping and making a determination based on the features of the query.).

As to claims 11 and 18, the claims are rejected for the same reasons as claims 10 and 17 above. In addition, Hu, as previously modified with Goel, discloses wherein routing the query to one or more of the plurality of query management nodes for execution comprises: selecting the one or more query management nodes based on data targeted by the query (Goel, Figs. 
3-4; [0120], [0128], The routing server routes the query to query management nodes by consulting a mapping and making a determination based on the features of the query.); and transmitting the query to the one or more query management nodes (Goel, Fig. 4; [0129]).

As to claims 12 and 19, the claims are rejected for the same reasons as claims 11 and 18 above. In addition, Hu, as previously modified with Goel, discloses wherein the datastore comprises a plurality of data partitions each mapped to a respective set of one or more of the plurality of query management nodes (Goel, [0116], [0128], Mapping is used with a determined namespace of where data is stored, analogous to a partition, to determine query management nodes to search.) and selecting the one or more query management nodes based on data targeted by the query comprises: determining that data targeted by the query is stored in a first data partition of the plurality of data partitions, the first data partition mapped to a first set of one or more query management nodes (Goel, [0116], [0128], Mapping is used with a determined namespace of where data is stored, analogous to a partition, to determine query management nodes to search.); and transmitting the query to the first set of one or more query management nodes for execution (Goel, Fig. 4; [0129]).

As to claims 13 and 20, the claims are rejected for the same reasons as claims 10 and 17 above. In addition, Hu, as previously modified with Goel, discloses replicating, by the database search system, updates to the search index to each of the plurality of query management nodes (Goel, [0126], [0181], When a shard contains replica servers, an update to a search index therein will be replicated to each of the plurality of servers.). 
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to further combine the teachings of Hu with the teachings of Goel by modifying Hu such that the DBMSs executing queries against the respective database shards mapped to them are modified to include indexes of the shards’ data, as is done in the index servers of Goel, which each maintain index shards indexing documents of namespaces for the respective shards (Figs. 4-5; [0090], [0137]), and to then notify those indexes of changes to the underlying indexed data as is done by Goel. Said artisan would have been motivated to do so in order to enable the DBMSs for the database shards of Hu to more efficiently respond to queries routed thereto by utilizing indexes of the data therein to more quickly locate data, and to further update said indexes to accurately reflect changes in data in the system so that queries issued thereto return accurate data to a user.

As to claim 14, the claim is rejected for the same reasons as claim 12 above. In addition, Hu, as previously modified with Goel, discloses wherein receiving, from the database search system, the information generated by the database search system from execution of the query comprises: receiving, from the first set of one or more query management nodes, an identification of one or more data objects stored in the first data partition (Goel, [0065], e.g. answers with a document identifier and location.).

As to claim 15, the claim is rejected for the same reasons as claim 9 above. In addition, Hu, as previously modified with Goel, discloses receiving, from the database search system, an indication of an update to data in the datastore (Goel, [0092], [0095], E.g. notification of a stored document being modified being received by the indexer and thus the respective index.). 
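The partition-to-node mapping and index-update replication discussed for claims 6-8 and 13/20 can be illustrated with a small sketch. All names here are hypothetical stand-ins, not structures taken from Hu or Goel:

```python
# Hypothetical sketch: a router selects query management nodes by consulting
# a partition-to-node mapping, and index updates are replicated to every
# node mapped to the affected partition.

from collections import defaultdict

class QueryManagementNode:
    def __init__(self):
        self.index = defaultdict(list)  # term -> record ids in the mapped partition

    def apply_update(self, term, record_id):
        # An index update replicated to this node.
        self.index[term].append(record_id)

    def execute(self, query):
        return self.index.get(query, [])

class QueryRouter:
    def __init__(self, partition_map):
        # partition name -> list of query management nodes indexing it
        self.partition_map = partition_map

    def route(self, query, partition):
        # Select nodes based on the data targeted by the query,
        # then transmit the query to that set for execution.
        nodes = self.partition_map[partition]
        return [node.execute(query) for node in nodes]

def replicate_update(nodes, term, record_id):
    # Replicate a search-index update to each mapped node (claims 13/20).
    for node in nodes:
        node.apply_update(term, record_id)

a, b = QueryManagementNode(), QueryManagementNode()
router = QueryRouter({"partition-1": [a, b]})
replicate_update([a, b], "beta", 42)
print(router.route("beta", "partition-1"))  # [[42], [42]]
```

Keeping the replicas in step is what lets any node in the mapped set answer for its partition, which is the efficiency rationale the rejection attributes to the combination.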
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to further combine the teachings of Hu with the teachings of Goel by modifying Hu such that the DBMSs executing queries against the respective database shards mapped to them are modified to include indexes of the shards’ data, as is done in the index servers of Goel, which each maintain index shards indexing documents of namespaces for the respective shards (Figs. 4-5; [0090], [0137]), and to then notify those indexes of changes to the underlying indexed data as is done by Goel. Said artisan would have been motivated to do so in order to enable the DBMSs for the database shards of Hu to more efficiently respond to queries routed thereto by utilizing indexes of the data therein to more quickly locate data, and to further update said indexes to accurately reflect changes in data in the system so that queries issued thereto return accurate data to a user.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Wieder et al. (US 6,490,589 B1) discloses transmitting a client query to one or more index servers, the index servers executing the query to identify and return the location of data matching the query in one or more data stores separate from the index servers (Fig. 6).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES E RICHARDSON whose telephone number is (571) 270-1917. The examiner can normally be reached Mon-Fri 9:00-5:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sherief Badawi, can be reached at (571) 272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/James E Richardson/
Primary Examiner, Art Unit 2169

Prosecution Timeline

Jun 20, 2024
Application Filed
Aug 23, 2025
Non-Final Rejection — §103
Nov 20, 2025
Examiner Interview (Telephonic)
Nov 20, 2025
Examiner Interview Summary
Nov 25, 2025
Response Filed
Dec 10, 2025
Final Rejection — §103
Feb 11, 2026
Applicant Interview (Telephonic)
Feb 11, 2026
Examiner Interview Summary
Feb 26, 2026
Request for Continued Examination
Mar 09, 2026
Response after Non-Final Action
Mar 17, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585638
QUERY EXECUTION USING A DATA PROCESSING SCHEME OF A SEPARATE DATA PROCESSING SYSTEM
2y 5m to grant • Granted Mar 24, 2026
Patent 12579112
LOCATION DATA PROCESSING SYSTEM
2y 5m to grant • Granted Mar 17, 2026
Patent 12572273
SYSTEM AND METHOD FOR KEY-VALUE SHARD CREATION AND MANAGEMENT IN A KEY-VALUE STORE
2y 5m to grant • Granted Mar 10, 2026
Patent 12572534
SELECTION QUERY LANGUAGE METHODS AND SYSTEMS
2y 5m to grant • Granted Mar 10, 2026
Patent 12566756
EFFICIENT EVENT-TYPE-BASED DISTRIBUTED LOG-ANALYTICS SYSTEM
2y 5m to grant • Granted Mar 03, 2026
Study what changed to get these applications past this examiner. Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
81%
Grant Probability
99%
With Interview (+31.6%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 506 resolved cases by this examiner. Grant probability derived from career allow rate.
