Prosecution Insights
Last updated: April 19, 2026
Application No. 17/334,570

ENSURING HIGH AVAILABILITY OF REPLICATED DATABASE MANAGEMENT SYSTEMS DURING UPGRADES

Final Rejection (§103, §112)
Filed
May 28, 2021
Examiner
BAKER, IRENE H
Art Unit
2152
Tech Center
2100 — Computer Architecture & Software
Assignee
Salesforce.com, Inc.
OA Round
4 (Final)
Grant Probability: 54% (Moderate)
Expected OA Rounds: 5-6
Expected Time to Grant: 3y 0m
Grant Probability with Interview: 81%

Examiner Intelligence

Career Allow Rate: 54% of resolved cases (129 granted / 238 resolved; -0.8% vs TC avg)
Interview Lift: +26.7% for resolved cases with interview (strong)
Typical Timeline: 3y 0m average prosecution; 32 applications currently pending
Career History: 270 total applications across all art units

Statute-Specific Performance

§101: 26.3% (-13.7% vs TC avg)
§103: 42.0% (+2.0% vs TC avg)
§102: 4.6% (-35.4% vs TC avg)
§112: 21.4% (-18.6% vs TC avg)
Comparisons are against estimated Tech Center averages • Based on career data from 238 resolved cases

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

An Information Disclosure Statement (IDS) has not been submitted as of the mailing of the last Office Action dated 18 July 2025. Applicant is reminded of the continuing obligation under 37 CFR 1.56 to timely apprise the Office of any information which is material to patentability of the claims under consideration in this application.

Introductory Remarks

In response to communications filed on 20 October 2025, claims 1-5, 7-8, 10-15, and 17-20 are amended per Applicant's request. No claims were cancelled, withdrawn, or added. Therefore, claims 1-20 are presently pending in the application, of which claims 1, 10, and 17 are presented in independent form.

The previously raised 112 rejections of the pending claims are withdrawn in view of the amendments to the claims; a new ground(s) of rejection has been issued. The previously raised 103 rejection of the pending claims is likewise withdrawn in view of the amendments to the claims, and a new ground(s) of rejection has been issued.

Response to Arguments

Applicant's arguments filed 20 October 2025 with respect to the 112 rejections of the pending claims (see Remarks, p. 9) have been fully considered and are persuasive. However, a new ground(s) of rejection has been raised in view of the amendments to the claims.

Applicant's arguments filed 20 October 2025 with respect to the rejection of the claims under 35 U.S.C. 103 (see Remarks, pp. 10-12) have been fully considered but are moot, as they are not directed to the new combination of references being used in the current rejection.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 
112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Independent claims 1, 10, and 17 recite “the spare node serves as a standby node but does not process any read or write requests”. This language appears to indicate that the claimed standby node may possess the ability to process requests (usually “read” requests) but is configured not to do so.1 However, the Specification, [0016], states that the “spare node that does not receive requests but acts as standby for high availability”. This appears to indicate that “standby” denotes a state in which the spare node receives no read or write requests at all, not a role in which the node may still process read requests. For purposes of examination, the interpretation adopted is that a “standby node” refers to a node in a standby (i.e., idle) state, the standby state meaning that the node does not process read or write requests, i.e., the limitation is read as “serves as a standby node that does not process any read or write requests”.

The dependent claims are rejected at least by virtue of their dependency on their respective independent claims, and for failing to cure the deficiencies of their respective independent claims. 
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 6-12, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Horowitz et al. (“Horowitz”) (US 2017/0286516 A1, incorporating by reference Horowitz et al. (“IBR-Horowitz”) (Ser. No. 14/969,537, published as US 2017/0169059 A1) and Horowitz et al. (“IBR-Horowitz-290”) (Ser. No. 15/605,141, published as US 2017/0344290 A1) at [0001]), in view of Selvaraj et al. (“Selvaraj”) (US 2014/0376362 A1), in further view of Mankad et al. (“Mankad”) (US 2022/0207053 A1). 
Regarding claim 1: Horowitz teaches A computer implemented method for upgrading database management systems, the method comprising: receiving a request to upgrade a replicated database management system (IBR-Horowitz, [0002-0003], where an end user inputs a goal state, such as managing upgrades to the database software, resulting in the computer automatically generating an execution plan to upgrade the database nodes from the current state to a goal state (IBR-Horowitz, [0011])) comprising a first database node configured as a master node, a second database node configured as a read-replica node, and a third database node configured as the spare node, wherein the master node processes both read and write requests, the read-replica node processes read requests and does not receive write requests (Horowitz, [0099], where the primary node may handle commands that change the data stored in the database and the secondary nodes may replicate the data in the primary node over time (i.e., “read-replica node”) and process read requests. 
See IBR-Horowitz-290, [0078], where read operations may be permitted at any node (including primary node 302 or secondary nodes 308, 310) while write operations are limited to primary nodes in response to requests from clients) … Horowitz does not appear to explicitly teach [a second database node configured as a read-replica node and] the spare node serving as a standby for failover but does not process any read or write requests; upgrading the third database node configured as the spare node; subsequent to upgrading the third database node, configuring the third database node as the read-replica node and the second database node as the spare node to permit upgrading of the second database node; subsequent to configuring the second database node as the spare node, upgrading the second database node; subsequent to upgrading the second database node, configuring the first database node as the spare node to permit upgrading of the first database node, and one or both of the second and third database nodes so that one of the second and third database nodes is configured as the master node and the other one of the second and third database nodes is configured as the read-replica node; and subsequent to configuring the first database node as the spare node, upgrading the first database node. Mankad teaches [a second database node configured as a read-replica node,] the spare node serving as a standby for failover but does not process any read or write requests (Mankad, [0072] and [0075], where at least three instances of the administration database may be deployed, e.g., a first instance is designated as a primary database, while the second instance 320 and third instance 325 may be designated as a secondary or standby administration database, where secondary copies may perform read balancing as well. 
See, e.g., Mankad, [0130], where a proxy is set up to forward write operations to the leader (i.e., primary) and read operations to the followers (i.e., secondaries) for read balancing, where the follower database server instances do not simply remain on standby waiting to assume the role of a leader (i.e., “the spare node serving as a standby for failover”), but also actively participate in servicing user requests. Although Mankad does not appear to explicitly state that there is a mix of both read-only (e.g., follower) nodes and standby nodes in the additional instances of the administration database, it would have been obvious to one of ordinary skill in the art to have modified Mankad to have at least one of the standby instances (seen in, e.g., Mankad, [FIG. 3]) be a read-only (secondary) administration database (while the third instance is standby), with the motivation of providing the advantages provided by having both types of nodes (i.e., read-only and standby), i.e., providing read balancing (i.e., the advantages of having read-only nodes), in addition to having a dedicated passive standby node, which has the advantages of being simpler to implement, and enables (potentially) faster failover (as there are fewer changes made to the routing of read-only requests)); upgrading the third database node configured as the spare node; … subsequent to configuring the second database node as the spare node, upgrading the second database node; subsequent to upgrading the second database node, configuring the first database node as the spare node to permit upgrading of the first database node and one or both of the second and third database nodes so that one of the second and third database nodes is configured as the master node …; and subsequent to configuring the first database node as the spare node, upgrading the first database node (IBR-Horowitz, [0075-0094], where the system upgrades secondary nodes, one at a time, waits for the member to recover to secondary 
state, steps down the primary node to a secondary node role, waits for another member to be elected to primary, and then upgrades the previous primary. See also, e.g., Horowitz, [0259-0262], where new resources with upgrades and patches can be instantiated, enabling zero downtime for upgrades and/or maintenance. Recall from Horowitz, [0099] and [0110] above that there are multiple secondary nodes, and not all storage engines provide read commit and/or write commit functionality. Therefore, one of ordinary skill in the art would have found it suggested by Horowitz’s disclosure to, more specifically, step down the primary node to a secondary node role that has neither read nor write permissions (i.e., the claimed “spare node”) with the motivation of ensuring that during upgrade, the primary node cannot service any requests including read requests (thereby avoiding disruptions to the primary node’s upgrade as well as avoiding delays in servicing read requests). See Mankad, [0072], [0075], and [0130] above with respect to the third instance being a “standby” node). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Horowitz and Mankad (hereinafter “Mankad as modified”) by having at least one of Horowitz’s secondary nodes be a standby node with the motivation of reducing takeover time by having a dedicated standby node2, thereby having a high-availability system. Furthermore, it would have been obvious to one of ordinary skill in the art to have modified Horowitz (i.e., having an instance being a spare node, as in Mankad) such that the standby node is upgraded first with the motivation of ensuring that read requests being serviced by the read-only (secondary) nodes are not interrupted during the upgrade. 
Horowitz as modified does not appear to explicitly teach subsequent to upgrading the third database node, configuring the third database node as the read-replica node and the second database node as the spare node to permit upgrading of the second database node; [and configuring] the other one of the second and third database nodes is configured as the read-replica node. Selvaraj teaches subsequent to upgrading the third database node, configuring the third database node as the read-replica node and the second database node as the spare node to permit upgrading of the second database node (Selvaraj, [0086], where the system identifies at least one database access service running in a first configuration on a first one of the nodes to be upgraded, selects a fail-over node, the fail-over node configured to be capable of running the at least one database access service, migrates the at least one database access service to the selected fail-over node, shuts down the at least one database access service running on the first one of the nodes to be upgraded (i.e., “the second node as the spare node to permit upgrading of the second node”), and upgrades the first one of the nodes to be upgraded. Note that database “access” includes read (and write) (see, e.g., Selvaraj, [0034]). Therefore, in this manner, Selvaraj’s shutting down of the node’s database access service involves rendering it incapable of performing read (or write) operations, which is the characteristic of the claimed “spare node” (thus disclosing “configuring…the second node as the spare node to permit upgrading of the second node”). See Horowitz, [0099] and IBR-Horowitz-290, [0078], above where the secondary nodes perform only read requests. 
Therefore, if the secondary node only enabled read-only as a database access service, this would result in only the read-only functions being transferred to the fail-over node (i.e., resulting in “configuring the third node as the read-replica node”, as claimed)); [and] [configuring] the other one of the second and third database nodes is configured as the read-replica node (see Selvaraj, [0086] and Horowitz, [0099] in the section above with regards to the third node being configured as the read-replica node. Note although the prior art does not appear to explicitly state that this same operation occurs for the “second” node as claimed, one of ordinary skill in the art would have found it obvious to have modified Selvaraj and Horowitz with the motivation of always having one primary/active node handling read/write operations, one node handling read-only operations (read-replica node), and one node being a dedicated passive standby node that handles no operations for maximum availability purposes). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Horowitz as modified and Selvaraj (hereinafter “Horowitz as modified”) with the motivation of increasing the level of fault tolerance provided3, as well as providing read balancing for load balancing purposes.4 Regarding claim 2: Horowitz as modified teaches The method of claim 1, wherein configuring the third database node as the read-replica node and the second database node as the spare node comprises: quiescing read requests directed to the second database node; and distributing read requests across the first database node and the third database node (Selvaraj, [0047], where bringing down a node for upgrade can have the effect that services provided by that node are temporarily stopped, and services provided by that node are re-deployed to another node in the cluster. 
See Horowitz, [0099] above where only the secondary nodes perform read requests. Therefore, if the secondary node only enabled read-only as a database access service, this would result in only the read-only functions being transferred to the fail-over node, with the primary node still performing both read and write operations (Horowitz, [0099] and IBR-Horowitz-290, [0078]) (i.e., “distributing read requests across the first node and the third node”)). Regarding claim 3: Horowitz as modified teaches The method of claim 1, wherein configuring the first database node as the spare node and one of the second database node and the third database node as the master node comprises: quiescing read and write requests directed to the first database node; and subsequent to quiescing the read and write requests directed to the first database node, directing write requests to the master node and distributing read requests across the second and third database nodes (Selvaraj, [0034] and [0047], where bringing down a node for upgrade can have the effect that services provided by that node are temporarily stopped, and services (e.g., read and write requests) provided by that node are re-deployed to another node in the cluster. See Horowitz, [0099] and IBR-Horowitz-290, [0078] above with regards to the primary node performing both read and write requests. 
Therefore, when applying Selvaraj’s disclosure of bringing down the node to Horowitz’s disclosed primary node, this results in the quiescing of read and write requests to Horowitz’s primary node, and directing write requests to another node in the cluster, e.g., Horowitz’s secondary node, such as seen in, e.g., Horowitz, [0137-0138], where one or more secondary systems can take over applying database writes if the primary fails (or is down, e.g., for upgrades as seen in IBR-Horowitz, [0075-0094]), and resulting in both the (new) primary (formerly a secondary node) having the capability of handling both read and write requests, and another secondary node (i.e., the claimed “third node”) handling just read requests (as disclosed by Horowitz, [0099] and IBR-Horowitz-290, [0078] in claim 1 above)). Regarding claim 6: Horowitz as modified teaches The method of claim 1, wherein the spare node is used in case of failure of one of a set of the master node and the read-replica node (Mankad, [0072], where when the primary administration database fails, one of the secondary administration databases (second instance 320 or third instance 325) may assume the role of the primary administration database, where if the second instance 320 also fails, the database server 305 may continue operation with the third instance 325). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Horowitz as modified and Mankad with the motivation of maintaining continuity of operation (Mankad, [0072]). 
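For orientation only, the three-role rotation recited in claims 1-3 (upgrade the idle spare first, then rotate each remaining node through the spare role so that reads and writes are never interrupted) can be sketched as follows. This is a minimal illustration of the claimed sequence, not code from any cited reference; the role names, node names, and the `upgrade()` placeholder are assumptions made for the sketch.

```python
def rolling_upgrade(nodes):
    """Sketch of the claim 1 sequence.

    nodes: dict mapping role -> node id, e.g.
    {"master": "n1", "read_replica": "n2", "spare": "n3"}.
    Returns the final role assignment and the upgrade order.
    """
    upgraded = []

    def upgrade(node):
        # Placeholder for installing a patch or a new software
        # version on the node (cf. claim 7).
        upgraded.append(node)

    # Step 1: upgrade the spare first; it serves no read or write
    # traffic, so no requests are disrupted.
    upgrade(nodes["spare"])

    # Step 2: the upgraded spare becomes the read-replica, the old
    # read-replica becomes the spare, and the new spare is upgraded.
    nodes["read_replica"], nodes["spare"] = nodes["spare"], nodes["read_replica"]
    upgrade(nodes["spare"])

    # Step 3: the master steps down to spare; one already-upgraded node
    # is promoted to master and the other serves as read-replica; the
    # old master (now spare) is then upgraded.
    old_master = nodes["master"]
    nodes["master"], nodes["read_replica"], nodes["spare"] = (
        nodes["read_replica"], nodes["spare"], old_master)
    upgrade(nodes["spare"])

    return nodes, upgraded
```

At every step exactly one node is idle and being upgraded while the other two continue to serve write and read traffic, which is the availability property the claims are directed to.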
Regarding claim 7: Horowitz as modified teaches The method of claim 1, wherein an upgrade of a database node comprises one or more of: installing a patch or upgrading to a new version of a software for the replicated database management system (IBR-Horowitz, [0045], where the goal may be to update database software version (i.e., “upgrading to a new version of a software for the database management system”)). Regarding claim 8: Horowitz as modified teaches The method of claim 1, wherein the replicated database management system is deployed on a cloud platform, the method further comprising: receiving a cloud platform image for a new version of a software for the replicated database management system, wherein the cloud platform image is used for upgrading any database node (Horowitz, [0244] and [0250], where the disclosed distributed database can be provisioned on cloud resources. Database monitoring services are built into the database deployment automatically for both sharded or replica set models, where cloud platform (e.g., 100) provides database as a service that is configured to be ready to be used in less than 5 minutes. See also Horowitz, [0068], where the disclosed system implements automated cloud instantiation services and provides for additional functionality, including a number of application programming interfaces configured to identify and execute updates and/or specific versions (i.e., “used for upgrading any node”) (e.g., associated with MONGODB binaries) (i.e., “a new version of a software for the replicated database management system”). Note that the use of an API for communicating with external resources and identifying and executing updates implies that such updates and/or specific versions were received (i.e., “receiving a cloud platform [version]”)). 
Although Horowitz does not appear to explicitly state that the versions are received in the form of an “image” as claimed, one of ordinary skill in the art would have recognized (software) images (in addition to, for example, binaries) as having predictably equivalent operating characteristics, namely that updates to the versions of software running on the nodes are performed. Therefore, one of ordinary skill in the art would have found it obvious to have modified Horowitz to explicitly include (software) images with the motivation of ensuring that the software application will run the same across all types of computing environments.5 Regarding claim 9: Horowitz as modified teaches The method of claim 8, wherein a database management system is stored using: an instructions storage unit storing instructions of the database management system for processing data of a database; and a data storage unit storing data of the database, wherein upgrading the database management system comprises installing new version of software for the database management system on a new instructions storage unit and providing the new instructions storage unit with access to the data storage unit (Horowitz, [0606-0610], where the computer system 2102 includes a memory 2112 and data storage element 2118, where memory 2112 stores programs (e.g., sequences of instructions coded to be executable by the processor 2110) and data during operation of the computer system 2102. The data storage element 2118 includes a data storage medium in which instructions are stored that define a program or other object that is executed by the processor 2110. 
See Horowitz, [0014-0015], where the disclosed system may automatically provision new cloud resources, and install database subsystems having optimizations, e.g., new application version, updated storage engine, additional replica set nodes, etc., where the system manages transitions between an original database resource and new cloud resource enabling the new cloud resources to operate with the database subsystem and retire the original resource from use (i.e., “installing new version of software for the database management system on a new instructions storage unit and providing the new instructions storage unit with access to the data storage unit”). See also, e.g., IBR-Horowitz, [0057], where the new database node that is instantiated to transition an existing database, can be instantiated with the same components as a mirrored node, e.g., 307. In one example, the mirrored node is instantiated to have all the same components as the original node being upgraded—in essence, a mirror of the original nodes (e.g., same data, same configurations, same architecture except where automation requires new settings)). Regarding claim 10: Claim 10 recites substantially the same claim limitations as claim 1, and is rejected for the same reasons. Note that Horowitz teaches A non-transitory computer readable storage medium for storing instructions that when executed by a computer processor cause the computer processor to perform steps comprising [the claimed steps] (Horowitz, [0606-0610], where the disclosed system may include a data storage element 2118, which may be a computer readable and non-transitory data storage medium in which instructions are stored that define a program or other object executed by the processor 2110 to implement the disclosed functions and processes). Regarding claim 11: Claim 11 recites substantially the same claim limitations as claim 2, and is rejected for the same reasons. 
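The storage arrangement at issue in claims 8-9 (DBMS instructions and database data on separate storage units, so an upgrade installs the new software on a fresh instructions unit and attaches it to the unchanged data unit) can be illustrated with a minimal, hypothetical sketch. All class and function names below are illustrative assumptions, not drawn from the cited references.

```python
class DataUnit:
    """Data storage unit holding the database records."""
    def __init__(self, records):
        self.records = records


class InstructionsUnit:
    """Instructions storage unit holding one version of the DBMS software."""
    def __init__(self, version, data_unit=None):
        self.version = version
        self.data_unit = data_unit  # data storage unit this software may access


def upgrade_dbms(current, new_version):
    """Install the new version on a new instructions unit and give it
    access to the existing data storage unit (cf. claim 9)."""
    new_unit = InstructionsUnit(new_version)
    new_unit.data_unit = current.data_unit  # same data, new software
    return new_unit
```

The point of the separation is that the data unit never moves: only the instructions side is replaced, so the upgraded software immediately sees the pre-existing records.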
Regarding claim 12: Claim 12 recites substantially the same claim limitations as claim 3, and is rejected for the same reasons. Regarding claim 15: Claim 15 recites substantially the same claim limitations as claim 8, and is rejected for the same reasons. Regarding claim 16: Claim 16 recites substantially the same claim limitations as claim 9, and is rejected for the same reasons. Regarding claim 17: Claim 17 recites substantially the same claim limitations as claim 1, and is rejected for the same reasons. Note that Horowitz teaches A computer system comprising: a computer processor; and a non-transitory computer readable storage medium for storing instructions that when executed by the computer processor cause the computer processor to perform steps comprising [the claimed steps] (Horowitz, [0606-0610], where the disclosed system may include a data storage element 2118, which may be a computer readable and non-transitory data storage medium in which instructions are stored that define a program or other object executed by the processor 2110 to implement the disclosed functions and processes). Regarding claim 18: Claim 18 recites substantially the same claim limitations as claim 2, and is rejected for the same reasons. Regarding claim 19: Claim 19 recites substantially the same claim limitations as claim 3, and is rejected for the same reasons. Claims 4, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Horowitz et al. (“Horowitz”) (US 2017/0286516 A1, incorporating by reference Horowitz et al. (“IBR-Horowitz”) (Ser. No. 14/969,537, published as US 2017/0169059 A1) and Horowitz et al. (“IBR-Horowitz-290”) (Ser. No. 15/605,141, published as US 2017/0344290 A1) at [0001])), in view of Selvaraj et al. (“Selvaraj”) (US 2014/0376362 A1), in further view of Mankad et al. (“Mankad”) (US 2022/0207053 A1), in further view of Tock et al. (“Tock”) (US 2019/0222640 A1). 
Regarding claim 4: Horowitz as modified teaches The method of claim 1, but does not appear to explicitly teach wherein the read and write requests are received from a set of application servers, wherein configuring the third database node as the read-replica node and the second database node as the spare node comprises quiescing requests directed to the second database node by: sending a request to each of the application servers to quiesce requests; and waiting for each of the set of application servers to send an acknowledgement message indicating a completion of quiescing of requests directed to the second database node. Tock teaches wherein the read and write requests are received from a set of application servers, wherein configuring the third database node as the read-replica node and the second database node as the spare node comprises quiescing requests directed to the second database node by: sending a request to each of the application servers to quiesce requests; and waiting for each of the set of application servers to send an acknowledgement message indicating a completion of quiescing of requests directed to the second database node (Tock, [0040-0044], where a target server sends a SWITCH command which indicates the source server (s1) and the target server (s2), which is transmitted to all other servers of the cluster. Upon receiving the SWITCH command, each server sends an ACK message to both the source server (s1) and the target server (s2), indicating that the acknowledging server has acknowledged the switch. The target server thus receives an ACK message from each of the servers in the cluster. 
See Horowitz and Selvaraj in claim 1 above with regards to the nodes being “database nodes”, the “read and write requests”, “configuring the third node as the read-replica node and the second node as the spare node”, and claims 2-3 above with regards to the “quiescing” of the “second node” as claimed). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Horowitz as modified and Tock with the motivation of ensuring that requests are routed correctly after the migration (see, e.g., Tock, [0052]), e.g., that requests are not routed to a node that is no longer servicing those types of requests, which would cause delays or even a failure to process those requests, thereby providing seamless availability of the system to service requests. Regarding claim 13: Claim 13 recites substantially the same claim limitations as claim 4, and is rejected for the same reasons. Regarding claim 20: Claim 20 recites substantially the same claim limitations as claim 4, and is rejected for the same reasons. Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Horowitz et al. (“Horowitz”) (US 2017/0286516 A1, incorporating by reference Horowitz et al. (“IBR-Horowitz”) (Ser. No. 14/969,537, published as US 2017/0169059 A1) and Horowitz et al. (“IBR-Horowitz-290”) (Ser. No. 15/605,141, published as US 2017/0344290 A1) at [0001])), in view of Selvaraj et al. (“Selvaraj”) (US 2014/0376362 A1), in further view of Mankad et al. (“Mankad”) (US 2022/0207053 A1), in further view of Shutt et al. (“Shutt”) (US 2013/0124916 A1). Regarding claim 5: Horowitz as modified teaches The method of claim 1, but does not appear to explicitly teach wherein database nodes of the replicated database management system are distributed across three data centers that are situated in different physical locations. 
Shutt teaches wherein database nodes of the replicated database management system are distributed across three data centers that are situated in different physical locations (Shutt, [Claims 1 and 7-10] and [0031-0034], where the claimed system has nodes distributed across at least three data centers (though M data centers may occur). See Horowitz, [0297-0308] and [0588], where different clusters may be situated in different regions, the region corresponding to the physical location of the cluster). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Horowitz as modified and Shutt with the motivation of maintaining adequate redundancy to provide disaster recovery while still providing high data availability rates (Shutt, [0034]), e.g., so that if some unforeseen disaster strikes the primary site, a secondary site will most likely not be affected and should be able to start running so that there is no business disruption.6

Regarding claim 14: Claim 14 recites substantially the same claim limitations as claim 5, and is rejected for the same reasons.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. See the enclosed 892 form. Chen et al. (US 2020/0097556 A1), Kassouf et al. (US 2021/0319115 A1), Ferguson (US 2014/0250326 A1) and Lawlor et al. (US 6,038,677 A) are cited to show that the meaning of “standby” does not appear to be consistent in the prior art, thus lending confusion as to the precise meaning of “standby” within the claimed invention (see the 112(b), indefiniteness rejection for more detail). Lawlor et al. is also cited to show why one of ordinary skill in the art would have found it obvious to have included dedicated standby nodes (see the 103 rejection). Applicant should consider the prior art made of record when amending the claims to define over the art of record. 
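As a technical aside, the acknowledgement-based quiesce step recited in claim 4 (and paralleled by Tock's SWITCH/ACK exchange) can be sketched briefly: every application server is asked to stop directing requests at the node about to become the spare, and reconfiguration proceeds only once all acknowledgements have arrived. The class and method names below are illustrative assumptions, not Tock's or Applicant's actual implementation.

```python
class AppServer:
    """Application server that routes read/write requests to database nodes."""
    def __init__(self, name):
        self.name = name
        self.quiesced = set()  # nodes this server no longer routes requests to

    def handle_quiesce(self, target_node):
        # Stop routing new requests to target_node, drain in-flight work,
        # then return an acknowledgement message.
        self.quiesced.add(target_node)
        return ("ACK", self.name, target_node)


def quiesce_node(app_servers, target_node):
    """Send a quiesce request to each application server and wait for
    every acknowledgement before the node may be made the spare."""
    acks = [server.handle_quiesce(target_node) for server in app_servers]
    return all(ack == ("ACK", server.name, target_node)
               for ack, server in zip(acks, app_servers))
```

Only after this returns true can the target node safely be reconfigured and upgraded, since no application server will still be sending it traffic.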
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IRENE BAKER whose telephone number is (408) 918-7601. The examiner can normally be reached M-F, 8AM-5PM PT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, NEVEEN ABEL-JALIL, can be reached at (571) 270-0474. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IRENE BAKER/
Primary Examiner, Art Unit 2152
2 January 2026

1. Note that the language of “standby” does not appear to be utilized consistently within the technological area, as such nodes sometimes function more akin to “secondary” nodes that are allowed to process read requests (see, e.g., (1) Chen et al. (US 2020/0097556 A1), which shares the same assignee as the present application, in which standby nodes may perform read requests (Chen et al. at [0030]); and (2) Kassouf et al. (US 2021/0319115 A1) at [0031], in which a standby node may service read requests). However, other references indicate that “standby” means that neither read nor write processes are performed. See, e.g., (1) Ferguson (US 2014/0250326 A1) at [0038] (where if a database system is in a standby state, no reads are sent to the database system in Standby); and (2) Lawlor et al. (US 6,038,677 A) at [4:6-9] (where in a “standby” scheme, the takeover or redundant node is not performing other work, but rather is dedicated to being ready to perform the job of the primary node).
2. Lawlor et al., US 6,038,677 A at [4:6-18].
3. Torbjornsen et al., US 5,555,404 A at [5:12-20] (“…increasing the number of table replicas increases the level of fault tolerance provided”).
4. Mankad et al., US 2022/0207053 A1 at [0075].
5. Kuang et al., US 9,811,806 B1 at [Background] (“Software applications may be packaged in a software image, which can be deployed as a container in a computing environment. The software image includes the software application and a filesystem that includes any [of] the components needed to run the software application on a server in a given computing environment. Doing so ensures that the software application will run the same across all types of computing environments…”).
6. Chen et al., US 2018/0239677 A1 at [0026].

Prosecution Timeline

May 28, 2021
Application Filed
Oct 21, 2022
Non-Final Rejection — §103, §112
Mar 25, 2023
Interview Requested
Apr 03, 2023
Response Filed
Aug 08, 2023
Final Rejection — §103, §112
Nov 21, 2023
Examiner Interview Summary
Nov 21, 2023
Applicant Interview (Telephonic)
Nov 22, 2023
Request for Continued Examination
Dec 01, 2023
Response after Non-Final Action
Jul 16, 2025
Non-Final Rejection — §103, §112
Oct 20, 2025
Response Filed
Oct 24, 2025
Applicant Interview (Telephonic)
Oct 24, 2025
Examiner Interview Summary
Jan 02, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602368
ANOMALY DETECTION DATA WORKFLOW FOR TIME SERIES DATA
2y 5m to grant Granted Apr 14, 2026
Patent 12591890
CONCURRENT STATE MACHINE PROCESSING USING A BLOCKCHAIN
2y 5m to grant Granted Mar 31, 2026
Patent 12566880
SEAMLESS UPDATING AND RECONCILIATION OF DATABASE IDENTIFIERS GENERATED BY DIFFERENT AGENT VERSIONS
2y 5m to grant Granted Mar 03, 2026
Patent 12566790
LAKEHOUSE METADATA CHANGE DETERMINATION METHOD, DEVICE, AND MEDIUM
2y 5m to grant Granted Mar 03, 2026
Patent 12536138
FILE SYSTEM REDIRECTOR SERVICE IN A SCALE OUT DATA PROTECTION APPLIANCE
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
54%
Grant Probability
81%
With Interview (+26.7%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 238 resolved cases by this examiner. Grant probability derived from career allow rate.
