Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Detailed Action
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/29/25 has been entered.
In amendments dated 11/26/25, Applicant amended no claims, canceled no claims, and added claims 21-22. Claims 1, 3-8, 10-15, and 17-22 are presented for examination.
Rejections under 35 U.S.C. 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3-8, 10-15, 17-20, and 22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to mental processes without significantly more. Independent claims 1, 8, and 15 each recites an embedded configuration process configured to manage metadata associated with each database partition of the plurality of database partitions. Examiner notes an embedded configuration process is merely a software process, and managing metadata is recited broadly and is a mental process accomplishable in the human mind or on paper. Each of these claims recites additional elements of a server process configured to host a respective database partition of the plurality of partitions; a server process is merely software, and hosting a database partition is storing the partition, which is insignificant extra-solution activity. Each also recites an embedded router process configured to route database requests to each database partition of the plurality of database partitions based on the metadata; an embedded router process is likewise merely software, and routing database requests is also insignificant extra-solution activity. Claim 1 also recites a plurality of database partitions, at least one cloud-based resource with processor and memory, a database subsystem, and a plurality of shard servers, and claim 8 recites a non-transitory computer-readable medium, each of which is a generic component of a computer system. Examiner notes specification paragraph 0004 discusses one or more database nodes embedded with shard servers having hosting, routing, and metadata-management functions as allowing for improved sharding functionality and enhanced scalability. Paragraphs 0033-0039 discuss details such as providing sharding as a default on these servers, and automatic sharding, resharding, and sharding-recommendation functions that realize the sharding functionality and scalability improvements mentioned in paragraph 0004.
However, the claim steps do not recite any of these improvements to any technology or to any function of a computer per MPEP 2106.04(d) and do not recite any unconventional steps in the invention per MPEP 2106.05(a). Therefore, the recited mental processes are not integrated into a practical application. Taking the claims as a whole, hosting a partition of data is storing data, which is routine and conventional per the list of routine and conventional activities in MPEP 2106.05(d) part II, and routing requests to a partition is recited broadly and amounts to sending data across a network per paragraphs 0056, 0059, 0065, and figure 1, which is also routine and conventional per the aforementioned list. The software processes, plurality of database partitions, at least one cloud-based resource with processor and memory, database subsystem, plurality of shard servers, and non-transitory computer-readable medium are each still generic components of a computer system. Therefore, these claims do not include additional elements that are sufficient to amount to significantly more than the cited mental process.
Claims 3, 10, and 17 each recites wherein each shard server of the plurality of shard servers is configured to run on a same hardware profile for all instance sizes. (running a server is a mental process accomplishable in the human mind or on paper). Claims 4, 11, and 18 each recites split a first database partition of the plurality of database partitions into the first database partition and a second database partition of the plurality of database partitions (splitting a partition is allocating storage, which is routine and conventional activity per the list of routine and conventional activities listed in MPEP 2106.05(d) part II); and wherein the plurality of shard servers further comprises: a first shard server configured to host the first database partition; and a second shard server configured to host the second database partition (hosting a database partition is storing it and is routine and conventional activity per the list of routine and conventional activities listed in MPEP 2106.05(d) part II). Claims 5, 12, and 19 each recites wherein the database subsystem is configured to split the first database partition into the first database partition and the second database partition automatically without user input (splitting data is a mental process accomplishable in the human mind or on paper).
Claims 6, 13, and 20 each recites comprising, at a first time, the plurality of shard servers; and comprising, at a second time, a single shard server (comprising a server or servers is storing those servers which is routine and conventional activity per the list of routine and conventional activities listed in MPEP 2106.05(d) part II). Claims 7 and 14 each recites migrate a database from a replica set topology to a sharded topology, the sharded topology including the plurality of database partitions hosted by the plurality of shard servers (migrating data is a mental process accomplishable in the human mind or on paper).
Claim 22 recites the plurality of shard servers comprises a number of shard servers each running on a same hardware profile (the shard servers are software entities and generic components of a computer system); and the database subsystem is configured to, based on a measured usage of the database subsystem, scale the number of shard servers each running on the same hardware profile (scaling shard servers is creating or deleting servers running on a hardware profile and is recited broadly and a mental process accomplishable in the human mind or on paper).
Rejections under 35 U.S.C. 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-5, 8, 11-12, 15, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Graefe et al. (US 11,567,969), hereafter Graefe, in view of Merriman et al. (US 10,846,305), hereafter Merriman.
With respect to claims 1, 8, and 15, Graefe teaches:
at least one cloud-based resource, the at least one cloud-based resource including processor and memory (columns 8-9 lines 56-9 method performed by a cloud platform, also column 9 lines 42-55 figure 7);
a database subsystem executing on the at least one cloud-based resource (columns 8-9 lines 56-9 database on a cloud platform, also column 9 lines 42-55 cloud platform with a processor), wherein the database subsystem comprises:
a plurality of shard servers (column 5 lines 35-54 plurality of shard servers), wherein each shard server of the plurality of shard servers is configured to execute:
a server process configured to host a respective database partition of the plurality of partitions (column 5 lines 35-54 server nodes host database partitions, example figure 2 node 210 hosting partitions 220, 221); and
an embedded configuration process configured to manage metadata associated with each database partition of the plurality of database partitions (column 5 lines 35-54 figure 2, managing metadata for example shard ID 230, primary key fields).
Graefe does not teach an embedded router process configured to route database requests to each database partition of the plurality of database partitions based on the metadata. Merriman teaches this with router processes run on each shard server that route database requests to the appropriate shard server (column 50 lines 26-31 describing figure 13, router processes can be executed on each shard server such as shard servers 1302-1308, also column 49 lines 37-51). It would have been obvious to have combined the routing functionality on a shard server in Merriman with the shard server architecture and functionality in Graefe to provide more efficient responses from the individual shard servers to client requests.
With respect to claim 8, Graefe teaches a non-transitory computer-readable medium (columns 10-11 lines 61-15).
With respect to claims 4, 11, and 18, all the limitations in claims 1, 8, and 15 are addressed by Graefe and Merriman above. Graefe also teaches:
split a first database partition of the plurality of database partitions into the first database partition and a second database partition of the plurality of database partitions (column 4 lines 34-59 splitting partitions into multiple partitions on the same server/node),
wherein the plurality of shard servers further comprises: a first shard server configured to host the first database partition; and a second shard server configured to host the second database partition (column 7 lines 25-45 different partitions hosted by different shards, example in figure 4B columns 7-8 lines 60-10).
With respect to claims 5, 12, and 19, all the limitations in claims 1, 4, 8, 11, 15, and 18 are addressed by Graefe and Merriman above. Graefe also teaches:
wherein the database subsystem is configured to split the first database partition into the first database partition and the second database partition automatically without user input (column 7 lines 25-45 partitions split without user input).
Claims 6, 13, and 20-22 are rejected under 35 U.S.C. 103 as being unpatentable over Graefe and Merriman in further view of Hurwitz et al (US 11,907,752), hereafter Hurwitz.
With respect to claims 6, 13, and 20, all the limitations in claims 1, 8, and 15 are addressed by Graefe and Merriman above. The combination of Graefe and Merriman does not teach wherein the database subsystem is configured to: comprise, at a first time, the plurality of shard servers; and comprise, at a second time, a single shard server. Hurwitz teaches this in describing how new shards are created per the workload of the database (column 5 lines 37-41). Thus, in the combination of Graefe, Merriman, and Hurwitz, at a first time the database system comprises a plurality of shard servers and, at a second time, the database system may comprise a single shard server (column 5 lines 37-41, for example if one shard server hosted all regions of data). It would have been obvious to have combined this functionality of dynamically creating shard servers at different times in Hurwitz with the sharding functionality in Graefe and Merriman to provide more efficient handling of data and requests for data from clients as the number of requests changes over time.
With respect to claim 21, all the limitations in claim 1 are addressed by Graefe and Merriman above. Graefe teaches:
the plurality of shard servers comprises a first number of shard servers (column 5 lines 35-54 a plurality/first number of shard servers);
wherein the database subsystem is configured to: operate the first number of shard servers at a first time (column 5 lines 35-54 a plurality/first number of shard servers operating at a first time);
wherein each additional shard server of the one or more additional shard servers is configured to execute: an additional server process configured to host an additional respective database partition of the plurality of database partitions (column 5 lines 35-54 server nodes host database partitions, example figure 2 node 210 hosting partitions 220, 221); and
an additional embedded configuration process configured to manage metadata associated with each database partition of the plurality of database partitions (column 5 lines 35-54 figure 2, managing metadata for example shard ID 230, primary key fields).
Merriman teaches wherein each additional shard server of the one or more additional shard servers is configured to execute: an additional embedded router process configured to route database requests to each database partition of the plurality of database partitions based on the metadata (column 50 lines 26-31 describing figure 13, router processes can be executed on each shard server such as shard servers 1302-1308, also column 49 lines 37-51).
The combination of Graefe and Merriman does not teach scale, in response to determining that a measured usage of the first number of shard servers exceeds a threshold usage, capacity of the database subsystem by operating a second number of shard servers at a second time, the second number of shard servers comprising the first number of shard servers and one or more additional shard servers. Hurwitz teaches this in describing monitoring a workload on shards (column 10 lines 29-58 figure 8 step 810 tracking activity levels on shards at each instance) and, if the workload is overloaded (column 10 lines 29-58 figure 8 step 820 “overloaded” indicative of a threshold point for the workload), splitting the shard into additional shards (resharding). It would have been obvious to have combined this function of scaling the number of shards based on measuring usage past a threshold in Hurwitz with the sharding functionality in Graefe and Merriman to provide dynamic load balancing and a recourse for failover.
With respect to claim 22, all the limitations in claim 1 are addressed by Graefe and Merriman above. Graefe also teaches the plurality of shard servers comprises a number of shard servers each running on a same hardware profile (column 5 lines 35-54 a plurality/first number of shard servers on a hardware profile). The combination of Graefe and Merriman does not teach the database subsystem is configured to, based on a measured usage of the database subsystem, scale the number of shard servers each running on the same hardware profile. Hurwitz teaches this in monitoring the instances of key ranges (shards) for activity and scaling the shards if the workload is overloaded (column 10 lines 29-58). It would have been obvious to have combined this function of scaling the number of shards based on measuring usage in Hurwitz with the sharding functionality in Graefe and Merriman to provide dynamic load balancing and a recourse for failover.
Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Graefe and Merriman in further view of Sankar et al. (US 20190362015), hereafter Sankar.
With respect to claims 7 and 14, all the limitations in claims 1 and 8 are addressed by Graefe and Merriman above. The combination of Graefe and Merriman does not teach migrate a database from a replica set topology to a sharded topology, the sharded topology including the first database partition hosted by the first shard server. Sankar teaches this in moving message data from shard databases to a de-shard database for replication (paragraphs 0027-0028 figure 3, shard databases 306a, 306b, de-shard database 316). It would have been obvious to have combined this migrating functionality between topologies in Sankar with the database partition techniques in Graefe and Merriman to make the data in Graefe and Merriman more available for users in case of a failure.
Responses to Applicant’s Remarks
Regarding rejection of claims 1, 3-8, 10-15, and 17-20 under 35 U.S.C. 101 for reciting a mental process without significantly more, Applicant’s arguments have been considered and are persuasive in part and not persuasive in part. On page 14 of the Remarks Applicant cites the claims’ two additional elements, “a server process configured to host a respective database partition of the plurality of database partitions” and “an embedded router process configured to route database requests to each database partition of the plurality of database partitions based on the metadata,” but Examiner did not see an argument made here. On pages 14-21 of the Remarks Applicant discusses Step 2A Prong Two, first noting in section i. that the specification describes improvements to the function of a computer in paragraphs 0004 and 0032-0039, and Examiner agrees. In section ii. Applicant asserts “the case law also makes clear that the claims need not explicitly recite the improvement to the functioning of the computer.” Examiner disagrees and notes MPEP 2106.05(a) states “after the examiner has consulted the specification and determined that the disclosed invention improves technology, the claim must be evaluated to ensure the claim itself reflects the disclosed improvement in technology.” Applicant points out, and Examiner agrees, that the claims in Enfish, LLC v. Microsoft Corp. recited the components or steps of the improvement, as did the claims in McRO, Inc. v. Bandai Namco Games Am. Inc., Visual Memory LLC v. NVIDIA Corp., and Ex parte Desjardins. In section iii. Applicant asserts the claims reflect improvements in the functioning of a computer, and Examiner disagrees, as the claims recite broad statements without inventive details showing any such improvement as described in specification paragraphs 0004 and 0032-0039.
Examiner notes some of these improvements are recited in dependent claim 21 and that, if the subject matter in claim 21 were moved into claims 1, 8, and 15, then those independent claims would recite eligible subject matter.
In section iv. Applicant asserts the two additional elements recited above are not extra-solution activity. Examiner disagrees and notes, per the first two criteria suggested in MPEP 2106.05(g), that these limitations are well-known activities on a generic database system and that each limitation is broad and not significant, i.e., they do not impose meaningful limits on the claim. Neither limitation amounts to data gathering or outputting, so the third criterion is not applicable here. These additional elements therefore do not provide a practical application. On page 21 of the Remarks Applicant discusses Step 2B of the eligibility analysis and asserts the additional elements are non-conventional and non-generic. Examiner disagrees, as both limitations are recited broadly and do not recite any inventive details that may show an improvement to the functioning of a computer. Examiner notes again that claim 21 recites subject matter that would make independent claims 1, 8, and 15 eligible, and that claim 22 is recited too broadly and does not recite such inventive details.
Regarding rejections of claims 1, 4-5, 8, 11-12, 15, and 18-19 under 35 U.S.C. 102(a)(1) by Graefe, Applicant’s arguments on pages 10-11 overcome Graefe’s teachings, in particular that Graefe does not teach routing database requests to each database partition of the plurality of database partitions. Examiner conducted another search of the prior art and found Merriman and Hurwitz, which Examiner believes teach these claims as well as claims 6, 13, and 20-22, as shown in the rejections under 35 U.S.C. 103 above.
Inquiry
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRUCE M MOSER whose telephone number is (571)270-1718. The examiner can normally be reached M-F 9a-5p.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Boris Gorney, can be reached at (571) 270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRUCE M MOSER/Primary Examiner, Art Unit 2154 3/5/26