Prosecution Insights
Last updated: April 19, 2026
Application No. 18/497,806

CAPACITY CLUSTER RESOURCE RESERVATIONS IN A CLOUD PROVIDER NETWORK

Non-Final OA: §102, §103
Filed
Oct 30, 2023
Examiner
KHONG, ALEXANDER
Art Unit
2168
Tech Center
2100 — Computer Architecture & Software
Assignee
Amazon Technologies, Inc.
OA Round
1 (Non-Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (543 granted / 646 resolved; +29.1% vs TC avg; above average)
Interview Lift: +27.9% (strong; among resolved cases with interview)
Typical Timeline: 2y 7m avg prosecution; 15 applications currently pending
Career History: 661 total applications across all art units

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 17.1% (-22.9% vs TC avg)
§112: 9.3% (-30.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 646 resolved cases.

Office Action

Rejections under §102 and §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Non-Final Office Action is in response to U.S. Application No. 18/497,806 filed on 10/30/2023. Claims 1-20 are pending. Claims 1, 4, and 15 are independent claims.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 10/25/2024, 02/19/2025, 03/05/2025, 07/02/2025, and 01/14/2026 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 4, 8-15, and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kinney, Jr. et al. (U.S. Patent No. 10,877,796 B1, hereinafter “Kinney”).
Regarding claim 4, Kinney teaches a computer-implemented method comprising: generating, by a managed compute service of a cloud provider network, a schedule including a plurality of blocks of compute capacity hosted by the managed compute service that are available to be reserved by users of the managed compute service (Kinney Col 22, Ln 14-19, “The user may first discover compute instances that are available for scheduling and reservation within a desired window of time and may then enter into an agreement with the provider network 190 to purchase a particular quantity of those compute instances for that window of time”), wherein each of the blocks corresponds to compute capacity for a number of compute instances for a window of time (Kinney Col 22, Ln 19-31, “The scheduled reserved compute instances may be of one or more particular instance types having particular processor resources, memory resources, storage resources, network resources, and so on. The window of time may be a one-time window (e.g., 5 PM to 10 PM on a particular day) or a recurring window (e.g., 5 PM to 10 PM on weekdays for one year). By entering into the agreement, the user may be guaranteed to have exclusive access (relative to other clients of the provider network) to the scheduled reserved compute instances for the window of time. The agreement may result in a reservation identifier that can be used to reference the set of scheduled reserved compute instances”); receiving, at the cloud provider network, a request originated on behalf of a user to find a capacity block, the request identifying a desired number of compute instances and an availability duration for the desired number of compute instances (Kinney Col 22 Ln 4-31, and Fig. 5-6, i.e., a desired number of compute instances and window of time are identified and requested); identifying, by the managed compute service based on use of the schedule, at least a first block of the plurality of blocks as providing compute capacity for the desired number of compute instances for the desired amount of time (Kinney Col 22, Ln 29-31, i.e., “The agreement may result in a reservation identifier that can be used to reference the set of scheduled reserved compute instances”); transmitting, by the managed compute service, a response to the request that identifies a first capacity block associated with at least the first block (Kinney Col 22, Ln 14-19, i.e., “The user may first discover compute instances that are available for scheduling and reservation within a desired window of time and may then enter into an agreement with the provider network 190 to purchase a particular quantity of those compute instances for that window of time”); receiving a request to obtain the first capacity block for the user (Kinney Col 22, Ln 32-43); and after a beginning of the window of time corresponding to the first capacity block, launching one or more compute instances on behalf of the user (Kinney Col 21, Ln 26-35, i.e., provisioning the scheduled reserved compute instances).

As to claim 8, Kinney also teaches the computer-implemented method of claim 4, wherein: the first block has a different number of compute instances than a number of compute instances of a second block (Kinney Col 12 Ln 51-59); or the first block has a different size window of time than the window of time of the second block (Kinney Col 12 Ln 66 to Col 13 Ln 3).
As to claim 9, Kinney also teaches the computer-implemented method of claim 4, wherein the launching of the one or more compute instances includes: receiving a request to launch the one or more compute instances, the request including an identifier of the first capacity block (Kinney Col 23 Ln 9-13, i.e., auto-launch of the scheduled reserved compute instances, and Col 29 Ln 36-38, i.e., a reservation ID is used); and selecting one or more slots to launch the one or more compute instances based on the first capacity block (Kinney Col 23 Ln 13-18).

As to claim 10, Kinney also teaches the computer-implemented method of claim 4, wherein generating the schedule comprises: generating demand forecasts for a plurality of block types, each block type corresponding to a different combination of compute instance count and availability duration (Kinney Col 22 Ln 14-22); and placing the plurality of blocks on the schedule based at least in part on use of the demand forecasts (Kinney Col 22 Ln 22-32).

As to claim 11, Kinney also teaches the computer-implemented method of claim 4, further comprising: determining to add a new block to the schedule (Kinney Col 13 Ln 3-8, i.e., “Compute instances may be provisioned and/or added to the managed compute environment 195A automatically (e.g., without direct input from a user) and programmatically (e.g., by execution of program instructions) by the compute environment management system 100”); and replacing a second block on the schedule and a third block on the schedule with the new block, or replacing the second block with the new block and a second new block (Kinney Col 12 Ln 66 to Col 13 Ln 3).
As to claim 12, Kinney also teaches the computer-implemented method of claim 4, wherein: the request to find the capacity block identifies an earliest start date and the identified block has a corresponding window of time that starts on or after the earliest start date (Kinney Col 22, Ln 14-25, i.e., “The user may first discover compute instances that are available for scheduling and reservation within a desired window of time and may then enter into an agreement with the provider network 190 to purchase a particular quantity of those compute instances for that window of time. The scheduled reserved compute instances may be of one or more particular instance types having particular processor resources, memory resources, storage resources, network resources, and so on. The window of time may be a one-time window (e.g., 5 PM to 10 PM on a particular day) or a recurring window (e.g., 5 PM to 10 PM on weekdays for one year)”); or the request to find the capacity block identifies a latest end date and the identified block has a corresponding window of time that ends on or before the latest end date (Kinney Col 22, Ln 14-25).

As to claim 13, Kinney also teaches the computer-implemented method of claim 4, wherein the request to find the capacity block identifies at least one of: a type of compute instance (Kinney Col 22 Ln 19-22); a type of operating system; or a region of the cloud provider network that is to host the one or more compute instances.

As to claim 14, Kinney also teaches the computer-implemented method of claim 4, further comprising: emitting a first event indicative of a start of the first capacity block, wherein the event causes a request to be originated seeking the launching of the one or more compute instances (Kinney Fig. 11B, i.e., manual or automatic launching of scheduled reserved instances); or emitting a second event indicative of an end or an upcoming end of the first capacity block, wherein the event causes a request to be originated seeking the termination of the one or more compute instances (Kinney Col 12 Ln 24-27).

Regarding claim 15, Kinney also teaches a system comprising: a first one or more computing devices to host compute instances for users of a managed compute service in a multi-tenant cloud provider network (Kinney Col 6 Ln 56 to Col 7 Ln 8); and a second one or more computing devices to implement a control plane for the managed compute service in the multi-tenant cloud provider network (Kinney Col 7 Ln 8-14), the control plane including instructions that upon execution cause the control plane to perform the same method as recited in claim 4. Claim 15 is similarly rejected.

Claim 19 recites limitations substantially similar to those of claim 8 and is similarly rejected. Claim 20 recites limitations substantially similar to those of claim 10 and is similarly rejected.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-7, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Kinney in view of Kurtzer et al. (U.S. Patent No. 10,970,113 B1, hereinafter “Kurtzer”).
Regarding claim 1, Kinney teaches a computer-implemented method comprising: generating, by a managed compute service of a cloud provider network, a schedule including a plurality of blocks of compute capacity hosted by the managed compute service that are available to be reserved by users of the managed compute service (Kinney Col 22, Ln 14-19, “The user may first discover compute instances that are available for scheduling and reservation within a desired window of time and may then enter into an agreement with the provider network 190 to purchase a particular quantity of those compute instances for that window of time”), wherein each of the blocks corresponds to compute capacity for a number of compute instances; receiving, at the cloud provider network, a request originated on behalf of a user to find a capacity block, the request identifying a desired number of compute instances and an availability duration for the desired number of compute instances (Kinney Col 22 Ln 4-31, and Fig. 5-6, i.e., a desired number of compute instances and window of time are identified and requested); identifying, by the managed compute service based on use of the schedule, at least a first block of the plurality of blocks as providing compute capacity for the desired number of compute instances for the desired amount of time (Kinney Col 22, Ln 29-31, i.e., “The agreement may result in a reservation identifier that can be used to reference the set of scheduled reserved compute instances”); transmitting, by the managed compute service, a response to the request that identifies at least the first block, the response including an offering identifier associated with the first block (Kinney Col 22, Ln 14-19, i.e., “The user may first discover compute instances that are available for scheduling and reservation within a desired window of time and may then enter into an agreement with the provider network 190 to purchase a particular quantity of those compute instances for that window of time”); receiving a request to obtain the first block as a first capacity block for the user, the request including the offering identifier (Kinney Col 22, Ln 32-43); at a beginning of the window of time corresponding to the first capacity block, updating a data store to change an ownership of the compute capacity for the first block to be associated with an account of the user (Kinney Col 21, Ln 35-38); and launching one or more compute instances on behalf of the user using the compute capacity (Kinney Col 21, Ln 26-35, i.e., provisioning the scheduled reserved compute instances).

Kinney fails to explicitly teach a number of compute instances of a type providing access to graphics processing unit (GPU) processing resources. However, in the same field of endeavor, Kurtzer teaches a number of compute instances of a type providing access to graphics processing unit (GPU) processing resources (Kurtzer Col 3, Ln 33-47).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kinney by incorporating the teachings of Kurtzer. The motivation would be to provide specialized hardware that is optimized for different jobs (Kurtzer Col 3, Ln 47-58).

As to claim 2, Kinney as modified by Kurtzer also teaches the computer-implemented method of claim 1, wherein the compute capacity for each of the plurality of blocks is selected to be located in a portion of the cloud provider network to ensure a latency characteristic for communications between the compute instances in the block is satisfied (Kurtzer Col 11, Ln 42-59). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kinney by incorporating the teachings of Kurtzer. The motivation would be to generate predictive models based on performance, cost, and/or other benchmarks that are produced from a previous execution of the same job or a related job on the same or similar resources of compute nodes being modeled or scored (Kurtzer Col 3, Ln 37-42).

As to claim 3, Kinney as modified by Kurtzer also teaches the computer-implemented method of claim 2, wherein the launching of the one or more compute instances includes utilizing a placement rule that constrains slot selection, for the one or more compute instances, to be within the portion of the network (Kurtzer Col 11 Ln 60 to Col 12 Ln 5, i.e., one or more compute instances (nodes) are selected using the predictive models, and Col 13, Ln 19-25, i.e., network is one of the criteria for the selection).

As to claim 5, Kinney teaches the computer-implemented method of claim 4, but fails to explicitly teach wherein at least one of the blocks involves multiple compute instances of a type providing access to graphics processing unit (GPU) processing resources. However, in the same field of endeavor, Kurtzer teaches at least one of the blocks involving multiple compute instances of a type providing access to graphics processing unit (GPU) processing resources. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kinney by incorporating the teachings of Kurtzer. The motivation would be to provide specialized hardware that is optimized for different jobs (Kurtzer Col 3, Ln 47-58).

As to claim 6, Kinney as modified by Kurtzer also teaches the computer-implemented method of claim 5, wherein the compute capacity for each of the plurality of blocks is selected to be located in a portion of the cloud provider network to ensure a latency characteristic for communications between the compute instances in the block is satisfied (Kurtzer Col 11, Ln 42-59). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kinney by incorporating the teachings of Kurtzer. The motivation would be to generate predictive models based on performance, cost, and/or other benchmarks that are produced from a previous execution of the same job or a related job on the same or similar resources of compute nodes being modeled or scored (Kurtzer Col 3, Ln 37-42).

As to claim 7, Kinney as modified by Kurtzer also teaches the computer-implemented method of claim 6, wherein the launching of the one or more compute instances includes utilizing a placement rule that constrains slot selection, for the one or more compute instances, to be within the portion of the network (Kurtzer Col 11 Ln 60 to Col 12 Ln 5, i.e., one or more compute instances (nodes) are selected using the predictive models, and Col 13, Ln 19-25, i.e., network is one of the criteria for the selection).

Claim 16 recites limitations substantially similar to those of claim 5 and is similarly rejected.
Claim 17 recites limitations substantially similar to those of claim 6 and is similarly rejected. Claim 18 recites limitations substantially similar to those of claim 7 and is similarly rejected.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See Form PTO-892.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER KHONG, whose telephone number is (571) 270-7127. The examiner can normally be reached Mon-Fri 8am-5pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Charles Rones, can be reached at (571) 272-4085. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/ALEXANDER KHONG/
Primary Examiner, Art Unit 2168
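For readers mapping the rejection to the claim language, the capacity-block search recited in claim 4 (find a block on the schedule matching a desired instance count and availability duration, with claim 12's optional earliest-start constraint) can be sketched as a simple filter over scheduled blocks. All names below are hypothetical illustrations of the claimed steps, not Amazon's, Kinney's, or any actual service's implementation:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CapacityBlock:
    block_id: str
    instance_count: int   # number of compute instances in the block
    start: datetime       # beginning of the block's window of time
    duration: timedelta   # size of the window

def find_capacity_block(schedule, desired_instances, desired_duration,
                        earliest_start=None):
    """Return the first scheduled block that satisfies the request, else None."""
    for block in schedule:
        if block.instance_count < desired_instances:
            continue  # not enough compute capacity
        if block.duration < desired_duration:
            continue  # window too short for the availability duration
        if earliest_start is not None and block.start < earliest_start:
            continue  # claim 12: window must start on or after earliest start date
        return block
    return None

# Hypothetical schedule with two blocks of differing sizes (cf. claim 8)
schedule = [
    CapacityBlock("blk-1", 8, datetime(2026, 5, 1, 17), timedelta(hours=5)),
    CapacityBlock("blk-2", 64, datetime(2026, 5, 2, 17), timedelta(hours=5)),
]
match = find_capacity_block(schedule, desired_instances=32,
                            desired_duration=timedelta(hours=4))
# blk-1 is too small, so blk-2 is identified and returned to the requester
```

A real scheduler would also handle the reservation step (exchanging the returned block's identifier for an ownership record) and event emission at window start and end (claim 14); the sketch covers only the matching step the §102 rejection maps to Kinney's discovery/agreement flow.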

Prosecution Timeline

Oct 30, 2023: Application Filed
Feb 10, 2026: Non-Final Rejection under §102 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591592: METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR REPLICATING DATA (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585546: DATA LINEAGE BASED MULTI-DATA STORE RECOVERY (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579170: AUTOMATIC ORGANIZATION OF USER ACTIVITY INTO COLLECTIONS BASED ON TOPICS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579165: Restricted Blockchain Cluster (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579518: SYSTEMS AND METHODS FOR PROVIDING CROSS-SECTIONAL SCALING (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview: 99% (+27.9%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 646 resolved cases by this examiner. Grant probability derived from career allow rate.
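The headline grant probability is a straight career allow rate, reproducible from the granted/resolved counts in the Examiner Intelligence section. (The 99% with-interview figure additionally folds in the reported +27.9% lift; the underlying with/without-interview case split is not shown on this page, so that adjustment is not reproduced here.)

```python
# Career counts from the Examiner Intelligence section above
granted, resolved = 543, 646

allow_rate = granted / resolved   # fraction of resolved cases that granted
print(f"Career allow rate: {allow_rate:.1%}")   # 84.1%, displayed as 84%
```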
