Prosecution Insights
Last updated: April 19, 2026
Application No. 18/596,820

SYSTEM AND METHOD OF PROVIDING A CLOUD-SERVICE PROVIDER EXCHANGE

Final Rejection §103

Filed: Mar 06, 2024
Examiner: ALGIBHAH, HAMZA N
Art Unit: 2441
Tech Center: 2400 — Computer Networks
Assignee: Adaptive Computing Enterprises Inc.
OA Round: 4 (Final)

Grant Probability: 79% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 11m
Grant Probability with Interview: 82%

Examiner Intelligence

Career Allow Rate: 79% (above average): 566 granted / 713 resolved, +21.4% vs TC avg
Interview Lift: +3.1% (minimal), based on resolved cases with interview
Avg Prosecution: 2y 11m (typical timeline)
Total Applications: 744 across all art units (31 currently pending)
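The headline allow rate is simple arithmetic on the career counts above; a minimal check (variable names are ours, and the 79% card is the rounded value):

```python
# Allow rate from the examiner's career counts shown above.
granted, resolved = 566, 713
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 79.4%, shown rounded as 79% on the card
```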

Statute-Specific Performance

§101: 12.1% (-27.9% vs TC avg)
§103: 50.2% (+10.2% vs TC avg)
§102: 20.0% (-20.0% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)

Comparisons are against an estimated Tech Center average • Based on career data from 713 resolved cases

Office Action

§103
Details

Claims 1-5, 7, 10-14, 16-17, and 20-21 are pending. Claims 1-5, 7, 10-14, 16-17, and 20-21 are rejected.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 7, 10-14, 16-17 and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Horvitz et al (Pub. No.: US 2010/0332262 A1) in view of MANGLIK et al (Pub. No.: US 2012/0239739 A1) and Bhageria et al (Pub. No.: US 2019/0155643 A1).

As per claim 1, Horvitz discloses a method comprising:

- receiving, at a compute exchange (resource broker), a request from a user for workload to be processed in (Horvitz, Fig 8, paragraph 0084, wherein at block 802, the resource broker 110 may receive a computing task from a customer 104. In various embodiments, the computing task may be any task that is suitable for performance using cloud computing. For example, but not as a limitation, the computing task may be the optimization of a database by at least one of cloud computing providers 102(1)-102(n));

- receiving, at the compute exchange, respective (Horvitz, Fig 8, paragraphs 0085-0087, wherein at decision block 804, the resource broker 110 may decide whether to solicit bids from a plurality of cloud computing providers 102(1)-102(n) for the performance of the computing task. In various embodiments, the bids may be for the lowest cost for the performance of the computing.
At decision block 806, the resource broker 110 may determine whether one or more bids are received from the plurality of cloud computing providers 102(1)-102(n) for the performance of the computing task. If the resource broker 110 determines that one or more bids are received ("yes" at decision block 806), the process 800 may proceed to block 808);

- determining, at the compute exchange, a first chosen cloud service provider based on the respective real-time pricing and a second chosen cloud service provider based on the respective real-time pricing (Horvitz, Fig 8, paragraphs 0087 and 0093, wherein at block 808, the resource broker 110 may perform the computing task using the one of cloud computing providers 102(1)-102(n) that submitted the most advantageous bid for the customer. For example, but not as a limitation, the most advantageous bid may be the lowest cost bid for the performance of the computing task, the bid for the shortest latency response time during the performance of the computing task, or a bid for the shortest completion time for the computing task. Horvitz paragraph 0093 states: "[0093] It will be appreciated that while the process 800 is described with respect to a computing task received from the customer, the process 800 may be equally applicable to each portion of the computing task. In other words, the resource broker 110 may elect to abstract the computing task into a plurality of sub computing tasks, and then apply the process 800 to each of the sub computing tasks." Thus, Horvitz discloses a single computing task divided into a plurality of sub-tasks (for example, a first sub-task and a second sub-task). For the first sub-task, cloud computing provider 102(1) can be selected as the provider with the most advantageous bid to perform the first sub-task, and the first sub-task is processed on cloud computing provider 102(1). For the second sub-task, the process is repeated and a different cloud computing provider, such as 102(2), can be selected as the provider with the most advantageous bid to perform the second sub-task), wherein the first chosen cloud service provider is different from the second chosen cloud service provider (Horvitz, paragraph 0093, wherein "It will be appreciated that while the process 800 is described with respect to a computing task received from the customer, the process 800 may be equally applicable to each portion of the computing task. In other words, the resource broker 110 may elect to abstract the computing task into a plurality of sub computing tasks, and then apply the process 800 to each of the sub computing tasks"; wherein Horvitz does not require the sub-tasks to be performed by the same cloud computing provider. Instead, the selection of the cloud computing provider is based on the most advantageous bid, which may be the lowest cost bid, the bid for the shortest latency response time, or a bid for the shortest completion time. Thus, for a first sub-task, cloud computing provider 102(1) can submit the most advantageous (lowest cost) bid for the first task, and cloud computing provider 102(2) can submit the most advantageous (lowest cost) bid for the second task);

- dividing, based on one or more parameters associated with one or more of the respective (Horvitz, Fig 8, paragraph 0085, wherein at decision block 804, the resource broker 110 may decide whether to solicit bids from a plurality of cloud computing providers 102(1)-102(n) for the performance of the computing task.
In various embodiments, the bids may be for the lowest cost for the performance of the computing, the shortest response latency, the shortest computing task completion time, and/or other performance characteristics desired by the customer), the workload into a first portion of the workload and a second portion of the workload (Horvitz, Fig 8, paragraph 0093, wherein it will be appreciated that while the process 800 is described with respect to a computing task received from the customer, the process 800 may be equally applicable to each portion of the computing task. In other words, the resource broker 110 may elect to abstract the computing task into a plurality of sub computing tasks, and then apply the process 800 to each of the sub computing tasks);

- causing the first portion of the workload to be processed on the first chosen cloud service provider, wherein the compute exchange determines allocation based at least in part on an availability or utilization of the (Horvitz, Fig 8, paragraphs 0087 and 0093, wherein at block 808, the resource broker 110 may perform the computing task using the one of cloud computing providers 102(1)-102(n) that submitted the most advantageous bid for the customer. For example, but not as a limitation, the most advantageous bid may be the lowest cost bid for the performance of the computing task, the bid for the shortest latency response time during the performance of the computing task, or a bid for the shortest completion time for the computing task); and

- causing the second portion of the workload to be processed on the second chosen cloud service provider (Horvitz, Fig 8, paragraphs 0087 and 0093, wherein at block 808, the resource broker 110 may perform the computing task using the one of cloud computing providers 102(1)-102(n) that submitted the most advantageous bid for the customer. For example, but not as a limitation, the most advantageous bid may be the lowest cost bid for the performance of the computing task, the bid for the shortest latency response time during the performance of the computing task, or a bid for the shortest completion time for the computing task).

Even though Horvitz discloses that the auction may be conducted in real time following the receipt of the computing task, Horvitz does not explicitly disclose an on-premises computing environment or that the pricing is real-time pricing. However, using real-time pricing is well known in the art. For example, MANGLIK discloses that the pricing is real-time pricing (MANGLIK, paragraph 0110, wherein a first adaptor such as a Price Profile adaptor may provide updated cloud pricing for clouds 172, including spot and reserved pricing. The data source for the updated pricing may be agents and/or APIs used to query cloud providers, and/or a database with dynamically updated real time cloud pricing information. In one embodiment, the dynamically obtained real time cloud pricing may be used by an algorithm associated with Price Profile adaptor to update Metrics Data 365 with new cloud pricing information in Adapted Metrics 460, which may be passed to a second adaptor and/or stored in database 350). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate MANGLIK's teachings into Horvitz to achieve the claimed limitations, because this would have provided a way to ensure that the customer selects the most suitable provider based on accurate (most updated) factors/prices.

In addition, using an on-premises computing environment for at least one of the cloud computing environments is well known in the art. For example, Bhageria discloses an on-premises computing environment (Bhageria, Fig 5, paragraphs 0025-0028, wherein "Private cloud: the cloud infrastructure is operated solely for an organization.
It may be managed by the organization or a third party and may exist on-premises or off-premises. Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises. Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds)."). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate Bhageria's teachings into Horvitz and MANGLIK to achieve the claimed limitations, because this would have provided a way to improve the efficiency and performance of the system by considering cloud computing environments that exist on-premises and off-premises.

As per claim 2, claim 1 is incorporated and MANGLIK further discloses wherein the price for processing the workload is obtained from the user or obtained by the compute exchange based on characteristics of the workload (MANGLIK, paragraph 0110, wherein a first adaptor such as a Price Profile adaptor may provide updated cloud pricing for clouds 172, including spot and reserved pricing. The data source for the updated pricing may be agents and/or APIs used to query cloud providers, and/or a database with dynamically updated real time cloud pricing information. In one embodiment, the dynamically obtained real time cloud pricing may be used by an algorithm associated with Price Profile adaptor to update Metrics Data 365 with new cloud pricing information in Adapted Metrics 460, which may be passed to a second adaptor and/or stored in database 350).

As per claim 3, claim 1 is incorporated and MANGLIK further discloses wherein the respective real-time pricing of respective cloud compute resources is based on a current demand (MANGLIK, paragraph 0110, wherein a first adaptor such as a Price Profile adaptor may provide updated cloud pricing for clouds 172, including spot and reserved pricing. The data source for the updated pricing may be agents and/or APIs used to query cloud providers, and/or a database with dynamically updated real time cloud pricing information. In one embodiment, the dynamically obtained real time cloud pricing may be used by an algorithm associated with Price Profile adaptor to update Metrics Data 365 with new cloud pricing information in Adapted Metrics 460, which may be passed to a second adaptor and/or stored in database 350).

As per claim 4, claim 1 is incorporated and Horvitz further discloses wherein determining the first chosen cloud service provider occurs by a user selection or an automated selection by the compute exchange based on the one or more parameters (Horvitz, Fig 8, paragraph 0087, wherein at block 808, the resource broker 110 may perform the computing task using the one of cloud computing providers 102(1)-102(n) that submitted the most advantageous bid for the customer. For example, but not as a limitation, the most advantageous bid may be the lowest cost bid for the performance of the computing task, the bid for the shortest latency response time during the performance of the computing task, or a bid for the shortest completion time for the computing task).

As per claim 5, claim 1 is incorporated and Horvitz further discloses receiving characteristics of compute resources that would be offered at the respective real-time pricing, wherein the first chosen cloud service provider and the second chosen cloud service provider are determined based on a lowest price for the respective real-time pricing or the characteristics of the compute resources even when the compute resources are not the lowest-priced compute resources (Horvitz, Fig 8, paragraph 0087, wherein at block 808, the resource broker 110 may perform the computing task using the one of cloud computing providers 102(1)-102(n) that submitted the most advantageous bid for the customer. For example, but not as a limitation, the most advantageous bid may be the lowest cost bid for the performance of the computing task, the bid for the shortest latency response time during the performance of the computing task, or a bid for the shortest completion time for the computing task).

As per claim 7, claim 1 is incorporated and Horvitz further discloses wherein the compute exchange comprises: a workload manager, a high-performance computing suite, an on-demand data center engine, and one or more applications available for the user (Horvitz, Fig 1, paragraph 0032, wherein the resource broker 110 may be an entity that facilitates interactions between one or more of the cloud computing resource providers 102(1)-102(n) and the customers 104. Thus, the customers 104 may obtain the use of computing resources without dealing directly with the cloud computing providers 102(1)-102(n). For example, but not as a limitation, the resource broker 110 may locate and obtain the most cost-effective service capability from one or more of cloud computing providers 102(1)-102(n). In various embodiments, as further described below, the resource broker 110 may negotiate for computing resources from one or more of the cloud computing providers 102(1)-102(n), provide computing tasks to selected ones of the cloud computing providers 102(1)-102(n) on behalf of the customers 106, provide results for the customers 104, collect payments from the customers, and provide compensation to the utilized ones of cloud computing providers 102(1)-102(n). In at least some embodiments, the resource broker 110 may perform these actions via the use of data 112 (e.g., prior performance and cost history) on one or more of the cloud computing providers 102(1)-102(n). The resource broker 110 may also derive gain from the difference between the payments received from the customers 104 and the compensation paid to the cloud computing providers as reward for the services provided. It will be appreciated that while only a single resource broker 110 is illustrated in FIG. 1, a plurality of resource brokers 110 may interact with the cloud computing providers 102(1)-102(n) and customers 104); and wherein the method further comprises deploying a third portion of the workload on an on-premises compute environment (Horvitz, Fig 8, paragraph 0085, wherein at decision block 804, the resource broker 110 may decide whether to solicit bids from a plurality of cloud computing providers 102(1)-102(n) for the performance of the computing task. In various embodiments, the bids may be for the lowest cost for the performance of the computing, the shortest response latency, the shortest computing task completion time, and/or other performance characteristics desired by the customer).

Claims 10-14, 16-17 and 20-21 are rejected under the same rationale as claims 1-5, 7.
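For orientation only, the Horvitz process 800 flow that the rejection leans on (solicit bids per portion at block 804, pick the most advantageous bid at block 808, and repeat the process for each sub-task per paragraph 0093) can be sketched as follows. This is a hypothetical reconstruction, not code from any cited reference; the provider names, quote functions, and the lowest-cost selection criterion are illustrative assumptions.

```python
# Illustrative sketch of the broker flow the rejection maps onto Horvitz Fig 8.
# All names and prices are hypothetical.

def select_provider(bids):
    """Pick the most advantageous bid; here, the lowest cost (block 808)."""
    return min(bids, key=lambda b: b["price"])

def broker(workload, providers):
    """Apply the bid/select cycle to each sub-task (paragraph 0093)."""
    placements = []
    for sub_task in workload["sub_tasks"]:
        # Solicit a bid from every provider for this portion (block 804).
        bids = [{"provider": p["name"], "price": p["quote"](sub_task)}
                for p in providers]
        winner = select_provider(bids)
        placements.append((sub_task, winner["provider"]))
    return placements

# Two providers quoting different prices per portion, so the two portions
# can land on two different providers.
providers = [
    {"name": "102(1)", "quote": lambda t: 5 if t == "portion-1" else 9},
    {"name": "102(2)", "quote": lambda t: 8 if t == "portion-1" else 4},
]
workload = {"sub_tasks": ["portion-1", "portion-2"]}
print(broker(workload, providers))
# [('portion-1', '102(1)'), ('portion-2', '102(2)')]
```

Because each portion runs its own auction, the two portions can be placed on different providers, which is the reading the examiner draws from paragraph 0093.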
Response to Arguments

Applicant's arguments filed 02/20/2026 have been fully considered but they are not persuasive.

Applicant argues in remarks: (1) The rejection does not establish the required two-provider, split-workload processing. The cited portion of Horvitz relied upon for the "processing" steps describes the resource broker performing the computing task using one of the cloud computing providers that submitted the most advantageous bid, i.e., a single provider performs the computing task. That disclosure cannot satisfy the amended claim requirement that two different providers are chosen and used to process different portions of the same workload, with the first portion processed on the first provider and the second portion processed on the second provider. The Office Action's mapping underscores the deficiency: the same disclosure is used for two different claimed processing steps. The Office Action applies the same Horvitz disclosure (the same passages describing performance of the computing task using the one provider with the most advantageous bid) to map both (a) "causing the first portion of the workload to be processed on the first chosen cloud service provider" and (b) the second, separate claimed processing step. Specifically, paragraphs 0087 and 0093 and block 808 are cited twice for causing the first portion of the workload to be processed by the first chosen cloud service provider. Applicant notes that this feature is mentioned twice and there is no mention of causing the second portion of the workload to be processed on the second chosen cloud service provider. Applicant considers this an error in the Office Action and respectfully requests clarification. Even if the Office Action's repetition of the "first portion" language is treated as a clerical error, the mapping still relies on the same single-provider performance disclosure to meet two distinct limitations that require two distinct acts, namely, processing a first portion on a first provider and processing a second portion on a second provider, and the claim now further requires that those providers are different. A single disclosed act of performing a task using one selected provider does not disclose or suggest two separate acts of processing different portions of a workload on two different providers.

(1) The examiner respectfully disagrees. First, the examiner has corrected the typo of repeating the same feature twice. However, the examiner maintains the position of using the same citation to disclose both features. Second, the flowchart described in Fig 8 teaches determining a first chosen cloud service provider based on the respective real-time pricing (Fig 8, step 808, wherein the cloud computing provider with the most advantageous bid is the first chosen cloud service provider). Third, the same flowchart described in Fig 8 teaches determining a second chosen cloud service provider based on the respective real-time pricing (Fig 8, step 808, wherein the cloud computing provider with the most advantageous bid is the second chosen cloud service provider). This is true because Horvitz teaches that the steps of the flowchart described in Fig 8 are equally applicable to each portion of the computing task. Horvitz paragraph 0093 states: [0093] It will be appreciated that while the process 800 is described with respect to a computing task received from the customer, the process 800 may be equally applicable to each portion of the computing task. In other words, the resource broker 110 may elect to abstract the computing task into a plurality of sub computing tasks, and then apply the process 800 to each of the sub computing tasks.
Thus, Horvitz discloses a single computing task divided into a plurality of sub-tasks (for example, a first sub-task and a second sub-task). For the first sub-task, cloud computing provider 102(1) can be selected as the provider with the most advantageous bid to perform the first sub-task, and the first sub-task is processed on cloud computing provider 102(1). For the second sub-task, the process is repeated and a different cloud computing provider, such as 102(2), can be selected as the provider with the most advantageous bid to perform the second sub-task.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAMZA N ALGIBHAH whose telephone number is (571) 270-7212. The examiner can normally be reached 7:30 am - 3:30 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Wing Chan, can be reached at (571) 272-7493. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HAMZA N ALGIBHAH/
Primary Examiner, Art Unit 2441

Prosecution Timeline

Mar 06, 2024: Application Filed
Apr 19, 2025: Non-Final Rejection — §103
Jul 24, 2025: Response Filed
Aug 05, 2025: Final Rejection — §103
Oct 07, 2025: Response after Non-Final Action
Nov 07, 2025: Request for Continued Examination
Nov 13, 2025: Response after Non-Final Action
Nov 20, 2025: Non-Final Rejection — §103
Feb 20, 2026: Response Filed
Mar 07, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology:

Patent 12602224: NON-TERMINATING FIRMWARE UPDATE (2y 5m to grant; granted Apr 14, 2026)
Patent 12598111: ENABLING INTENT-BASED NETWORK MANAGEMENT WITH GENERATIVE AI AND DIGITAL TWINS (2y 5m to grant; granted Apr 07, 2026)
Patent 12598656: METHOD FOR EDGE COMPUTING (2y 5m to grant; granted Apr 07, 2026)
Patent 12598096: METHOD AND APPARATUS FOR ACCESSING VIRTUAL MACHINE, DEVICE AND STORAGE MEDIUM (2y 5m to grant; granted Apr 07, 2026)
Patent 12528442: SYSTEM, METHOD, AND APPARATUS FOR MANAGING VEHICLE DATA COLLECTION (2y 5m to grant; granted Jan 20, 2026)

Study what changed to get past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 79%
With Interview: 82% (+3.1%)
Median Time to Grant: 2y 11m
PTA Risk: High

Based on 713 resolved cases by this examiner. Grant probability derived from career allow rate.
