Prosecution Insights
Last updated: April 19, 2026
Application No. 18/487,564

RESOURCE USAGE MANAGEMENT FOR INTEGRATION FLOWS IN A CLOUD ENVIRONMENT

Non-Final OA (§103)
Filed: Oct 16, 2023
Examiner: CHU JOY, JORGE A
Art Unit: 2195
Tech Center: 2100 — Computer Architecture & Software
Assignee: SAP SE
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% — above average (314 granted / 408 resolved; +22.0% vs TC avg)
Interview Lift: +37.3% (strong) for resolved cases with interview
Typical Timeline: 3y 1m average prosecution; 41 applications currently pending
Career History: 449 total applications across all art units
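As a sanity check, the headline allow-rate figures can be reproduced from the raw counts shown above. A minimal sketch in Python; note the 55% Tech Center average is an assumption inferred from the +22.0% delta, not a value reported directly:

```python
# Reproduce the examiner's career allow-rate figures from the raw counts.
granted = 314                     # applications granted (reported above)
resolved = 408                    # applications resolved (reported above)

allow_rate = granted / resolved   # career allow rate
tc_avg = 0.55                     # assumed TC average, implied by 77.0% - 22.0%

print(f"Career allow rate: {allow_rate:.1%}")            # 77.0%
print(f"Delta vs TC avg:   {allow_rate - tc_avg:+.1%}")  # +22.0%
```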

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 55.3% (+15.3% vs TC avg)
§102: 3.2% (-36.8% vs TC avg)
§112: 19.6% (-20.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 408 resolved cases.
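The per-statute deltas in the table are internally consistent: subtracting each "vs TC avg" delta from the examiner's rate backs out the same implied Tech Center baseline in every row. A quick sketch in Python, with values transcribed from the table:

```python
# Back out the implied TC baseline from each statute's rate and delta.
stats = {
    "101": (11.0, -29.0),
    "103": (55.3, +15.3),
    "102": (3.2, -36.8),
    "112": (19.6, -20.4),
}
for statute, (rate, delta) in stats.items():
    implied_tc_avg = rate - delta
    print(f"Sec. {statute}: implied TC avg = {implied_tc_avg:.1f}%")
# Every row yields 40.0%, i.e. the four deltas appear to share one
# baseline estimate rather than per-statute averages.
```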

Office Action

§103
DETAILED ACTION

Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kolev et al. (US 11,601,326 B1) in view of Bougon et al. (US 2021/0142254 A1).

Regarding claim 1, Kolev teaches an integration platform system (Fig. 9, Platform 900; Col. 2, lines 28-35: Integration system) comprising: at least one computing device implementing a cloud environment (Col. 1, lines 1-2: a cloud computing environment), the at least one computing system being programmed to execute: an integration runtime, the integration runtime being configured to run a plurality of integration flows associated with a first tenancy at the cloud environment, a first integration flow of the plurality of integration flows configured to interface at least one message between a first software component and a second software component (Col. 2, lines 43-55: As used herein, an “integration flow” refers to a software instance for integrating multiple applications via message passing. Referring to FIG. 1, integration flow 105 may be defined by a tenant (i.e., a customer of integration middleware) in a multi-tenant cloud computing environment to specify how messages are integrated between systems such as, for example, the sender system 110 (e.g., a system of a buyer executing an application that submits a purchase order using a first communication protocol) and the receiver system 115 (e.g., a system of a seller executing an application that receives the purchase order using a second communication protocol). In some embodiments, each tenant may have multiple integration flows.; Col. 5, lines 29-32: Architecture or framework 400 includes an example of a productive tenant 405 having defined a number of integration flows 410 that integrate multiple applications (not shown)).

Kolev teaches a monitor system that is able to “detect an anomaly or problem in applications currently executing in a distributed computing environment. For example, the provider might restart, provide additional computing resources to, for example, applications or other remedies in response to a detected anomaly or problem to improve performance.” (Col. 1, lines 26-31). Further, “a monitoring system and process herein may evaluate a rate of successfully processed messages, per integration flow.” (Col. 3, lines 29-31). Lastly, Kolev teaches “FIG. 8 is an outward-facing user interface related to a system and process for detection and characterization of integration flows, in accordance with an example embodiment. FIG. 8 is a human-machine interface display 800 in accordance with some embodiments. The display 800 includes a graphical representation 805 or dashboard that might be used to manage or monitor a problem detection and characterization of integration flows (e.g., associated with a multi-tenant cloud provider).” (Col. 8, lines 28-36)

Kolev does not expressly teach a first agent associated with the integration runtime, the first agent being programmed to monitor usage of a first cloud environment resource by at least one integration flow of the plurality of integration flows; and an integration inspect service, the integration inspect service being programmed to perform operations comprising: accessing first resource usage data generated by the first agent and describing use of the first cloud environment resource by the first integration flow; and serving, to a user computing device, a user interface based at least in part on the first resource usage data.

However, Bougon teaches a first agent associated with the integration runtime, the first agent being programmed to monitor usage of a first cloud environment resource by at least one integration flow [software application service] of the plurality of integration flows [software application services] ([0006]; [0022]; [0033] A cost-to-serve service (CTS) module 105 is configured and implemented with each of the resources to spawn a plurality of CTS agents 107 to each resource, input, controller etc. to monitor and to run on every public cloud hardware host, collect service usage statistics for every service (i.e., integration flow) and instance in use and send these metrics to the Cost-To-Serve Service module 105 to determine real-time usage and costs of the deployed resources in the configured pod architecture…Each CTS agent 107 receives data to send for processing by the CTS service module 105 and for storing at a CTS metric database 110.); and an integration inspect service, the integration inspect service being programmed to perform operations comprising: accessing first resource usage data generated by the first agent and describing use of the first cloud environment resource by the first integration flow ([0033] Each CTS agent 107 receives data to send for processing by the CTS service module 105 and for storing at a CTS metric database 110. The data sent is analyzed and displayed by a CTS analytics engine 120.; [0037]; Fig. 9); and serving, to a user computing device, a user interface based at least in part on the first resource usage data ([0033] Each CTS agent 107 receives data to send for processing by the CTS service module 105 and for storing at a CTS metric database 110. The data sent is analyzed and displayed by a CTS analytics engine 120.; Fig. 9; [0043] Additionally, a processing system 920 runs loads 915 to measure and produce the CTS metrics 920 as instructed by instructions in memory 922. The processing system 920 uses a processor configured for metric processing 925 and implements various CTS models 930. The results are displayed by an analysis engine 935 that, via a user interface, can display multiple CTS analytics in a dashboard.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bougon with the teachings of Kolev to spawn agents on each cloud resource to monitor metrics such as costs, storage, IOPS, etc. for a given service deployed on a cloud in a multi-tenant environment. The modification would have been motivated by the desire of allowing users of services to review resource usage patterns of their hosted services.

Regarding claim 2, Kolev teaches an integration flow of the plurality of integration flows (Col. 2, lines 43-55: As used herein, an “integration flow” refers to a software instance for integrating multiple applications via message passing.). In addition, Bougon teaches the at least one computing device being further programmed to execute a second agent associated with the integration runtime, the second agent being programmed to monitor usage of a second cloud environment resource by the at least one integration flow [software application service] of the plurality of integration flows [software application services], the user interface also being based at least in part on second resource usage data generated by the second agent ([0006]; [0022]; [0033] A cost-to-serve service (CTS) module 105 is configured and implemented with each of the resources to spawn a plurality of CTS agents 107 to each resource, input, controller etc. to monitor and to run on every public cloud hardware host, collect service usage statistics for every service (i.e., integration flow) and instance in use and send these metrics to the Cost-To-Serve Service module 105 to determine real-time usage and costs of the deployed resources in the configured pod architecture…Each CTS agent 107 receives data to send for processing by the CTS service module 105 and for storing at a CTS metric database 110. The data sent is analyzed and displayed by a CTS analytics engine 120.; Fig. 9; [0043] Additionally, a processing system 920 runs loads 915 to measure and produce the CTS metrics 920 as instructed by instructions in memory 922. The processing system 920 uses a processor configured for metric processing 925 and implements various CTS models 930. The results are displayed by an analysis engine 935 that, via a user interface, can display multiple CTS analytics in a dashboard.).

Regarding claim 3, Kolev teaches further programmed to execute: a multitenant service, the multitenant service being programmed to provide a service to at least one executable at the first tenancy and at least one executable at a second tenancy at the cloud environment (Col. 2, lines 40-50: Referring to FIG. 1, integration flow 105 may be defined by a tenant (i.e., a customer of integration middleware) in a multi-tenant cloud computing environment to specify how messages are integrated between systems such as, for example, the sender system 110 (e.g., a system of a buyer executing an application that submits a purchase order using a first communication protocol) and the receiver system 115 (e.g., a system of a seller executing an application that receives the purchase order using a second communication protocol). In some embodiments, each tenant may have multiple integration flows.), and at least one integration flow of the plurality of integration flows (Col. 2, lines 43-55: As used herein, an “integration flow” refers to a software instance for integrating multiple applications via message passing; Col. 5, lines 29-32: Architecture or framework 400 includes an example of a productive tenant 405 having defined a number of integration flows 410 that integrate multiple applications (not shown)). In addition, Bougon teaches a multitenant service agent, the multitenant service agent being programmed to monitor usage of the multitenant service by at least one integration flow of the plurality of integration flows [software instances] ([0006] systems implementing cost-to-serve agents, which are executed on hosted public cloud hardware and collect service usage statistics in a multi-tenant environment.).

Regarding claim 4, Kolev teaches at least one integration flow of the plurality of integration flows (Col. 2, lines 43-55: As used herein, an “integration flow” refers to a software instance for integrating multiple applications via message passing; Col. 5, lines 29-32: Architecture or framework 400 includes an example of a productive tenant 405 having defined a number of integration flows 410 that integrate multiple applications (not shown)). In addition, Bougon teaches the integration inspect service being further programmed to perform operations comprising accessing multitenant service resource usage data generated by the multitenant service agent and describing use of the multitenant service by the at least one integration flow [software instance] of the plurality of integration flows, the user interface also being based at least in part on the multitenant service resource usage data (Abstract: a CTS agent transaction module to publish a set of metrics established by the CTS agent for each instance and usage type; [0028]; [0033] metrics database).

Regarding claim 5, Bougon teaches the first agent being programmed to perform operations comprising writing the first resource usage data to an intermediate storage at the cloud environment, and the integration inspect service being further programmed to perform operations comprising accessing the first resource usage data from the intermediate storage ([0033] Each CTS agent 107 receives data to send for processing by the CTS service module 105 and for storing at a CTS metric database 110. The data sent is analyzed and displayed by a CTS analytics engine 120.).

Regarding claim 6, Kolev teaches the integration inspect service being further programmed to perform operations comprising: determining that the first resource usage data describes use of the first cloud environment resource by the first integration flow that does not meet a threshold usage (Col. 4, line 50 through Col. 5, line 11: In some aspects, the present disclosure refers to a detection of a “significant” increase of failed messages. Some embodiments herein are related to and concerned with increases in failed messages of integration flows that are impactful to the operation of the integration flows. Accordingly, the term “significant” increase of failed messages is analogous to an “impactful” increase in failed messages that negatively affects an integration flow. For example, embodiments herein might not be interested in or concerned with the failure of individual messages since, for example, a failure of 1 out of 100 messages is not significant to impact or disrupt the operation of an integration flow (even though the failed message will have to be resent). However, 40 out of 100 messages indicating a service is unreachable might compromise an integration flow to the extent that it no longer functions at an acceptable level. In some embodiments, an increase in failed messages after an event of about at least 20% relative to failed messages in a reference interval before the event is indicative of the event causing a problem. In some other embodiments, an increase in failed messages after an event of about at least 40% relative to failed messages in a reference interval before the event is indicative of the event triggering the creation of a problem. In some embodiments, the value of 40% might be selectively adjusted, based on, for example, a desired quality of service for a particular integration flow. For example, in a use case where every message is considered important for a particular integration flow, a threshold of 2% might be set for that integration flow, or for the tenant.); and sending, via the user interface, an alert to the user computing device, the alert describing at least one of the first cloud environment resource and the first integration flow (FIG. 8 is an outward-facing user interface related to a system and process for detection and characterization of integration flows, in accordance with an example embodiment. FIG. 8 is a human-machine interface display 800 in accordance with some embodiments. The display 800 includes a graphical representation 805 or dashboard that might be used to manage or monitor a problem detection and characterization of integration flows (e.g., associated with a multi-tenant cloud provider). In particular, selection of an element (e.g., an integration flow via a touchscreen or computer mouse pointer 810) might result in the display of a popup window that contains configuration data. Display 800 may also include a user selectable “Edit System” icon 815 to request system changes (e.g., to investigate problems associated with integration or improve system performance).).

Regarding claim 7, Kolev teaches the integration platform system of claim 1, the integration inspect service being further programmed to perform operations comprising: determining that the first resource usage data describes use of the first cloud environment resource by the first integration flow that does not meet a threshold usage (Col. 4, lines 3-6: As an example, event 205 might correspond to a middleware software update, where the software was executing properly and); and responsive to determining that the first resource usage data describes use of the first cloud environment resource by the first integration flow that does not meet a threshold usage, modifying at least one parameter of the first integration flow (Col. 4, lines 3-19: the middleware software was updated at some point in time (i.e., the event). An analysis of message states after the software update event during a current interval of time can be compared to an analysis of message states during an interval of time before the software update (i.e., during a reference interval), to determine whether message failures increased significantly after the software update event. Note that this message state analysis may be performed for each integration flow. In the instance the message failures after the software update event significantly increased relative to the failed messages in the reference interval before the software update event, then the software update event can be deemed to have caused the problem for some reason (not yet determined). As illustrated by this example, the event is not static.).

Regarding claim 8, Kolev teaches the integration inspect service being further programmed to perform operations comprising: providing, via the user interface, a first screen displaying usage of the first cloud environment resource during a first time period, the first screen comprising a user interface element; receiving an indication that the user interface element has been selected; and responsive to the indication that the user interface element has been selected, providing, via the user interface, a second screen displaying usage of the first cloud environment resource by the first integration flow over the first time period and usage of the first cloud environment resource by a second integration flow over the first time period (Col. 8, lines 28-42: FIG. 8 is an outward-facing user interface related to a system and process for detection and characterization of integration flows, in accordance with an example embodiment. FIG. 8 is a human-machine interface display 800 in accordance with some embodiments. The display 800 includes a graphical representation 805 or dashboard that might be used to manage or monitor a problem detection and characterization of integration flows (e.g., associated with a multi-tenant cloud provider). In particular, selection of an element (e.g., an integration flow via a touchscreen or computer mouse pointer 810) might result in the display of a popup window that contains configuration data. Display 800 may also include a user selectable “Edit System” icon 815 to request system changes (e.g., to investigate problems associated with integration or improve system performance).).
Regarding claim 9, Bougon teaches the first cloud environment resource being at least one of a memory resource, a network resource, a database resource, and a central processing unit (CPU) resource ([0063]).

Regarding claim 10, it is a method type claim having similar limitations as claim 1 above. Therefore, it is rejected under the same rationale above.

Regarding claim 11, it is a method type claim having similar limitations as claim 2 above. Therefore, it is rejected under the same rationale above.

Regarding claim 12, it is a method type claim having similar limitations as claim 3 above. Therefore, it is rejected under the same rationale above.

Regarding claim 13, it is a method type claim having similar limitations as claim 4 above. Therefore, it is rejected under the same rationale above.

Regarding claim 14, it is a method type claim having similar limitations as claim 5 above. Therefore, it is rejected under the same rationale above.

Regarding claim 15, it is a method type claim having similar limitations as claim 6 above. Therefore, it is rejected under the same rationale above.

Regarding claim 16, it is a method type claim having similar limitations as claim 7 above. Therefore, it is rejected under the same rationale above.

Regarding claim 17, it is a method type claim having similar limitations as claim 8 above. Therefore, it is rejected under the same rationale above.

Regarding claim 18, it is a method type claim having similar limitations as claim 9 above. Therefore, it is rejected under the same rationale above.

Regarding claim 19, it is a media/product type claim having similar limitations as claim 1 above. Therefore, it is rejected under the same rationale above.

Regarding claim 20, it is a claim having similar limitations as claim 3 above. Therefore, it is rejected under the same rationale above.
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORGE A CHU JOY-DAVILA whose telephone number is (571)270-0692. The examiner can normally be reached Monday-Friday, 6:00am-5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee J Li, can be reached at (571)272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JORGE A CHU JOY-DAVILA/
Primary Examiner, Art Unit 2195

Prosecution Timeline

Oct 16, 2023
Application Filed
Jan 23, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602244: OFFLOADING PROCESSING TASKS TO DECOUPLED ACCELERATORS FOR INCREASING PERFORMANCE IN A SYSTEM ON A CHIP (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596565: USER ASSIGNED NETWORK INTERFACE QUEUES (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591821: DYNAMIC ADJUSTMENT OF WELL PLAN SCHEDULES ON DIFFERENT HIERARCHICAL LEVELS BASED ON SUBSYSTEMS ACHIEVING A DESIRED STATE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585490: MIGRATING VIRTUAL MACHINES WHILE PERFORMING MIDDLEBOX SERVICE OPERATIONS AT A PNIC (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579065: LIGHTWEIGHT KERNEL DRIVER FOR VIRTUALIZED STORAGE (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 77%
With Interview: 99% (+37.3%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 408 resolved cases by this examiner. Grant probability derived from career allow rate.
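One reading consistent with the figures above is that the interview lift is a percentage-point gap between the allow rate for cases resolved with an interview and those resolved without one. A sketch under that assumption (Python; the model itself is an assumption, not stated by the report):

```python
# Assumed model: interview lift = allow rate (with interview)
#                               - allow rate (without interview), in points.
with_interview = 0.99    # reported allow rate for cases with an interview
lift = 0.373             # reported interview lift

without_interview = with_interview - lift
print(f"Implied allow rate without interview: {without_interview:.1%}")  # 61.7%
```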
