DETAILED ACTION

This Office action is in response to the application filed on 11/13/2023. Claims 1–20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 4–7, 14–16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Kairali et al. (USPAT 11561849, prior art part of IDS dated 11/13/2023, hereinafter Kairali), in view of Carpenter et al. (US 20190130327, hereinafter Carpenter).

As per claim 1, Kairali discloses:

A system for monitoring and adjusting an application programming interface (API) function, the system comprising: one or more memories; and one or more processors, communicatively coupled to the one or more memories (Kairali figure 1), configured to: provide traffic information associated with the API function to a machine learning model; (Kairali col 25, line 55 – col 26, line 7: “During step 601, an API call transmitted by a user of an application 203 may be received by the service mesh 211.
The API call may be associated with a user profile of the user making the API call and invoke a microservice chain of the application 203 comprising one or more services 215 that make up the microservice chain… In step 607, knowledge base 208 checks and analyzes the current error rates and historical error rates for the microservice chain being invoked for the user profile as well as the error rates of the individual microservices of the chain. In step 609, the health status of each individual microservice 215 in the microservice chain is mapped to the microservice chain's error rate by the health module 206.” Examiner notes that the microservice 215 is mapped to the claimed “API function”.)

determine, based on output from the machine learning model, whether the API function complies with one or more [metrics] associated with the API function; transmit, to an administrator device, a report indicating whether the API function complies with the one or more [metrics]; (Kairali col 26, lines 12–22: “In step 610, based on the error rate for the user profile submitting the API call, mapping of the health status of the individual microservices 215, and historical errors in the service mesh history, the knowledge base 208 predicts the error rate of the API call for the user profile invoking the API call. The predicted error rate outputted by the knowledge base 208 is reported to the second AI module 204, for example via reporting module 216.”; col 26, lines 23 –: “In step 611, upon analysis of the error rate predicted by the first AI module 202, a determination is made by the analysis module 210 of the second AI module 204, whether or not the predicted error rate was made by the first AI module 202 with a level of confidence above a threshold level established by the application 203 and/or service mesh 211.
”; col 26, line 65 – col 27, line 4: “if in step 613 a determination is made that API call failure is predicted by the first AI module 202, in step 617 the first AI module 202 may further predict and report to the second AI module 204, the type of failure predicted to occur, the portion of the code or module predicted to fail and whether or not the errors predicted to cause the application failure are expected to be self-healing.”)

receive, from the machine learning model, an indication that the API function is predicted to fail; and transmit an instruction to [modify] the API function based on the indication that the API function is predicted to fail. (Kairali col 26, lines 13–21: “if the API call failures predicted by the first AI module 202 are not considered self-healing by the self-healing module 214, the method 600 may proceed to step 623. During step 623, a reporting module 216 of the first AI module 202 may transmit a notification to the pods, containers, or proxies 217 of the microservices 215 within the service mesh requesting the proxies 217, pods and/or containers hosting the microservices check the current log levels and report back the log levels to the first AI module 202.”; col 27, lines 41–53: “In step 631, in response to the log levels being considered insufficient for the predicted API call error rate, the dynamic log level changer 212 of the second AI module 204 may modify the log levels of the service mesh 211. More specifically, the dynamic log level changer 212 may increase the log level being applied to the proxies 217, pods and/or containers, increasing the amount of information being captured in the logs of the application 203 invoking the service chain(s).
Upon a successful increase of the log levels in step 631, in step 633, the API call is executed by the service mesh 211 via the invoked microservice chain at the logging levels applied to the proxies, pods and/or containers by dynamic log level changer of the second AI module 204.”)

Kairali did not explicitly disclose: wherein the one or more [metrics] comprises one or more requirements in a service level agreement (SLA); wherein the [modify] further comprises to scale the API functions.

However, Carpenter teaches: wherein the one or more [metrics] comprises one or more requirements in a service level agreement (SLA); wherein the [modify] further comprises to scale the API functions. (Carpenter [0022]: “A client may make use of the applications 114 using a service level agreement (SLA) that specifies various services and operational metrics for the services. The SLA may also specify monetary penalties, legal penalties, or both if the operational metrics are not met. For example, if the client sends a service request and the request is not processed within a particular period of time specified in the SLA, then the SLA may specify that the owner of the applications 114 is to pay the client a monetary penalty, resulting in revenue loss for the application owner. The systems and techniques described herein use machine learning to scale the cloud services 102 to avoid violating the SLA and avoid incurring monetary penalties or legal penalties.”)

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Carpenter into that of Kairali in order to have the one or more [metrics] comprise one or more requirements in a service level agreement (SLA) and to have the [modify] further comprise scaling the API functions. Kairali figure 6 describes using the error rate of a microservice to predict whether an API call is going to fail.
One of ordinary skill in the art can easily see that error rate is a commonly used, well-known criterion in SLA metrics. Kairali figure 6 further teaches adjusting the log level of the microservice and service mesh in response to a prediction of the API call failing; however, one of ordinary skill in the art can readily see that other forms of remediation can be used here without deviating from the general teaching of the prior art, such as scaling the resource (microservice) as shown by Carpenter [0022]. Applicants have merely claimed the combination of known parts in the field to achieve predictable results of error prediction and correction, and the claim is therefore rejected under 35 USC 103.

As per claim 2, the combination of Kairali and Carpenter further teaches:

The system of claim 1, wherein the one or more processors are configured to: receive, from the machine learning model, an indication of a suggested configuration change to the API function based on whether the API function complies with the one or more requirements; and transmit an instruction to apply the suggested configuration change. (Kairali col 26, lines 13–21: “if the API call failures predicted by the first AI module 202 are not considered self-healing by the self-healing module 214, the method 600 may proceed to step 623. During step 623, a reporting module 216 of the first AI module 202 may transmit a notification to the pods, containers, or proxies 217 of the microservices 215 within the service mesh requesting the proxies 217, pods and/or containers hosting the microservices check the current log levels and report back the log levels to the first AI module 202.”; col 27, lines 41–53: “In step 631, in response to the log levels being considered insufficient for the predicted API call error rate, the dynamic log level changer 212 of the second AI module 204 may modify the log levels of the service mesh 211.
More specifically, the dynamic log level changer 212 may increase the log level being applied to the proxies 217, pods and/or containers, increasing the amount of information being captured in the logs of the application 203 invoking the service chain(s). Upon a successful increase of the log levels in step 631, in step 633, the API call is executed by the service mesh 211 via the invoked microservice chain at the logging levels applied to the proxies, pods and/or containers by dynamic log level changer of the second AI module 204.”)

As per claim 4, the combination of Kairali and Carpenter further teaches:

The system of claim 1, wherein the traffic information indicates one or more sources associated with inputs to the API function (Kairali col 15, lines 19–22: “Health checking tasks may include determining whether upstream services 215 or instances 213 returned by service discovery are healthy and ready to accept network traffic.”), an average packet size associated with the inputs, or an average response time associated with the API function.

As per claim 5, the combination of Kairali and Carpenter further teaches:

The system of claim 1, wherein the machine learning model is trained using a dataset labeled according to the one or more requirements in the SLA. (Carpenter [0054])

As per claim 6, the combination of Kairali and Carpenter further teaches:

The system of claim 1, wherein the indication that the API function is predicted to fail includes a future datetime. (Kairali col 18, lines 40–45: “The historical compilation of datasets from one or more databases, microservices 215, proxies 217, microservice chains, user profiles, etc., along with user or service mesh administration feedback can be applied to making future predictions about the error rates of one or more API calls.”)

As per claim 7, the combination of Kairali and Carpenter further teaches:

The system of claim 1, wherein the instruction to scale is further based on the traffic information.
(Carpenter [0062])

As per claim 14, Kairali discloses:

A non-transitory computer-readable medium storing a set of instructions for monitoring and adjusting an application programming interface (API) function (Kairali col 5, lines 17–41: CRM), the set of instructions comprising: one or more instructions that, when executed by one or more processors of a device, cause the device to: provide traffic information associated with the API function to a machine learning model; (Kairali col 25, line 55 – col 26, line 7: “During step 601, an API call transmitted by a user of an application 203 may be received by the service mesh 211. The API call may be associated with a user profile of the user making the API call and invoke a microservice chain of the application 203 comprising one or more services 215 that make up the microservice chain… In step 607, knowledge base 208 checks and analyzes the current error rates and historical error rates for the microservice chain being invoked for the user profile as well as the error rates of the individual microservices of the chain. In step 609, the health status of each individual microservice 215 in the microservice chain is mapped to the microservice chain's error rate by the health module 206.” Examiner notes that the microservice 215 is mapped to the claimed “API function”.)

determine, based on output from the machine learning model, whether the API function complies with one or more [metrics] associated with the API function; and transmit, to an administrator device, a report indicating whether the API function complies with the one or more [metrics]. (Kairali col 26, lines 12–22: “In step 610, based on the error rate for the user profile submitting the API call, mapping of the health status of the individual microservices 215, and historical errors in the service mesh history, the knowledge base 208 predicts the error rate of the API call for the user profile invoking the API call.
The predicted error rate outputted by the knowledge base 208 is reported to the second AI module 204, for example via reporting module 216.”; col 26, lines 23 –: “In step 611, upon analysis of the error rate predicted by the first AI module 202, a determination is made by the analysis module 210 of the second AI module 204, whether or not the predicted error rate was made by the first AI module 202 with a level of confidence above a threshold level established by the application 203 and/or service mesh 211.”; col 26, line 65 – col 27, line 4: “if in step 613 a determination is made that API call failure is predicted by the first AI module 202, in step 617 the first AI module 202 may further predict and report to the second AI module 204, the type of failure predicted to occur, the portion of the code or module predicted to fail and whether or not the errors predicted to cause the application failure are expected to be self-healing.”)

Kairali did not explicitly disclose: wherein the one or more [metrics] comprises one or more requirements in a service level agreement (SLA).

However, Carpenter teaches: wherein the one or more [metrics] comprises one or more requirements in a service level agreement (SLA). (Carpenter [0022]: “A client may make use of the applications 114 using a service level agreement (SLA) that specifies various services and operational metrics for the services. The SLA may also specify monetary penalties, legal penalties, or both if the operational metrics are not met. For example, if the client sends a service request and the request is not processed within a particular period of time specified in the SLA, then the SLA may specify that the owner of the applications 114 is to pay the client a monetary penalty, resulting in revenue loss for the application owner.
The systems and techniques described herein use machine learning to scale the cloud services 102 to avoid violating the SLA and avoid incurring monetary penalties or legal penalties.”)

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Carpenter into that of Kairali in order to have the one or more [metrics] comprise one or more requirements in a service level agreement (SLA). Kairali figure 6 describes using the error rate of a microservice to predict whether an API call is going to fail. One of ordinary skill in the art can easily see that error rate is a commonly used, well-known criterion in SLA metrics. Kairali figure 6 further teaches adjusting the log level of the microservice and service mesh in response to a prediction of the API call failing; however, one of ordinary skill in the art can readily see that other forms of remediation can be used here without deviating from the general teaching of the prior art, such as scaling the resource (microservice) as shown by Carpenter [0022]. Applicants have merely claimed the combination of known parts in the field to achieve predictable results of error prediction and correction, and the claim is therefore rejected under 35 USC 103.

As per claim 15, the combination of Kairali and Carpenter further teaches:

The non-transitory computer-readable medium of claim 14, wherein the report comprises a file encoding an indication of whether the API function complies with the one or more requirements. (Kairali col 26, lines 13–21: “if the API call failures predicted by the first AI module 202 are not considered self-healing by the self-healing module 214, the method 600 may proceed to step 623.
During step 623, a reporting module 216 of the first AI module 202 may transmit a notification to the pods, containers, or proxies 217 of the microservices 215 within the service mesh requesting the proxies 217, pods and/or containers hosting the microservices check the current log levels and report back the log levels to the first AI module 202.”; col 27, lines 41–53: “In step 631, in response to the log levels being considered insufficient for the predicted API call error rate, the dynamic log level changer 212 of the second AI module 204 may modify the log levels of the service mesh 211. More specifically, the dynamic log level changer 212 may increase the log level being applied to the proxies 217, pods and/or containers, increasing the amount of information being captured in the logs of the application 203 invoking the service chain(s). Upon a successful increase of the log levels in step 631, in step 633, the API call is executed by the service mesh 211 via the invoked microservice chain at the logging levels applied to the proxies, pods and/or containers by dynamic log level changer of the second AI module 204.”)

As per claim 16, the combination of Kairali and Carpenter further teaches:

The non-transitory computer-readable medium of claim 14, wherein the report comprises instructions to output a user interface (UI), wherein the UI includes a visual indicator, associated with the API function, that indicates whether the API function complies with the one or more requirements. (Kairali col 26, line 65 – col 27, line 4 and Carpenter [0047]: GUI with visual indicators.)

As per claim 18, the combination of Kairali and Carpenter further teaches:

The non-transitory computer-readable medium of claim 14, wherein the one or more requirements in the SLA include one or more thresholds associated with input to, or output from, the API function. (Carpenter [0034])

Claim 3 is rejected under 35 U.S.C.
103 as being unpatentable over Kairali and Carpenter, and further in view of Jiang et al. (US 20220019482, hereinafter Jiang).

As per claim 3, the combination of Kairali and Carpenter did not teach:

The system of claim 2, wherein the report indicates the suggested configuration change, and the one or more processors are configured to: receive, from the administrator device, an approval of the suggested configuration change, wherein the instruction to apply the suggested configuration change is transmitted in response to the approval.

However, Jiang teaches:

The system of claim 2, wherein the report indicates the suggested configuration change, and the one or more processors are configured to: receive, from the administrator device, an approval of the suggested configuration change, wherein the instruction to apply the suggested configuration change is transmitted in response to the approval. (Jiang [0068])

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Jiang into that of Kairali and Carpenter in order to have the report indicate the suggested configuration change, and to have the one or more processors configured to receive, from the administrator device, an approval of the suggested configuration change, wherein the instruction to apply the suggested configuration change is transmitted in response to the approval. Carpenter [0062] teaches that the scaling operation is performed automatically. However, one of ordinary skill in the art can easily see that the alternative of requiring manual approval from an administrator can also be employed here without deviating from the general teachings of the prior arts.
Allowing administrator approval rights for changes would give the administrator more fine-grained control over the load balancing aspect, and applicants have thus merely claimed the combination of known parts in the field to achieve predictable results of error prediction and correction; the claim is therefore rejected under 35 USC 103.

Claims 8 and 10–12 are rejected under 35 U.S.C. 103 as being unpatentable over Kairali, in view of Harguindeguy et al. (US 20200220875, hereinafter Harguindeguy).

As per claim 8, Kairali discloses:

A method of monitoring and adjusting an application programming interface (API) function, comprising: providing, by an API monitor, traffic information associated with the API function to a machine learning model; (Kairali col 25, line 55 – col 26, line 7: “During step 601, an API call transmitted by a user of an application 203 may be received by the service mesh 211. The API call may be associated with a user profile of the user making the API call and invoke a microservice chain of the application 203 comprising one or more services 215 that make up the microservice chain… In step 607, knowledge base 208 checks and analyzes the current error rates and historical error rates for the microservice chain being invoked for the user profile as well as the error rates of the individual microservices of the chain. In step 609, the health status of each individual microservice 215 in the microservice chain is mapped to the microservice chain's error rate by the health module 206.” Examiner notes that the microservice 215 is mapped to the claimed “API function”.)
receiving, from the machine learning model, an indication of [at least one error] of the API function; transmitting, to an administrator device, the indication of the at least one [error]; (Kairali col 26, lines 12–22: “In step 610, based on the error rate for the user profile submitting the API call, mapping of the health status of the individual microservices 215, and historical errors in the service mesh history, the knowledge base 208 predicts the error rate of the API call for the user profile invoking the API call. The predicted error rate outputted by the knowledge base 208 is reported to the second AI module 204, for example via reporting module 216.”; col 26, lines 23 –: “In step 611, upon analysis of the error rate predicted by the first AI module 202, a determination is made by the analysis module 210 of the second AI module 204, whether or not the predicted error rate was made by the first AI module 202 with a level of confidence above a threshold level established by the application 203 and/or service mesh 211.”; col 26, line 65 – col 27, line 4: “if in step 613 a determination is made that API call failure is predicted by the first AI module 202, in step 617 the first AI module 202 may further predict and report to the second AI module 204, the type of failure predicted to occur, the portion of the code or module predicted to fail and whether or not the errors predicted to cause the application failure are expected to be self-healing.”)

and transmitting, based on the indication of the at least one [error] and to the API function, an instruction to [modify the API function]. (Kairali col 26, lines 13–21: “if the API call failures predicted by the first AI module 202 are not considered self-healing by the self-healing module 214, the method 600 may proceed to step 623.
During step 623, a reporting module 216 of the first AI module 202 may transmit a notification to the pods, containers, or proxies 217 of the microservices 215 within the service mesh requesting the proxies 217, pods and/or containers hosting the microservices check the current log levels and report back the log levels to the first AI module 202.”; col 27, lines 41–53: “In step 631, in response to the log levels being considered insufficient for the predicted API call error rate, the dynamic log level changer 212 of the second AI module 204 may modify the log levels of the service mesh 211. More specifically, the dynamic log level changer 212 may increase the log level being applied to the proxies 217, pods and/or containers, increasing the amount of information being captured in the logs of the application 203 invoking the service chain(s). Upon a successful increase of the log levels in step 631, in step 633, the API call is executed by the service mesh 211 via the invoked microservice chain at the logging levels applied to the proxies, pods and/or containers by dynamic log level changer of the second AI module 204.”)
Kairali did not explicitly disclose: wherein the at least one error comprises at least one source that is abusing the API function; wherein the modify of the API function comprises blocking calls from the at least one source.

However, Harguindeguy teaches: wherein the at least one error comprises at least one source that is abusing the API function; wherein the modify of the API function comprises blocking calls from the at least one source. (Harguindeguy [0046]: “(i) receiving at a security server, server resource request message data extracted from a server resource request message received at an access control server,… (ii) initiating a first security response at the security server, wherein (c) the initiated first security response is dependent on a result of analysis of the server resource request message data received at the security server, and (d) responsive to said analysis of the server resource request message data resulting in identification of an indicator of compromise by the security server or that an originating terminal corresponding to the server resource request is identified within a blacklist, said first security response comprises non-transmission of at least one server resource request message received from the originating terminal corresponding to the server resource request by the access control server to a resource server.”)

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Harguindeguy into that of Kairali in order to have the at least one error comprise at least one source that is abusing the API function, and to have the modify of the API function comprise blocking calls from the at least one source. Kairali figure 6 describes using the error rate of a microservice to predict whether an API call is going to fail and scaling the log level accordingly.
One of ordinary skill in the art can easily see that other forms of remediation can be used here without deviating from the general teaching of the prior art, such as blacklisting an input source as shown by Harguindeguy [0046]. Applicants have merely claimed the combination of known parts in the field to achieve predictable results of error prediction and correction, and the claim is therefore rejected under 35 USC 103.

As per claim 10, the combination of Kairali and Harguindeguy further teaches:

The method of claim 8, wherein the indication of the at least one source includes an Internet protocol (IP) address (Harguindeguy [0050]), a source name, or a combination thereof.

As per claim 11, the combination of Kairali and Harguindeguy further teaches:

The method of claim 8, wherein the machine learning model is configured to detect abuse of the API function based on a rate of inputs to the API function (Harguindeguy [0017]), a size associated with the inputs, or a combination thereof.

As per claim 12, the combination of Kairali and Harguindeguy further teaches:

The method of claim 8, further comprising: transmitting, to a device associated with the at least one source, an indication that the at least one source is blocked. (Kairali col 26, line 65 – col 27, line 4)

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Kairali and Harguindeguy, and further in view of Jiang.

As per claim 9, the combination of Kairali and Harguindeguy did not teach:

The method of claim 8, further comprising: receiving, from the administrator device, a confirmation in response to the indication of the at least one source, wherein the instruction to block calls is transmitted based on the confirmation.

However, Jiang teaches:

The method of claim 8, further comprising: receiving, from the administrator device, a confirmation in response to the indication of the at least one source, wherein the instruction to block calls is transmitted based on the confirmation.
(Jiang [0068])

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Jiang into that of Kairali and Harguindeguy in order to receive a confirmation in response to the indication of the at least one source, wherein the instruction to block calls is transmitted based on the confirmation. Kairali figure 6 teaches that the scaling operation is performed automatically. However, one of ordinary skill in the art can easily see that the alternative of requiring manual approval from an administrator can also be employed here without deviating from the general teachings of the prior arts. Allowing administrator approval rights for changes would give the administrator more fine-grained control over the load balancing aspect, and applicants have thus merely claimed the combination of known parts in the field to achieve predictable results of error prediction and correction; the claim is therefore rejected under 35 USC 103.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Kairali and Harguindeguy, and further in view of Gupta et al. (US 20200106806, hereinafter Gupta).

As per claim 13, the combination of Kairali and Harguindeguy did not teach:

The method of claim 8, wherein the machine learning model is trained using a dataset associated with denial-of-service attacks.

However, Gupta teaches:

The method of claim 8, wherein the machine learning model is trained using a dataset associated with denial-of-service attacks. (Gupta [0027])

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Gupta into that of Kairali and Harguindeguy in order to have the machine learning model trained using a dataset associated with denial-of-service attacks. Kairali figure 6 describes using a machine learning model to determine whether the API call would fail.
However, one of ordinary skill in the art can easily see that alternative causes for error detection and correction can be applied here as well without deviating from the general teaching of the prior arts. Applicants have thus merely claimed the combination of known parts in the field to achieve predictable results of error prediction and correction; the claim is therefore rejected under 35 USC 103.

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Kairali and Carpenter, and further in view of Hermann et al. (US 20200202053, hereinafter Hermann).

As per claim 17, the combination of Kairali and Carpenter did not teach:

The non-transitory computer-readable medium of claim 16, wherein the one or more instructions, when executed by the one or more processors, cause the device to: receive, from the administrator device, an indication of an interaction with the visual indicator; and transmit, to the administrator device, instructions to output a pop-up window including information associated with the one or more requirements.

However, Hermann teaches:

The non-transitory computer-readable medium of claim 16, wherein the one or more instructions, when executed by the one or more processors, cause the device to: receive, from the administrator device, an indication of an interaction with the visual indicator; and transmit, to the administrator device, instructions to output a pop-up window including information associated with the one or more requirements. (Hermann [0115])

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Hermann into that of Kairali and Carpenter in order to receive an indication of an interaction with the visual indicator and transmit, to the administrator device, instructions to output a pop-up window including information associated with the one or more requirements. Carpenter [0047] teaches displaying data in a GUI.
Hermann [0115] teaches using well-known GUI display methods, such as a pop-up window, to alert users. Applicants have merely claimed the combination of known parts in the field to achieve the predictable result of displaying errors to the user, and the claim is therefore rejected under 35 USC 103.

Claim(s) 19 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kairali and Carpenter, and further in view of Saxena et al (USPAT 11513854, hereinafter Saxena).

As per claim 19, the combination of Kairali and Carpenter did not teach: The non-transitory computer-readable medium of claim 14, wherein the one or more instructions, when executed by the one or more processors, cause the device to: receive, from the administrator device, an instruction to disable the API function in response to the report; and transmit, based on the instruction and to a host associated with the API function, a command to disable the API function. However, Saxena teaches: The non-transitory computer-readable medium of claim 14, wherein the one or more instructions, when executed by the one or more processors, cause the device to: receive, from the administrator device, an instruction to disable the API function in response to the report; and transmit, based on the instruction and to a host associated with the API function, a command to disable the API function. (Saxena col 11, lines 17 – 48.) It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Saxena into that of Kairali and Carpenter in order to receive an instruction to disable the API function in response to the report and transmit, based on the instruction and to a host associated with the API function, a command to disable the API function.
Kairali figure 6 further teaches adjusting the log rate of the microservice and service mesh in response to a prediction of the API call failing; however, one of ordinary skill in the art can readily see that other forms of remediation can be used here without deviating from the general teaching of the prior art, such as disabling the affected resource (microservice) as shown by Saxena col 11, lines 17 – 48. Applicants have merely claimed the combination of known parts in the field to achieve the predictable results of error prediction and correction, and the claim is therefore rejected under 35 USC 103.

As per claim 20, the combination of Kairali and Carpenter did not teach: The non-transitory computer-readable medium of claim 14, wherein the one or more instructions, when executed by the one or more processors, cause the device to: transmit, based on whether the API function complies with the one or more requirements, a command to throttle the API function. However, Saxena teaches: The non-transitory computer-readable medium of claim 14, wherein the one or more instructions, when executed by the one or more processors, cause the device to: transmit, based on whether the API function complies with the one or more requirements, a command to throttle the API function. (Saxena col 11, lines 17 – 48.) It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Saxena into that of Kairali and Carpenter in order to transmit, based on whether the API function complies with the one or more requirements, a command to throttle the API function.
Kairali figure 6 further teaches adjusting the log rate of the microservice and service mesh in response to a prediction of the API call failing; however, one of ordinary skill in the art can readily see that other forms of remediation can be used here without deviating from the general teaching of the prior art, such as throttling the affected resource (microservice) as shown by Saxena col 11, lines 17 – 48. Applicants have merely claimed the combination of known parts in the field to achieve the predictable results of error prediction and correction, and the claim is therefore rejected under 35 USC 103.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Levert et al (US 20250126184) teaches “An application programming interface (API) proxy intercepts API calls and responses for an application under test in a development environment, simulating (e.g., mocking) rate limiting and throttling behavior, which is otherwise challenging to test. The API proxy receives a API call and, based on a resource limiting parameter (e.g., rate-limiting or otherwise throttling), determines that the API call should be forwarded to the API endpoint. When the API proxy receives another API call from the application, destined for the same API endpoint, the API proxy determines to not forward the second API call, based on the resource limiting parameter (e.g., too soon after the first API call, or requests too much of a computational burden, such as exceeding a resource quota). The API proxy instead returns a throttling response, as would be expected from the API endpoint. The API proxy provides guidance messages for both outgoing calls and incoming responses.
”; Kosim-Satyaputra et al (US 20180352053) teaches “An API rate limiting system may receive a client request from an API client associated with a tenant, formulate a proxied request with an internal authentication specific to the tenant, and send the proxied request to API endpoints (tenant resources) at a store. The store fulfills the request, accessing and modifying local database(s) as needed, and returns a response to the system. The system returns the response to the API client along with information about the API client's quota for the current time window. The system may calculate the quota based on a resource limit with respect to a number of clients accessing a resource. In some embodiments, the system may implement an exponential distribution function in making a determination on a quota per API client per time window.”

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES M SWIFT whose telephone number is (571)270-7756. The examiner can normally be reached Monday - Friday: 9:30 AM - 7 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Blair, can be reached at (571)270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHARLES M SWIFT/
Primary Examiner, Art Unit 2196