Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-3, 6-11, 14-18, and 20-23 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Referring to claims 1, 9, and 17, and consequently their dependent claims, “the respective expected test outcome” lacks antecedent basis. Further, only “an expected test outcome” is claimed, so it is unclear what such an outcome would be respective of.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 8-11, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over US10795793 to Arunachalam et al. in view of US20120287871 to Marini et al.
Referring to claim 1, Arunachalam discloses a testing framework, comprising: one or more processors; a framework controller executing on the one or more processors; a framework monitor executing on the one or more processors; a framework injector executing on the one or more processors; and one or more non-transitory computer readable media storing instructions which, when executed by the one or more processors (See figure 1 and below.), cause the one or more processors to:
receive, at the framework controller, a test execution request comprising test parameters and an expected test outcome (From line 24 of column 6, “The natural language input received by domain-specific language translator 132 from a client device 120 generally identifies one or more failures to inject into an application executing on one or more application servers 140, the properties of the failures to be injected into the application, and an expected outcome of the simulated failure scenario represented by the natural language input. In some embodiments, the expected outcome of the simulated failure scenario may include an expected state of the application servers 140 on which application components execute after injection of the failure into the specified application servers to verify that the system has failed according to the failures identified in the natural language input and an expected state of the application servers 140 after recovery operations have been invoked on the application servers 140. The properties of the failures to be injected into the application may include, for example, information identifying the application servers 140 or other resources to execute a failure on, an amount of time to wait before verifying that the specified failure was successfully injected into the application, an amount of time to wait after initiating recovery processes before verifying that the application has recovered from a simulated failure scenario, numbers of active application servers or other resources allocated to executing or orchestrating execution of the application, and other parameters that may be appropriate for simulating a failure scenario on application servers 140.”),
wherein the test execution request comprises a request for testing a microservice architecture implemented in a cloud infrastructure (From line 16 of column 1, “Applications may be implemented as a collection of services that work together to perform a specified task. In these applications, the services that are deployed to implement the functionality of the application may be hosted on different computing devices, such as physical servers, virtual servers executing in a virtualized environment, server pools, distributed computing environments, dynamically load-balanced cloud computing environments, or other computing environments. The functionality of the overall application may be adversely affected by unavailability or degraded performance of specific computing systems on which services may execute. For example, unavailability of a specific service may cause certain functions of an application to be partially or wholly unavailable for use by users of the application. In another example, degraded performance of a specific service, which may include performance degradation from network latencies, non-responsive computing services, spinlock scenarios, or other scenarios in which a computing system is available but unresponsive, may cause time-out events or other failures in an application. In some cases, applications may include recovery measures that attempt to recover from system failures or degraded performance of various services used by an application. 
These recovery measures may include, for example, re-instantiating services on different servers (physical or virtual), migrating execution of services to different pools of servers, re-instantiating load balancers or other infrastructure components that orchestrate execution of the application, terminating and re-instantiating unresponsive services executing on a server, and the like.” From line 26 of column 12, “As discussed, block 260 may be reached, for example, if an assertion that the monitored outcome matches the expected outcome fails. In some embodiments, the system may proceed to take proactive or remedial action with respect to the application code being tested to prevent code in a development stage of the software development pipeline from being promoted or reverting a promotion of code to a production environment so that code that has been tested to respond in the expected manner to a failure scenario is made available in the production environment. Operations 200 may proceed to block 270, where the system reverts the distributed computing system to a state prior to the simulated system failure. Generally, reverting the distributed computing system to a state prior to the simulated system failure may include terminating an instance of the distributed computing system (e.g., in a cloud computing environment), restarting physical servers and other infrastructure components in the distributed computing system, terminating and restarting services executing on a computing service, or other actions that may be taken to reset the distributed computing environment to a known state.” From line 5 of column 10, “Application servers 140 generally host applications or components of an application that serve content to a user on an endpoint device and process user input received from the endpoint device. In some embodiments, the application components may be implemented and deployed across a number of application servers 140 in a distributed computing environment. 
These application components may be services or microservices that, together, expose the functionality of an application to users of the application. The application servers 140 may host components that may be shared across different applications. In some embodiments, the application servers 140 may additionally include infrastructure components used to manage the distributed computing environment in which an application executes.”);
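The kind of test execution request Arunachalam describes at column 6 — failure-injection parameters paired with an expected post-recovery outcome — can be illustrated with a minimal sketch. All class, field, and value names below are hypothetical illustrations, not drawn from the reference:

```python
from dataclasses import dataclass, field

# Hypothetical illustration of a test execution request carrying
# failure-injection parameters and an expected post-recovery outcome,
# in the manner Arunachalam describes at col. 6.
@dataclass
class TestExecutionRequest:
    failure_type: str                # e.g. "server_removal", "latency"
    target_servers: list[str]        # servers to inject the failure on
    injection_wait_s: int = 30       # wait before verifying injection
    recovery_wait_s: int = 60        # wait before verifying recovery
    # Expected state after recovery, e.g. active server count, alert status.
    expected_outcome: dict = field(default_factory=dict)

request = TestExecutionRequest(
    failure_type="server_removal",
    target_servers=["app-server-1"],
    expected_outcome={"active_servers": 3, "alert_status": "OK"},
)
```

The expected outcome travels with the request so the monitor can later compare it against the observed state.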
provide, by the framework controller, the test execution request and the test parameters to the framework injector (From line 18 of column 7, “Failure simulator 134 receives the commands generated by domain-specific language translator 132 and transmits the commands to the one or more application servers 140 and/or other infrastructure components for execution. Generally, the commands generated by domain-specific language translator 132 and transmitted to application servers 140 for execution may include commands to remove an application server 140 or other infrastructure component (e.g., load balancers, storage components, virtualized networking components, scalers, etc.) from the set of components used to execute application services, simulate increased network latencies on specified application servers 140, simulate spinlocks or other high processor utilization scenarios on specified application servers 140, terminate processes on an application server 140, and other scenarios that may arise in a system failure scenario. After transmitting commands to the application servers 140 to inject simulated failures into the application servers 140 in a distributed computing system, failure simulator 134 may subsequently transmit one or more commands to initiate a recovery process from the simulated failures. In some embodiments, failure simulator 134 may transmit these commands to initiate a recovery process after a waiting period included in the natural language input defining the simulated failure scenario, and in some embodiments, the recovery process may be initiated upon determining that the generated commands to inject a simulated failure into the distributed computing system successfully executed.”);
provide, by the framework controller, the expected test outcome to the framework monitor (From line 46 of column 7, “System failure analyzer 136 generally monitors the application servers 140 during and after execution of a simulated system failure to determine whether a simulated system failure executed successfully and whether the application servers 140 in a distributed computing environment on which an application executes successfully recovered from the simulated system failure. In some embodiments, system failure analyzer 136 may use assertions to break execution of a simulated system failure if the actual outcome of a simulated system failure does not match the expected outcome of a simulated system failure. For example, if a simulated system failure was introduced to simulate a server failure in the distributed computing environment, system failure analyzer 136 may compare the number of active servers in the distributed computing environment to an expected number of active servers (e.g., the number of servers prior to the simulated system failure, less the number of servers identified in the natural language input to remove from the distributed computing environment) to determine whether the server failure was injected into the distributed computing environment. In another example, if a simulated system failure was introduced to simulate a spinlock or other high processor utilization scenario on a specified application server 140, system failure analyzer 136 may determine whether the specified application server 140 is in a spinlock or high processor utilization scenario by determining whether the specified application server 140 responds to status requests transmitted by system failure analyzer 136. 
If commands to introduce a simulated failure into the distributed computing environment fail to actually introduce the simulated failure into the computing environment, attempting to recover from the system failure may waste computing resources in testing an incomplete failure because part or all of the simulated system failure did not actually execute. Thus, system failure analyzer 136 may halt execution of the simulated system failure prior to execution of commands to recover from the simulated system failure. In some embodiments, system failure analyzer 136 may further generate an alert informing a developer that the code for introducing the simulated system failure failed to do so.”);
execute, by the framework injector, a test corresponding to the test execution request using a test injector of a plurality of test injectors implemented by the framework injector (From line 18 of column 7, “Failure simulator 134 receives the commands generated by domain-specific language translator 132 and transmits the commands to the one or more application servers 140 and/or other infrastructure components for execution. Generally, the commands generated by domain-specific language translator 132 and transmitted to application servers 140 for execution may include commands to remove an application server 140 or other infrastructure component (e.g., load balancers, storage components, virtualized networking components, scalers, etc.) from the set of components used to execute application services, simulate increased network latencies on specified application servers 140, simulate spinlocks or other high processor utilization scenarios on specified application servers 140, terminate processes on an application server 140, and other scenarios that may arise in a system failure scenario. After transmitting commands to the application servers 140 to inject simulated failures into the application servers 140 in a distributed computing system, failure simulator 134 may subsequently transmit one or more commands to initiate a recovery process from the simulated failures. In some embodiments, failure simulator 134 may transmit these commands to initiate a recovery process after a waiting period included in the natural language input defining the simulated failure scenario, and in some embodiments, the recovery process may be initiated upon determining that the generated commands to inject a simulated failure into the distributed computing system successfully executed.”),
wherein the test injector comprises a communication error injector configured to cause communication errors, thereby causing communication to function in a way that deviates from the “respective” expected test outcome (From line 18 of column 7, “Failure simulator 134 receives the commands generated by domain-specific language translator 132 and transmits the commands to the one or more application servers 140 and/or other infrastructure components for execution. Generally, the commands generated by domain-specific language translator 132 and transmitted to application servers 140 for execution may include commands to remove an application server 140 or other infrastructure component (e.g., load balancers, storage components, virtualized networking components, scalers, etc.) from the set of components used to execute application services, simulate increased network latencies on specified application servers 140, simulate spinlocks or other high processor utilization scenarios on specified application servers 140, terminate processes on an application server 140, and other scenarios that may arise in a system failure scenario. After transmitting commands to the application servers 140 to inject simulated failures into the application servers 140 in a distributed computing system, failure simulator 134 may subsequently transmit one or more commands to initiate a recovery process from the simulated failures. 
In some embodiments, failure simulator 134 may transmit these commands to initiate a recovery process after a waiting period included in the natural language input defining the simulated failure scenario, and in some embodiments, the recovery process may be initiated upon determining that the generated commands to inject a simulated failure into the distributed computing system successfully executed.” From line 47 of column 8, “In another example, where the simulated system failure simulates a spinlock, high processor utilization, or degraded network connectivity scenario on a specific application server, system failure analyzer 136 may determine whether the distributed computing environment recovered from the simulated system failure by determining whether the targeted application server 140 was replaced or otherwise responds to status requests from system failure analyzer 136 prior to a timeout period. If the targeted application server responds to a status request within the specified timeout period, system failure analyzer 136 can determine that the distributed computing environment successfully recovered from the simulated system failure; however, if a replacement server is not detected or the targeted application server does not respond within a timeout period, system failure analyzer 136 can determine that the distributed computing system failed to recover from the specified system failure.”);
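The injector dispatch Arunachalam describes at column 7 — mapping each failure type to commands transmitted to the targeted servers — can be sketched as follows. The function, the command strings, and the failure-type names are hypothetical placeholders (the "protocol_error" entry stands in for the Marini-style protocol compliance error; the reference itself issues commands to real application servers, not strings):

```python
# Hypothetical sketch of a failure-simulator dispatch: each failure type
# maps to a command template applied to every targeted server.
def build_injection_commands(failure_type: str, targets: list[str]) -> list[str]:
    command_templates = {
        "server_removal": "remove-server {t}",    # take server out of the pool
        "latency": "add-latency {t}",             # simulate network latency
        "spinlock": "spin-cpu {t}",               # simulate high CPU utilization
        "protocol_error": "corrupt-protocol {t}", # Marini-style protocol error
    }
    template = command_templates[failure_type]
    return [template.format(t=t) for t in targets]
```

For example, `build_injection_commands("latency", ["s1", "s2"])` yields one latency command per targeted server.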
monitor, by the framework monitor, a response of the microservice architecture of the cloud infrastructure to the test; and provide, by the framework controller and from the framework monitor, a test result based on the response (From line 46 of column 7, “System failure analyzer 136 generally monitors the application servers 140 during and after execution of a simulated system failure to determine whether a simulated system failure executed successfully and whether the application servers 140 in a distributed computing environment on which an application executes successfully recovered from the simulated system failure. In some embodiments, system failure analyzer 136 may use assertions to break execution of a simulated system failure if the actual outcome of a simulated system failure does not match the expected outcome of a simulated system failure. For example, if a simulated system failure was introduced to simulate a server failure in the distributed computing environment, system failure analyzer 136 may compare the number of active servers in the distributed computing environment to an expected number of active servers (e.g., the number of servers prior to the simulated system failure, less the number of servers identified in the natural language input to remove from the distributed computing environment) to determine whether the server failure was injected into the distributed computing environment. In another example, if a simulated system failure was introduced to simulate a spinlock or other high processor utilization scenario on a specified application server 140, system failure analyzer 136 may determine whether the specified application server 140 is in a spinlock or high processor utilization scenario by determining whether the specified application server 140 responds to status requests transmitted by system failure analyzer 136. 
If commands to introduce a simulated failure into the distributed computing environment fail to actually introduce the simulated failure into the computing environment, attempting to recover from the system failure may waste computing resources in testing an incomplete failure because part or all of the simulated system failure did not actually execute. Thus, system failure analyzer 136 may halt execution of the simulated system failure prior to execution of commands to recover from the simulated system failure. In some embodiments, system failure analyzer 136 may further generate an alert informing a developer that the code for introducing the simulated system failure failed to do so.” From line 57 of column 11, “At block 240, the system monitors the distributed computing system to record an outcome of the simulated system failure. In some embodiments, monitoring the distributed computing system to record an outcome of the simulated system failure may include requesting status messages from one or more application servers 140 and/or infrastructure components, requesting information about a number of servers included in the distributed computing system for hosting an application or application services, and other monitoring to determine if services, application servers, and infrastructure components are responsive. At block 250, the system determines whether the monitored outcome matches the expected outcome of the simulated system failure. The monitored outcome may match the expected outcome, for example, if the monitored and expected outcomes of the simulated system failure match. For example, the recorded outcome and expected outcome of the simulated system failure may be a state of an alert message. 
After recovery operations have been initiated, the expected outcome may be an alert message with a status of “OK.” If the recorded outcome is some value other than a status of “OK,” which indicates that an error condition still exists in the distributed computing environment, the system can determine that the monitored outcome of the simulated system failure does not match the expected outcome of the simulated system failure. At block 260, the system generates an alert identifying a difference between the recorded outcome and the expected outcome. In another example, the recorded outcome and expected outcome for the simulated system failure may be a number of active application servers in the distributed computing system. A mismatch between the number of active application servers and an expected number of active application servers generally indicates that recovery operations on the distributed computing environment failed, and operations 200 may thus proceed to block 260.”).
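The monitor-and-compare step Arunachalam describes at columns 11-12 — recording the observed state and checking it against the expected outcome key by key — can be sketched in a few lines. The function name and dictionary representation are hypothetical, chosen only to illustrate the comparison:

```python
# Hypothetical sketch of comparing a recorded outcome against the
# expected outcome; a non-empty result corresponds to the mismatch
# alert generated at block 260 of Arunachalam's operations 200.
def check_outcome(observed: dict, expected: dict) -> list[str]:
    mismatches = [
        f"{key}: expected {expected[key]}, observed {observed.get(key)}"
        for key in expected
        if observed.get(key) != expected[key]
    ]
    return mismatches  # empty list means the outcomes match
```

For instance, an observed server count of 2 against an expected count of 3 yields one mismatch entry, signaling failed recovery.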
Although Arunachalam does not specifically disclose that such communication errors may be protocol compliance errors, this is known in the art. In a related field of computing, an example of this is shown by Marini, from paragraph 3, “The network infrastructure requires testing in order to check correct operation. Tests on the entire network infrastructure or on part of it can become necessary for various reasons. For example, in the design and implementation phases, it might be necessary to check the functionality of the Base Radio Station when linked to one or more terminals, the behaviour of which can be modified for the purpose of simulating fault situations or communications protocol errors. In addition, it might be necessary to check the expected behaviour of the base radio station in the presence of network loads caused by a number of simultaneously active user terminals.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to inject a protocol error because, from Marini, “The network infrastructure requires testing in order to check correct operation.”
Referring to claim 2, Arunachalam and Marini disclose wherein the test injector further comprises a disruption injector (Arunachalam, from line 18 of column 7, “Failure simulator 134 receives the commands generated by domain-specific language translator 132 and transmits the commands to the one or more application servers 140 and/or other infrastructure components for execution. Generally, the commands generated by domain-specific language translator 132 and transmitted to application servers 140 for execution may include commands to remove an application server 140 or other infrastructure component (e.g., load balancers, storage components, virtualized networking components, scalers, etc.) from the set of components used to execute application services, simulate increased network latencies on specified application servers 140, simulate spinlocks or other high processor utilization scenarios on specified application servers 140, terminate processes on an application server 140, and other scenarios that may arise in a system failure scenario. After transmitting commands to the application servers 140 to inject simulated failures into the application servers 140 in a distributed computing system, failure simulator 134 may subsequently transmit one or more commands to initiate a recovery process from the simulated failures. In some embodiments, failure simulator 134 may transmit these commands to initiate a recovery process after a waiting period included in the natural language input defining the simulated failure scenario, and in some embodiments, the recovery process may be initiated upon determining that the generated commands to inject a simulated failure into the distributed computing system successfully executed.”).
Referring to claim 3, Arunachalam and Marini disclose wherein the test is performed using the disruption injector and comprises inducing a disruption in one or more components of the cloud infrastructure (Arunachalam, from line 18 of column 7, “Failure simulator 134 receives the commands generated by domain-specific language translator 132 and transmits the commands to the one or more application servers 140 and/or other infrastructure components for execution. Generally, the commands generated by domain-specific language translator 132 and transmitted to application servers 140 for execution may include commands to remove an application server 140 or other infrastructure component (e.g., load balancers, storage components, virtualized networking components, scalers, etc.) from the set of components used to execute application services, simulate increased network latencies on specified application servers 140, simulate spinlocks or other high processor utilization scenarios on specified application servers 140, terminate processes on an application server 140, and other scenarios that may arise in a system failure scenario. After transmitting commands to the application servers 140 to inject simulated failures into the application servers 140 in a distributed computing system, failure simulator 134 may subsequently transmit one or more commands to initiate a recovery process from the simulated failures. In some embodiments, failure simulator 134 may transmit these commands to initiate a recovery process after a waiting period included in the natural language input defining the simulated failure scenario, and in some embodiments, the recovery process may be initiated upon determining that the generated commands to inject a simulated failure into the distributed computing system successfully executed.”).
Referring to claim 8, Arunachalam and Marini disclose wherein the test result comprises an indication of whether the response matches the expected test outcome (Arunachalam, from line 46 of column 7, “System failure analyzer 136 generally monitors the application servers 140 during and after execution of a simulated system failure to determine whether a simulated system failure executed successfully and whether the application servers 140 in a distributed computing environment on which an application executes successfully recovered from the simulated system failure. In some embodiments, system failure analyzer 136 may use assertions to break execution of a simulated system failure if the actual outcome of a simulated system failure does not match the expected outcome of a simulated system failure. For example, if a simulated system failure was introduced to simulate a server failure in the distributed computing environment, system failure analyzer 136 may compare the number of active servers in the distributed computing environment to an expected number of active servers (e.g., the number of servers prior to the simulated system failure, less the number of servers identified in the natural language input to remove from the distributed computing environment) to determine whether the server failure was injected into the distributed computing environment. In another example, if a simulated system failure was introduced to simulate a spinlock or other high processor utilization scenario on a specified application server 140, system failure analyzer 136 may determine whether the specified application server 140 is in a spinlock or high processor utilization scenario by determining whether the specified application server 140 responds to status requests transmitted by system failure analyzer 136. 
If commands to introduce a simulated failure into the distributed computing environment fail to actually introduce the simulated failure into the computing environment, attempting to recover from the system failure may waste computing resources in testing an incomplete failure because part or all of the simulated system failure did not actually execute. Thus, system failure analyzer 136 may halt execution of the simulated system failure prior to execution of commands to recover from the simulated system failure. In some embodiments, system failure analyzer 136 may further generate an alert informing a developer that the code for introducing the simulated system failure failed to do so.” From line 57 of column 11, “At block 240, the system monitors the distributed computing system to record an outcome of the simulated system failure. In some embodiments, monitoring the distributed computing system to record an outcome of the simulated system failure may include requesting status messages from one or more application servers 140 and/or infrastructure components, requesting information about a number of servers included in the distributed computing system for hosting an application or application services, and other monitoring to determine if services, application servers, and infrastructure components are responsive. At block 250, the system determines whether the monitored outcome matches the expected outcome of the simulated system failure. The monitored outcome may match the expected outcome, for example, if the monitored and expected outcomes of the simulated system failure match. For example, the recorded outcome and expected outcome of the simulated system failure may be a state of an alert message. 
After recovery operations have been initiated, the expected outcome may be an alert message with a status of “OK.” If the recorded outcome is some value other than a status of “OK,” which indicates that an error condition still exists in the distributed computing environment, the system can determine that the monitored outcome of the simulated system failure does not match the expected outcome of the simulated system failure, at block 260, the system generates an alert identifying a difference between the recorded outcome and the expected outcome. In another example, the recorded outcome and expected outcome for the simulated system failure may be a number of active application servers in the distributed computing system. A mismatch between the number of active application servers and an expected number of active application servers generally indicates that recovery operations on the distributed computing environment failed, and operations 200 may thus proceed to block 260.”).
Referring to claims 9-11 and 16-18, see the rejection of claims 1-3 and 8 above.
Claims 6, 7, 14, 15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Arunachalam and Marini as applied to claims 1, 9, and 17 above, and further in view of US 6701460 to Suwandi et al.
Referring to claim 6, although Arunachalam and Marini do not specifically disclose that the test injector further comprises a crash point injector, this is known in the art. In a related field of computing, an example of this is shown by Suwandi, from the abstract, “One embodiment of the present invention provides a system for testing a computer system by using software to inject faults into the computer system while the computer system is operating. This system operates by allowing a programmer to include a fault point into source code for a program. This fault point causes a fault to occur if a trigger associated with the fault point is set and if an execution path of the program passes through the fault point. The system allows this source code to be compiled into executable code. Next, the system allows the computer system to be tested. This testing involves setting the trigger for the fault point, and then executing the executable code, so that the fault occurs if the execution path passes through the fault point. This testing also involves examining the result of the execution. In one embodiment of the present invention, if the fault point is encountered while executing the executable code, the system executes the fault point by: looking up a trigger associated with the fault point; determining whether the trigger has been set; and executing code associated with the fault point if the trigger has been set.” Further from line 27 of column 4, “If a fault point 304 is executed during execution of program 306, and a trigger for fault point 304 has been set, fault point 304 causes a fault 305 to be generated. Note that fault 305 can generally include any type of fault or other event that can be triggered through software. 
This includes, but is not limited to, a computer system reboot operation, a computer system panic operation (that causes operation of a computer system to terminate), a return of an error code, a forced change in control flow, a resource (memory) allocation failure, a response delay, an erroneous message and a deadlock.”
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to inject a fault into code because, from line 25 of column 1 of Suwandi, “In order to ensure that highly available computer systems operate properly, it is necessary to perform rigorous testing. This testing is complicated by the fact that highly available computer systems typically include a large number of components and subsystems that are subject to failure. Furthermore, an operating system for a highly available computer system contains a large number of pathways to handle error conditions that must also be tested. Some types of testing can be performed manually, for example by unplugging a computer system component, disconnecting a cable, or by pulling out a computer system board while the computer system is running. However, an outcome of this type of manual testing is typically not repeatable and is imprecise because the manual event can happen at random points in the execution path of a program and/or operating system that is executing on the highly available computer system. What is needed is a method and an apparatus that facilitates testing a computer system by injecting faults at precise locations in the execution path of an operating system and/or program that is executing on a computer system.”
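For illustration only (not part of the cited references or the record), the fault-point mechanism Suwandi describes — a fault that fires only when its associated trigger has been set and the execution path passes through the fault point — can be sketched as follows; all names here are hypothetical:

```python
# Hypothetical sketch of a trigger-guarded fault point (cf. Suwandi's abstract):
# the fault occurs only if (a) the trigger is set and (b) execution reaches it.
TRIGGERS = set()  # triggers armed by the test harness before execution

def set_trigger(name):
    TRIGGERS.add(name)

def fault_point(name, fault):
    """Execute the injected fault only if the named trigger has been set."""
    if name in TRIGGERS:
        fault()

def _raise_io():
    raise IOError("injected fault")

def read_config():
    # Fault point compiled into the program at a precise location.
    fault_point("config-io-error", _raise_io)
    return {"ok": True}

# Normal run: no trigger set, so the fault point is inert.
assert read_config() == {"ok": True}

# Test run: arm the trigger; execution through the fault point now faults.
set_trigger("config-io-error")
try:
    read_config()
    raised = False
except IOError:
    raised = True
assert raised
```

This matches the repeatability rationale quoted above: because the fault fires at a fixed point in the execution path rather than at a random moment, each test run is precise and reproducible.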
Referring to claim 7, Arunachalam, Marini, and Suwandi disclose that the test is performed using the crash point injector and comprises inserting a crash point into code corresponding to one or more microservices of the cloud infrastructure (Arunachalam, from line 16 of column 1, “Applications may be implemented as a collection of services that work together to perform a specified task. In these applications, the services that are deployed to implement the functionality of the application may be hosted on different computing devices, such as physical servers, virtual servers executing in a virtualized environment, server pools, distributed computing environments, dynamically load-balanced cloud computing environments, or other computing environments. The functionality of the overall application may be adversely affected by unavailability or degraded performance of specific computing systems on which services may execute. For example, unavailability of a specific service may cause certain functions of an application to be partially or wholly unavailable for use by users of the application. In another example, degraded performance of a specific service, which may include performance degradation from network latencies, non-responsive computing services, spinlock scenarios, or other scenarios in which a computing system is available but unresponsive, may cause time-out events or other failures in an application. In some cases, applications may include recovery measures that attempt to recover from system failures or degraded performance of various services used by an application.
These recovery measures may include, for example, re-instantiating services on different servers (physical or virtual), migrating execution of services to different pools of servers, re-instantiating load balancers or other infrastructure components that orchestrate execution of the application, terminating and re-instantiating unresponsive services executing on a server, and the like.” From line 26 of column 12, “As discussed, block 260 may be reached, for example, if an assertion that the monitored outcome matches the expected outcome fails. In some embodiments, the system may proceed to take proactive or remedial action with respect to the application code being tested to prevent code in a development stage of the software development pipeline from being promoted or reverting a promotion of code to a production environment so that code that has been tested to respond in the expected manner to a failure scenario is made available in the production environment. Operations 200 may proceed to block 270, where the system reverts the distributed computing system to a state prior to the simulated system failure. Generally, reverting the distributed computing system to a state prior to the simulated system failure may include terminating an instance of the distributed computing system (e.g., in a cloud computing environment), restarting physical servers and other infrastructure components in the distributed computing system, terminating and restarting services executing on a computing service, or other actions that may be taken to reset the distributed computing environment to a known state.” From line 5 of column 10, “Application servers 140 generally host applications or components of an application that serve content to a user on an endpoint device and process user input received from the endpoint device. In some embodiments, the application components may be implemented and deployed across a number of application servers 140 in a distributed computing environment. 
These application components may be services or microservices that, together, expose the functionality of an application to users of the application. The application servers 140 may host components that may be shared across different applications. In some embodiments, the application servers 140 may additionally include infrastructure components used to manage the distributed computing environment in which an application executes.” Suwandi discloses that such a fault injection could be a crash point. See above.).
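For illustration only (not part of the cited references or the record), the combined teaching — a crash point inserted at a precise location in microservice code, an assertion that the monitored outcome matches the expected outcome, and a reversion of the system to a known state — can be sketched as follows; all names here are hypothetical:

```python
# Hypothetical sketch: a crash point armed inside microservice code, with the
# test asserting the expected outcome and then reverting to a known state
# (cf. Arunachalam's blocks 260 and 270 quoted above).
class CrashInjected(RuntimeError):
    pass

class Microservice:
    def __init__(self):
        self.armed_crash_points = set()  # crash points armed by the test
        self.state = "idle"

    def crash_point(self, label):
        """Terminate execution here if this crash point has been armed."""
        if label in self.armed_crash_points:
            raise CrashInjected(label)

    def handle_request(self, payload):
        self.state = "processing"
        self.crash_point("before-commit")  # precise injection location
        self.state = "committed"
        return {"status": "ok", "echo": payload}

svc = Microservice()
svc.armed_crash_points.add("before-commit")
try:
    svc.handle_request({"x": 1})
    outcome = "no-crash"
except CrashInjected:
    outcome = "crashed-before-commit"

# Assert the monitored outcome matches the expected outcome, then revert
# the service to a known state before further testing.
assert outcome == "crashed-before-commit"
svc.state = "idle"
```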
Referring to claims 14, 15, and 20, see the rejection of claims 6 and 7 above.
Claim(s) 21-23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Arunachalam and Marini and Suwandi as applied to claims 7, 15, and 20 above, and further in view of US 20170024299 to Deng et al.
Referring to claim 21, Arunachalam, Marini, and Suwandi disclose that crash points are inserted into the microservice code that cause errors at specific points in order to validate that the microservice responds in an expected manner (See above.).
Although Arunachalam, Marini, and Suwandi do not specifically disclose wherein the microservices perform operations in stages and the crash points are inserted in one or more of the stages, this is known in the art. In a related field of computing, an example of this is shown by Deng, from paragraph 23, “Sub-component 108 also carries out smart profiling of the target machine, system and/or application to identify one or more occasion points of known applications and/or middleware. For example, certain applications and/or middleware have different stages, and such knowledge for known applications and/or middleware can be leveraged by the fault injection service via sub-component 108. Further, in at least one embodiment of the invention an FI occasion determination can be linked with or to certain stages. Also, in a controlled environment (such as, for example, a Cloud environment), at least one embodiment of the invention can include leveraging monitoring infrastructure and known attributes and/or tags available for the target machine, system and/or application. For example, certain applications and/or middleware have different stages (for example, connecting, request received, metadata retrieved, etc.), wherein the stage information is encoded as an attribute or a tag available to the controlled environment. The knowledge for the known applications and/or middleware can be leveraged by the fault injection service.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to test by stages because, as shown by Deng from paragraph 2, “Fault injection (FI) is commonly used for evaluating the resilience of systems.
Existing FI approaches, however, involve a significant amount of manual decision making, such as determining, for example, what type of errors should be injected, when a fault should be injected, which object, component, process, and/or software-stack-level should be the target of the fault injection, which value and/or variable in the target object, component, process, and/or software-stack-level should be injected with what erroneous value, and what workload should be used for fault injection trials. Such approaches, accordingly, are inefficient, costly and time-consuming to carry out.”
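For illustration only (not part of the cited references or the record), the stage-linked injection Deng describes — where a service's current stage (e.g., connecting, request received, metadata retrieved) is exposed as an attribute or tag, and a fault-injection occasion is linked to a particular stage — can be sketched as follows; all names here are hypothetical:

```python
# Hypothetical sketch of stage-aware fault injection (cf. Deng, para. 23):
# each stage of a service pipeline is exposed as a tag, and the injector
# fires only in the stage its fault-injection occasion is linked to.
class StageFault(RuntimeError):
    pass

# Fault-injection plan linking a fault to one specific stage tag.
FAULT_PLAN = {"metadata-retrieved": StageFault}

def run_pipeline(stages, plan):
    """Run each stage in order, injecting a fault at a linked stage."""
    completed = []
    for stage in stages:
        if stage in plan:  # occasion point: current stage tag matches plan
            raise plan[stage](stage)
        completed.append(stage)
    return completed

stages = ["connecting", "request-received", "metadata-retrieved", "responding"]
try:
    run_pipeline(stages, FAULT_PLAN)
    result = None
except StageFault as e:
    result = str(e)

# The fault fired exactly at the linked stage, not at a random point.
assert result == "metadata-retrieved"
```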
Referring to claims 22 and 23, see the rejection of claim 21 above.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-3, 6-11, 14-18, 20-23 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GABRIEL L CHU whose telephone number is (571)272-3656. The examiner can normally be reached weekdays 8 am to 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ashish Thomas, can be reached at (571)272-0631. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GABRIEL CHU/Primary Examiner, Art Unit 2114