Prosecution Insights
Last updated: April 19, 2026
Application No. 19/053,657

TEST SUPPORT DEVICE AND TEST SUPPORT METHOD

Non-Final OA: §101, §102
Filed: Feb 14, 2025
Examiner: GUSTAFSON, MATHEW DONALD
Art Unit: 2113
Tech Center: 2100 — Computer Architecture & Software
Assignee: Hitachi, Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 100% (Favorable)
OA Rounds: 1-2
To Grant: 1y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 100% (2 granted / 2 resolved), +45.0% vs TC avg (above average)
Interview Lift: +0.0% (minimal; based on resolved cases with interview)
Avg Prosecution: 1y 10m (fast prosecutor)
Career History: 21 total applications across all art units; 19 currently pending

Statute-Specific Performance

§101: 14.8% (-25.2% vs TC avg)
§103: 48.2% (+8.2% vs TC avg)
§102: 35.8% (-4.2% vs TC avg)
§112: 1.2% (-38.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 2 resolved cases
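The per-statute deltas above follow directly from the examiner's rates and the Tech Center average implied by the figures shown (40.0% for every statute). A minimal sketch of the arithmetic, with the rates hard-coded from the table above:

```python
# Per-statute allowance rates for this examiner (from the table above) and
# the Tech Center averages implied by the displayed deltas (all 40.0%).
examiner_rate = {"101": 0.148, "102": 0.358, "103": 0.482, "112": 0.012}
tc_average = {"101": 0.400, "102": 0.400, "103": 0.400, "112": 0.400}

# Delta = examiner rate minus Tech Center average for the same statute.
deltas = {s: round(examiner_rate[s] - tc_average[s], 3) for s in examiner_rate}
```

Each value of `deltas` matches the "vs TC avg" figure displayed for that statute (e.g. -0.252 for §101).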

Office Action

Detailed Action

This action is in response to the application filed on 02/14/2025. Claims 1-8 are pending and have been fully examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

Claims 1-8 are rejected under 35 U.S.C. 101.
Claims 1-8 are rejected under 35 U.S.C. 102.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-8 are rejected under 35 U.S.C. 101.

Regarding Claim 1

Step 2A Prong 1 Analysis: The Limitations:

…selects the microservice and the reliability function, which are configured in the test target system and tested for a fault, on a basis of the microservice information, MPEP 2106.04(a)(2); This limitation is a step that covers performance in the mind in the form of evaluation and judgement with the assistance of pen and paper. Therefore, this limitation recites a mental process.

…selects a fault type of a fault to be generated in the microservice in the test on a basis of the fault condition; MPEP 2106.04(a)(2); This limitation is a step that covers performance in the mind in the form of evaluation and judgement with the assistance of pen and paper. Therefore, this limitation recites a mental process.

…selects the microservice to generate a fault on a basis of the microservice state information, MPEP 2106.04(a)(2); This limitation is a step that covers performance in the mind in the form of evaluation and judgement with the assistance of pen and paper. Therefore, this limitation recites a mental process.

…determines a setting value of a fault setting item for the microservice and the fault type, MPEP 2106.04(a)(2); This limitation is a step that covers performance in the mind in the form of evaluation and judgement with the assistance of pen and paper. Therefore, this limitation recites a mental process.

…creates fault setting information. MPEP 2106.04(a)(2); This limitation is a step that covers performance in the mind in the form of evaluation and judgement with the assistance of pen and paper. Therefore, this limitation recites a mental process.

Step 2A Prong Two Analysis: Claim 1 additionally recites,

A test support device comprising: a storage unit; and a processor, wherein the storage unit includes microservice information including information regarding a reliability function set in a microservice and a value of a setting item set in the reliability function, microservice state information including information regarding a state of the microservice, and a fault condition including information regarding a fault to be generated in the microservice configured in a test target system, MPEP 2106.05(f); This limitation recites additional elements that are mere instructions to apply an exception for the abstract ideas.

a test execution unit that… MPEP 2106.05(f); This limitation recites additional elements that are mere instructions to apply an exception for the abstract ideas.

a fault setting information creation unit that… MPEP 2106.05(f); This limitation recites additional elements that are mere instructions to apply an exception for the abstract ideas.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply”. Mere instructions to apply an exception cannot provide an inventive concept.
The claim is not patent eligible.

Regarding Claim 2

Step 2A Prong 1 Analysis: The Limitations:

… determines whether the state of the microservice satisfies the fault condition on a basis of the microservice state information, MPEP 2106.04(a)(2); This limitation is a step that covers performance in the mind in the form of evaluation and judgement with the assistance of pen and paper. Therefore, this limitation recites a mental process.

…selects a fault type of a fault to be generated in the microservice in the test on a basis of the fault condition; MPEP 2106.04(a)(2); This limitation is a step that covers performance in the mind in the form of evaluation and judgement with the assistance of pen and paper. Therefore, this limitation recites a mental process.

…waits until the state of the microservice satisfies the fault condition in a case where it is determined that the state of the microservice does not satisfy the fault condition. MPEP 2106.04(a)(2); This limitation is a step that covers performance in the mind in the form of evaluation and judgement with the assistance of pen and paper. Therefore, this limitation recites a mental process.

Step 2A Prong Two Analysis: Claim 2 additionally recites,

test execution unit… MPEP 2106.05(f); This limitation recites additional elements that are mere instructions to apply an exception for the abstract ideas.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply”. Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.

Regarding Claim 3

Step 2A Prong 1 Analysis: The Limitations:

wherein in a case where it is determined that the state of the microservice satisfies the fault condition… determines whether a fault occurrence situation of the microservice satisfies the fault condition on a basis of the microservice state information, MPEP 2106.04(a)(2); This limitation is a step that covers performance in the mind in the form of evaluation and judgement with the assistance of pen and paper. Therefore, this limitation recites a mental process.

in a case where it is determined that the fault occurrence situation of the microservice does not satisfy the fault condition… generates a fault in the microservice on a basis of a value of the fault setting information determined by using the microservice information, the microservice state information, and the fault condition, and the test execution unit determines again whether the state of the microservice satisfies the fault condition. MPEP 2106.04(a)(2); This limitation is a step that covers performance in the mind in the form of evaluation and judgement with the assistance of pen and paper. Therefore, this limitation recites a mental process.

Step 2A Prong Two Analysis: Claim 3 additionally recites,

…the test execution unit… MPEP 2106.05(f); This limitation recites additional elements that are mere instructions to apply an exception for the abstract ideas.

…the fault setting information creation unit… MPEP 2106.05(f); This limitation recites additional elements that are mere instructions to apply an exception for the abstract ideas.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply”. Mere instructions to apply an exception cannot provide an inventive concept.
The claim is not patent eligible.

Regarding Claim 4

Step 2A Prong 1 Analysis: The Limitations:

…executes a test covering all fault types for all the reliability functions of all the microservices, or executes a test covering a specific fault type for a specific reliability function of a specific microservice selected in advance. MPEP 2106.04(a)(2); This limitation is a step that covers performance in the mind in the form of evaluation and judgement with the assistance of pen and paper. Therefore, this limitation recites a mental process.

Step 2A Prong Two Analysis: Claim 4 additionally recites,

…the test execution unit… MPEP 2106.05(f); This limitation recites additional elements that are mere instructions to apply an exception for the abstract ideas.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply”. Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.

Regarding Claim 5

Step 2A Prong 1 Analysis: See corresponding analysis of Claim 4.

Step 2A Prong 2 Analysis: Claim 5 additionally recites,

wherein the microservice information includes a reliability function set for each microservice, a setting item set to realize a reliability function, and a setting value set in the setting item, the microservice state information includes an identifier of a computer resource deployed as a microservice, an operation rate of each of the computer resources, and a utilization rate of a processor used in the computer resource, and the fault condition includes a fault type indicating a fault to be generated in the microservice for each of the reliability functions, a fault setting item related to a fault, and a setting value set in the fault setting item. MPEP 2106.05(e); This limitation recites additional elements that do not apply an exception for the abstract ideas in a meaningful way.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements do not apply the exception in a meaningful way. The claim is not patent eligible.

Regarding Claim 6

Step 2A Prong 1 Analysis: See corresponding analysis of Claim 5.

Step 2A Prong 2 Analysis: Claim 6 additionally recites,

wherein the fault setting information is associated with the reliability function, the fault type, and the fault setting item, and an identifier of the computer resource of the microservice that generates the fault is set as a setting value of the fault setting item. MPEP 2106.05(e); This limitation recites additional elements that do not apply an exception for the abstract ideas in a meaningful way.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements do not apply the exception in a meaningful way. The claim is not patent eligible.

Regarding Claim 7

Step 2A Prong 1 Analysis: See corresponding analysis of Claim 5.

Step 2A Prong 2 Analysis: Claim 7 additionally recites,

wherein in a case where the reliability function of the fault setting information is autoscale, a computer resource kill or a processor load is associated with the fault type, and in a case where the reliability function is timeout, an HTTP status is associated with the fault type. MPEP 2106.05(e); This limitation recites additional elements that do not apply an exception for the abstract ideas in a meaningful way.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements do not apply the exception in a meaningful way. The claim is not patent eligible.

Claim 8 is rejected under 35 U.S.C. 101 under the same grounds of rejection as claim 1.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-8 are rejected under 35 U.S.C. 102(a)(1) and 35 U.S.C. 102(a)(2) as being anticipated by Baker et al. (U.S. Publication No. 2025/0077374 A1), hereinafter referred to as Baker.

Regarding Claim 1, Baker teaches:

A test support device comprising: a storage unit; and a processor, ([0043]; regarding, “the processing centers 210 include multiple computing resources, such one or more processors 212, memory devices 214, and data storage device 218. In one embodiment, the processing centers 210 are used for storing and retrieving digital information utilized by the service under test.”);

wherein the storage unit includes microservice information including information regarding a reliability function set in a microservice and a value of a setting item set in the reliability function, microservice state information including information regarding a state of the microservice, and a fault condition including information regarding a fault to be generated in the microservice configured in a test target system, (Figs. 3-6, [0046]; regarding, “table 300 of a telemetry database illustrating a plurality of fault scenarios and telemetry data associated with each of the plurality of fault scenarios in accordance with one or more embodiments. As illustrated, table 300 includes a plurality of entries 302 that each include an identification of the service under test 304, an identification of the configuration 306 of the service under test, an identification of a fault scenario 308 applied to the configuration of the service under test, one or more telemetry data 310 collected during the application of the fault scenario to the configuration of the service under test, and one or more SLIs 312 calculated based on the telemetry data 310.”; Fig. 4, [0050]; regarding, “table 400 of a telemetry database illustrating a plurality of fault scenarios and a plurality of anomalies associated with each of the plurality of fault scenarios in accordance with one or more embodiments. As illustrated, table 400 includes a plurality of entries 402 that each include an identification of a fault scenario 404 and one or more anomalies 406 that comprise the fault scenario 404.”);

and the processor includes: a test execution unit that selects the microservice and the reliability function, which are configured in the test target system and tested for a fault, on a basis of the microservice information, and selects a fault type of a fault to be generated in the microservice in the test on a basis of the fault condition; ([0061]; regarding, “the method 700 includes selecting, based on the telemetry data, a first fault scenario from the first plurality of fault scenarios. In some embodiments, multiple fault scenarios can be identified based on the first plurality of fault scenarios. In one embodiment, the first fault scenario is selected based on a determination that a service level indicator, calculated based on the recorded telemetry data, regarding the operation of the service under test corresponding to the first fault scenario deviates from an expected value by more than a threshold amount.”);

and a fault setting information creation unit that selects the microservice to generate a fault on a basis of the microservice state information, determines a setting value of a fault setting item for the microservice and the fault type, and creates fault setting information. (Figs. 3-6, [0056]; regarding, “the chaos engine may include a database of previously generated fault scenarios that were applied to previous configurations and the chaos engine may be configured to identify a previous configuration that is similar to a current configuration. In one embodiment, a previous configuration may be determined to be similar to a current configuration based on the previous configuration including a threshold number, or threshold percentage, of computing resources that are the same as the current configuration.”; [0051]; regarding, “table 500 includes a plurality of entries 502 that each include an identification of an anomaly 504, an identification of a computing resource 506 that the anomaly will be applied to, an identification of a type 508 of the computing resource 506, an anomaly type 510 that will be applied…”).

Regarding Claim 2, Baker teaches the device of claim 1 as referenced above. Baker further teaches:

wherein the test execution unit determines whether the state of the microservice satisfies the fault condition on a basis of the microservice state information, and waits until the state of the microservice satisfies the fault condition in a case where it is determined that the state of the microservice does not satisfy the fault condition. ([0039]; regarding, “the performance analysis system 120 identifies one or more vulnerabilities of the service under test 112 based on one or more of the calculated SLIs of the service under test 112, the data stored in the telemetry database 122, and one or more Service Level Objectives (SLOs) of the service under test 112. In one embodiment, the SLOs are desired target values for each SLI. In another embodiment, the SLOs are acceptable ranges for each SLI. The SLOs of the service under test 112 may be set by an operator of the service under test 112.”; [0045]; regarding, “determination that an anomaly of a fault scenario that impacted the operation of the service under test can be based on identifying an anomaly related to the SLI that has deviation of greater than a threshold amount from its corresponding SLO.
The chaos engine iteratively creates new additional fault scenarios by modifying anomalies to quantify the impact that each anomaly has on the service under test.”).

Regarding Claim 3, Baker teaches the device of claim 2 as referenced above. Baker further teaches:

wherein in a case where it is determined that the state of the microservice satisfies the fault condition, the test execution unit determines whether a fault occurrence situation of the microservice satisfies the fault condition on a basis of the microservice state information, ([0045]; regarding, “a determination that an anomaly of a fault scenario that impacted the operation of the service under test can be based on identifying an anomaly related to the SLI that has deviation of greater than a threshold amount from its corresponding SLO.”; [0065]; regarding, “the expected value of the SLI is obtained based on an analysis of telemetry data regarding the operation of the service under test under normal operating conditions. For example, the service under test may be executed without the application of any fault scenarios and telemetry data can be collected and analyzed to calculate the expected values for each SLI.”);

and in a case where it is determined that the fault occurrence situation of the microservice does not satisfy the fault condition, the fault setting information creation unit generates a fault in the microservice on a basis of a value of the fault setting information determined by using the microservice information, the microservice state information, and the fault condition, and the test execution unit determines again whether the state of the microservice satisfies the fault condition. (Figs. 3-6, [0045]; regarding, “the chaos engine analyzes the data stored in the tables of the telemetry database to determine an impact that the applied fault scenarios had on the service under test. In addition, the chaos engine analyzes the data stored in the tables of the telemetry database to create additional fault scenarios that are applied to service under test.”).

Regarding Claim 4, Baker teaches the device of claim 3 as referenced above. Baker further teaches:

wherein the test execution unit executes a test covering all fault types for all the reliability functions of all the microservices, or executes a test covering a specific fault type for a specific reliability function of a specific microservice selected in advance. (Figs. 3-6, [0057]; regarding, “the first set of fault scenarios can include one or more fault scenarios that were previously run on the service under test and that were identified as important (e.g., because those previous fault scenarios result in the service under test crossing its SLO thresholds). In another embodiment, the first set of fault scenarios can include one or more fault scenarios that are known to be impactful to other services that have been tested, when the configuration of those services is similar to the current service under test (e.g., both services might use virtual machine (VM) s, switches, and a structured query language (SQL) server). For example, in some cases a service operator may periodically test the service under test and may record fault scenarios that were identified as important. These recorded fault scenarios can be reapplied to the service under test each time a change is made to the configuration of the service under test.”; [0058]; regarding, “a set of anomalies from which the applied anomalies are randomly selected is determined based on the type of the computing resource being targeted.”; [0059]; regarding, “the method 700 includes applying each of the first plurality of fault scenarios to the service under test. In one embodiment, applying each of the first set of fault scenarios includes simulating the one or more anomalies specified by the fault scenario in the computing environment in which the service under test is executing.”).

Regarding Claim 5, Baker teaches the device of claim 4 as referenced above. Baker further teaches:

wherein the microservice information includes a reliability function set for each microservice, (Figs. 3-6, [0037]; regarding, “Capacity SLIs measure the resource utilization and capacity limits of the service under test 112… Capacity SLIs help identify when the service under test 112 is approaching its resource limits and may require scaling or optimization.”); a setting item set to realize a reliability function, and a setting value set in the setting item, (Figs. 3-6);

the microservice state information includes an identifier of a computer resource deployed as a microservice, an operation rate of each of the computer resources, and a utilization rate of a processor used in the computer resource, (Figs. 3-6, [0030]; regarding, “The computing environment 110 includes monitoring and logging mechanisms that capture relevant telemetry data of the computing resources and the service under test 112 during the operation of the service under test 112. The telemetry data are stored in the telemetry metric database 122 of the performance analysis system 120.”; [0060]; regarding, “the telemetry data is collected by the computing environment and stored in a telemetry metric database. The telemetry data provides real-time information about the state and performance of the computing resources utilized by the service under test. As discussed in more detail above, the collected telemetry data can include CPU usage, memory usage, disk I/O, network traffic, system load, application performance, event logs, power and temperature, and other custom metrics.”);

and the fault condition includes a fault type indicating a fault to be generated in the microservice for each of the reliability functions, a fault setting item related to a fault, and a setting value set in the fault setting item. (Figs. 3-6, [0059]; regarding, “method 700 includes applying each of the first plurality of fault scenarios to the service under test. In one embodiment, applying each of the first set of fault scenarios includes simulating the one or more anomalies specified by the fault scenario in the computing environment in which the service under test is executing.”).

Regarding Claim 6, Baker teaches the device of claim 5 as referenced above. Baker further teaches:

wherein the fault setting information is associated with the reliability function, the fault type, and the fault setting item, and an identifier of the computer resource of the microservice that generates the fault is set as a setting value of the fault setting item. (Figs. 3-6, [0030]; regarding, “The computing environment 110 includes monitoring and logging mechanisms that capture relevant telemetry data of the computing resources and the service under test 112 during the operation of the service under test 112. The telemetry data are stored in the telemetry metric database 122 of the performance analysis system 120.”; [0060]; regarding, “the telemetry data is collected by the computing environment and stored in a telemetry metric database. The telemetry data provides real-time information about the state and performance of the computing resources utilized by the service under test. As discussed in more detail above, the collected telemetry data can include CPU usage, memory usage, disk I/O, network traffic, system load, application performance, event logs, power and temperature, and other custom metrics.”; [0061]; regarding, “method 700 includes selecting, based on the telemetry data, a first fault scenario from the first plurality of fault scenarios. In some embodiments, multiple fault scenarios can be identified based on the first plurality of fault scenarios. In one embodiment, the first fault scenario is selected based on a determination that a service level indicator, calculated based on the recorded telemetry data, regarding the operation of the service under test corresponding to the first fault scenario deviates from an expected value by more than a threshold amount.”).

Regarding Claim 7, Baker teaches the device of claim 5 as referenced above. Baker further teaches:

wherein in a case where the reliability function of the fault setting information is autoscale, a computer resource kill or a processor load is associated with the fault type, and in a case where the reliability function is timeout, an HTTP status is associated with the fault type. ([0031]; regarding, “the chaos engine 130 is a computing system that is configured to create fault scenarios 132 that are applied to the service under test 112. Each fault scenario 132 includes one or more anomalies, such as injecting network latency, randomly terminating services, introducing a central processing unit (CPU) spikes, or simulating sudden increases in user traffic (e.g., creating and injecting artificial sure traffic to the service under test). The chaos engine 130 can generate fault scenarios 132 based on one or more of the configurations 114 of computing resources, data relating to the operation of the service under test 112 from the performance analysis system 120, and user input.”).

Claim 8 is rejected under 35 U.S.C. 102 under the same grounds of rejection as claim 1.
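The telemetry-database rows Baker describes (tables 300 and 500, per the paragraphs quoted above) and the claim-7 association between reliability function and fault type can be sketched as data structures. All field, class, and value names below are illustrative assumptions, not identifiers used by Baker or the applicant:

```python
# Illustrative sketch of the cited record structures; names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class TelemetryEntry:
    """One row of Baker's table 300: a fault scenario applied to a service."""
    service_under_test: str        # identification of the service under test (304)
    configuration: str             # configuration of the service under test (306)
    fault_scenario: str            # fault scenario applied to the configuration (308)
    telemetry: dict = field(default_factory=dict)  # collected telemetry data (310)
    slis: dict = field(default_factory=dict)       # SLIs computed from telemetry (312)

@dataclass
class Anomaly:
    """One row of Baker's table 500: an anomaly within a fault scenario."""
    anomaly_id: str                # identification of the anomaly (504)
    resource_id: str               # computing resource the anomaly targets (506)
    resource_type: str             # type of the computing resource (508)
    anomaly_type: str              # anomaly type that will be applied (510)

# Claim 7's mapping from reliability function to associated fault types
# (fault-type strings here are assumed labels).
FAULT_TYPES_BY_RELIABILITY_FUNCTION = {
    "autoscale": ["resource_kill", "cpu_load"],
    "timeout": ["http_status"],
}
```

The mapping makes the claim-7 limitation concrete: an "autoscale" reliability function is exercised by killing a resource or loading a processor, while a "timeout" function is exercised via an HTTP status fault.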
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATHEW GUSTAFSON whose telephone number is (571) 272-5273. The examiner can normally be reached Monday-Friday, 8:00-4:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bryce Bonzo, can be reached at (571) 272-3655. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/M.D.G./
Examiner, Art Unit 2113

/BRYCE P BONZO/
Supervisory Patent Examiner, Art Unit 2113
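The control flow recited in claims 2 and 3 (wait until the microservice state satisfies the fault condition, inject a fault if none has occurred, then re-check the state) can be sketched as follows. Function, parameter, and key names are hypothetical, illustrative only:

```python
# Minimal sketch of the claimed test flow; all names are assumptions.
import time

def run_fault_test(get_state, fault_condition, inject_fault,
                   poll_secs=1.0, max_polls=10):
    """Drive one fault-injection round against a single microservice."""
    # Claim 2: wait until the microservice state satisfies the fault condition
    # (bounded here by max_polls so the sketch always terminates).
    polls = 0
    while not fault_condition(get_state()) and polls < max_polls:
        time.sleep(poll_secs)
        polls += 1
    # Claim 3: if the condition holds but no fault has occurred yet,
    # generate a fault, then determine again whether the state satisfies it.
    state = get_state()
    if fault_condition(state) and not state.get("fault_occurred", False):
        inject_fault(state)
        state = get_state()
    return fault_condition(state)
```

In use, `get_state` would read the microservice state information, `fault_condition` would encode the stored fault condition, and `inject_fault` would apply the fault setting information (fault type, setting item, setting value) to the selected microservice.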

Prosecution Timeline

Feb 14, 2025
Application Filed
Mar 13, 2026
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572400: DATABASE SWITCHOVER IN A DISTRIBUTED DATABASE SYSTEM (2y 5m to grant; granted Mar 10, 2026)
Patent 12461830: RESOURCE-AWARE WORKLOAD REALLOCATION ACROSS CLOUD ENVIRONMENTS (2y 5m to grant; granted Nov 04, 2025)
Patent 12332719: POWER SUPPLY REDUNDANCY CONTROL SYSTEM AND METHOD FOR GPU SERVER AND MEDIUM (2y 5m to grant; granted Jun 17, 2025)
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 100%
With Interview: 99% (+0.0%)
Median Time to Grant: 1y 10m
PTA Risk: Low
Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
