Prosecution Insights
Last updated: April 19, 2026
Application No. 18/775,079

SYSTEMS AND METHODS FOR DETECTING FAILURES OF COMPUTER APPLICATIONS

Status: Final Rejection (§102)
Filed: Jul 17, 2024
Examiner: WILSON, YOLANDA L
Art Unit: 2113
Tech Center: 2100 — Computer Architecture & Software
Assignee: Fidelity Information Services LLC
OA Round: 2 (Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
Grant Probability With Interview: 90%

Examiner Intelligence

Career Allow Rate: 84% (882 granted / 1051 resolved; +28.9% vs TC avg, above average)
Interview Lift: +5.7% (a moderate, roughly +6% lift on resolved cases with an interview)
Typical Timeline: 2y 8m average prosecution; 42 applications currently pending
Career History: 1093 total applications across all art units
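These headline figures reduce to simple ratios over the examiner's resolved cases. A quick check in Python using the counts shown above; the interview lift is taken as reported, since the underlying with/without-interview counts are not shown on this page.

```python
granted = 882
resolved = 1051

# Career allow rate: share of resolved applications that were granted.
allow_rate = granted / resolved            # 0.839... -> displayed as 84%
print(f"Career allow rate: {allow_rate:.1%}")

# The reported +5.7-point interview lift, added to the baseline,
# reproduces the ~90% "with interview" figure shown on this page.
with_interview = allow_rate + 0.057        # 0.896... -> displayed as 90%
print(f"Estimated allow rate with interview: {with_interview:.1%}")
```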

Statute-Specific Performance

§101: 22.0% (-18.0% vs TC avg)
§103: 27.5% (-12.5% vs TC avg)
§102: 31.4% (-8.6% vs TC avg)
§112: 9.0% (-31.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 1051 resolved cases.

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 21-40 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Barbee et al. (US 2020/0089596 A1).

As per claim 21, Barbee et al. discloses a system for testing of at least one computer application, comprising: a non-transitory computer-readable medium configured to store instructions; and at least one processor configured to execute the instructions (paragraph 0071 - The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.) to perform operations comprising: creating a synthetic script configured to simulate feedback from at least one computer application; generating a synthetic test to simulate the feedback by executing the synthetic script to perform at least one automation test (paragraph 0059 - After selecting and defining a synthetic performance test (101), a first instance of the test is performed (102); paragraphs 0029-0047 - For example, imagine that we have an application which is being deployed in a datacenter in Washington, D.C., USA, and the developer wants to validate its performance around the world. So, the developer builds a synthetic performance measurement script with the following steps: [0030] 1. Open site homepage; [0031] 2. Log in as user ‘XYZ’ with password ‘Really_Secret’; [0032] 3. Select link “Account information”; and [0033] 4. Enter (some field information) and click “View details”. [0034] The synthetic performance test script may, for example, also indicate that it should be executed on the following city, browser, and device combinations: [0035] 1. New York, desktop IBM-compatible Personal Computer (PC): [0036] 1a: Microsoft Internet Explorer (IE) 11 browser [0037] 1b: Google Chrome browser [0038] 2. San Diego, emulated Google Android smartphone on a 4G mobile network: [0039] 2a: Mozilla Firefox mobile browser [0040] 2b: Google Chrome mobile browser [0041] 3. Sao Paulo, Apple iPad 4 on Wi-Fi network, Apple Safari browser [0042] 4. London, desktop IBM-compatible Personal Computer (PC): [0043] 4a: Microsoft Internet Explorer (IE) 11 browser [0044] 4b: Mozilla Firefox desktop browser [0045] 4c: Google Chrome browser [0046] 5. Sydney, Apple MacIntosh laptop computer with Apple Safari browser [0047] 6. . . . and other combinations as desired.); determining response data of the at least one computer application by inputting the synthetic test and analytic data to a comparison model configured to detect a failure by the at least one computer application (paragraph 0059 - which generates a “baseline” set of results that are stored (125) for later reference. Then, the status of a particular program product, whether in development or deployed, is monitored for changes, such as, but not limited to, changes in the position (103) within a continuous development pipeline, change of a performance objective (104) (e.g., delay time, maximum load capacity, addition of a supported browser and/or client device, etc.), expiration of a delay timer (105) (e.g., periodic re-test), or other status change criteria.; paragraph 0061 - As such, the test results storage (125), such as a database (centralized or distributed), accumulates the baseline results as well as a plurality of subsequent results from re-runs of the same synthetic performance tests, albeit not only on the same client conditions all the time, but also over a wide variety of client conditions, from both tests performed while a program code element is within the continuous development pipeline, as well as while that code element is deployed in the field.; paragraph 0063 - The stored results (125) are retrieved or accessed, and compared (108) against these relative limits (107). This comparison can be event-driven, such as a trigger occurring (e.g., 103, 104, 106). In some other embodiments, this may also be done periodically, continuously, or some combination of time-based and event-driven; the relative limits are disclosed in paragraph 0062 and are equivalent to failures if exceeded or below a value); and generating a report if a failure is detected (paragraph 0063 - Depending on the results of the comparison, one or more quality of service machine logic rule(s) (110) are executed (111), which may include but are not limited to generating automatic reports; the reports indicate whether the relative limits (equivalent to failures if exceeded or below a value) have been crossed).
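For readers less familiar with synthetic monitoring, the script Barbee describes in paragraphs 0029-0033 is an ordinary browser-automation flow of the kind written with a functional automation testing framework. Below is a minimal illustrative sketch in Python using Selenium; the URL and element locators are hypothetical placeholders, and only the user and password strings come from Barbee's quoted example.

```python
# Illustrative sketch of a synthetic script in the style of Barbee's
# example (pars. 0029-0033). URL and element locators are hypothetical
# placeholders; only 'XYZ' / 'Really_Secret' come from Barbee's text.
import time

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
start = time.perf_counter()

driver.get("https://example.com/")                                 # 1. Open site homepage
driver.find_element(By.ID, "username").send_keys("XYZ")            # 2. Log in as user 'XYZ'
driver.find_element(By.ID, "password").send_keys("Really_Secret")  #    with password 'Really_Secret'
driver.find_element(By.ID, "login").click()
driver.find_element(By.LINK_TEXT, "Account information").click()   # 3. Select link "Account information"
driver.find_element(By.ID, "details-field").send_keys("1234")      # 4. Enter (some field information)
driver.find_element(By.ID, "view-details").click()                 #    and click "View details"

elapsed = time.perf_counter() - start  # response data fed to the comparison step
driver.quit()
print(f"Synthetic run finished in {elapsed:.2f} s")
```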
As per claim 22, Barbee et al. discloses wherein creating the synthetic script is based at least in part on open-source software (paragraph 0023 - a Synthetic Performance Test execution stage which executes the synthetic performance test scripts associated with the new or modified program code – the program code is included in applications developed as open source).

As per claim 23, Barbee et al. discloses wherein creating the synthetic script involves using a functional automation testing framework (paragraph 0023 - a Synthetic Performance Test execution stage which executes the synthetic performance test scripts associated with the new or modified program code – the program code is inclusive of being developed in an automation testing framework).

As per claim 24, Barbee et al. discloses wherein the at least one processor is further configured to perform operations comprising: receiving a monitoring request (paragraph 0059 - After selecting and defining a synthetic performance test).

As per claim 25, Barbee et al. discloses wherein the monitoring requests are received at periodic scheduled time intervals (paragraph 0060 - When such a change is detected or a period of time since the last execution of the synthetic performance test has elapsed, a synthetic performance test is selected and re-executed (106)).

As per claim 26, Barbee et al. discloses wherein the analytic data comprises information regarding the at least one computer application or a response time of the at least one computer application (paragraph 0059 - change of a performance objective (104) (e.g., delay time, maximum load capacity, addition of a supported browser and/or client device, etc.)).
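The trigger pattern the examiner reads onto claims 24-25 (Barbee, paragraph 0060) is: re-run the synthetic test when a status change is detected or a scheduled period elapses. A minimal sketch of that loop follows; run_synthetic_test and status_changed are hypothetical stand-ins for whatever the system actually wires in, and the hourly interval is invented for illustration.

```python
import time

RETEST_INTERVAL_S = 3600  # hypothetical schedule: re-test at least hourly

def monitor(run_synthetic_test, status_changed):
    """Re-execute the synthetic test on a status change or an elapsed
    timer, mirroring the event-driven / periodic triggers of Barbee
    par. 0060. Runs forever; both callables are hypothetical hooks."""
    last_run = float("-inf")  # force an initial baseline run
    while True:
        timer_expired = time.monotonic() - last_run >= RETEST_INTERVAL_S
        if status_changed() or timer_expired:
            run_synthetic_test()
            last_run = time.monotonic()
        time.sleep(10)  # polling granularity between trigger checks
```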
As per claim 27, Barbee et al. discloses wherein generating the report is based on an output of an analysis technique configured to compare the analytic data with an anticipated result (paragraph 0063 - The stored results (125) are retrieved or accessed, and compared (108) against these relative limits (107). This comparison can be event-driven, such as a trigger occurring (e.g., 103, 104, 106) … Depending on the results of the comparison, one or more quality of service machine logic rule(s) (110) are executed (111), which may include but are not limited to generating automatic reports).

As per claim 28, Barbee et al. discloses wherein the report comprises information regarding the at least one computer application or a response time of the at least one computer application (paragraph 0063 - The stored results (125) are retrieved or accessed, and compared (108) against these relative limits (107). This comparison can be event-driven, such as a trigger occurring (e.g., 103, 104, 106) … Depending on the results of the comparison, one or more quality of service machine logic rule(s) (110) are executed (111), which may include but are not limited to generating automatic reports, which includes reporting any type of information).
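The comparison mapped to claims 27-28 reduces to checking a new result against relative limits derived from a stored baseline (Barbee, paragraphs 0062-0063) and generating a report only when a limit is crossed. A minimal sketch under those assumptions; the 20% tolerance is an invented illustration, not a figure from Barbee or the application.

```python
def check_against_baseline(response_time_s, baseline_s, tolerance=0.20):
    """Compare a new synthetic-test result against a stored baseline.

    A result exceeding the relative limit (baseline plus tolerance)
    counts as a failure, loosely following Barbee's relative-limits
    comparison (pars. 0062-0063). The 20% tolerance is illustrative.
    """
    limit = baseline_s * (1 + tolerance)
    failed = response_time_s > limit
    report = None
    if failed:
        # Generate a report only when a failure is detected.
        report = {
            "status": "FAIL",
            "observed_s": response_time_s,
            "baseline_s": baseline_s,
            "relative_limit_s": limit,
        }
    return failed, report

# Example: a 2.9 s run against a 2.0 s baseline exceeds the 2.4 s limit.
failed, report = check_against_baseline(2.9, 2.0)
print(failed, report)
```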
As per claim 29, Barbee et al. discloses a computer-implemented method for autonomous testing of a computer application, comprising: creating a synthetic script configured to simulate feedback from at least one computer application; generating a synthetic test to simulate the feedback by executing the synthetic script to perform at least one automation test (paragraph 0059 - After selecting and defining a synthetic performance test (101), a first instance of the test is performed (102); paragraphs 0029-0047 - For example, imagine that we have an application which is being deployed in a datacenter in Washington, D.C., USA, and the developer wants to validate its performance around the world. So, the developer builds a synthetic performance measurement script with the following steps: [0030] 1. Open site homepage; [0031] 2. Log in as user ‘XYZ’ with password ‘Really_Secret’; [0032] 3. Select link “Account information”; and [0033] 4. Enter (some field information) and click “View details”. [0034] The synthetic performance test script may, for example, also indicate that it should be executed on the following city, browser, and device combinations: [0035] 1. New York, desktop IBM-compatible Personal Computer (PC): [0036] 1a: Microsoft Internet Explorer (IE) 11 browser [0037] 1b: Google Chrome browser [0038] 2. San Diego, emulated Google Android smartphone on a 4G mobile network: [0039] 2a: Mozilla Firefox mobile browser [0040] 2b: Google Chrome mobile browser [0041] 3. Sao Paulo, Apple iPad 4 on Wi-Fi network, Apple Safari browser [0042] 4. London, desktop IBM-compatible Personal Computer (PC): [0043] 4a: Microsoft Internet Explorer (IE) 11 browser [0044] 4b: Mozilla Firefox desktop browser [0045] 4c: Google Chrome browser [0046] 5. Sydney, Apple MacIntosh laptop computer with Apple Safari browser [0047] 6. . . . and other combinations as desired.); determining response data of the at least one computer application by inputting the synthetic test and analytic data to a comparison model configured to detect a failure by the at least one computer application (paragraph 0059 - which generates a “baseline” set of results that are stored (125) for later reference. Then, the status of a particular program product, whether in development or deployed, is monitored for changes, such as, but not limited to, changes in the position (103) within a continuous development pipeline, change of a performance objective (104) (e.g., delay time, maximum load capacity, addition of a supported browser and/or client device, etc.), expiration of a delay timer (105) (e.g., periodic re-test), or other status change criteria.; paragraph 0061 - As such, the test results storage (125), such as a database (centralized or distributed), accumulates the baseline results as well as a plurality of subsequent results from re-runs of the same synthetic performance tests, albeit not only on the same client conditions all the time, but also over a wide variety of client conditions, from both tests performed while a program code element is within the continuous development pipeline, as well as while that code element is deployed in the field.; paragraph 0063 - The stored results (125) are retrieved or accessed, and compared (108) against these relative limits (107). This comparison can be event-driven, such as a trigger occurring (e.g., 103, 104, 106). In some other embodiments, this may also be done periodically, continuously, or some combination of time-based and event-driven; the relative limits are disclosed in paragraph 0062 and are equivalent to failures if exceeded or below a value); and generating a report if a failure is detected (paragraph 0063 - Depending on the results of the comparison, one or more quality of service machine logic rule(s) (110) are executed (111), which may include but are not limited to generating automatic reports; the reports indicate whether the relative limits (equivalent to failures if exceeded or below a value) have been crossed).

As per claim 30, Barbee et al. discloses wherein creating the synthetic script is based at least in part on open-source software (paragraph 0023 - a Synthetic Performance Test execution stage which executes the synthetic performance test scripts associated with the new or modified program code – the program code is included in applications developed as open source).

As per claim 31, Barbee et al. discloses wherein creating the synthetic script involves using a functional automation testing framework (paragraph 0023 - a Synthetic Performance Test execution stage which executes the synthetic performance test scripts associated with the new or modified program code – the program code is inclusive of being developed in an automation testing framework).

As per claim 32, Barbee et al. discloses further comprising: receiving a monitoring request (paragraph 0059 - After selecting and defining a synthetic performance test).

As per claim 33, Barbee et al. discloses wherein the monitoring requests are received at periodic scheduled time intervals (paragraph 0060 - When such a change is detected or a period of time since the last execution of the synthetic performance test has elapsed, a synthetic performance test is selected and re-executed (106)).

As per claim 34, Barbee et al. discloses wherein the analytic data comprises information regarding the at least one computer application or a response time of the at least one computer application (paragraph 0059 - change of a performance objective (104) (e.g., delay time, maximum load capacity, addition of a supported browser and/or client device, etc.)).
As per claim 35, Barbee et al. discloses wherein generating the report is based on an output of an analysis technique configured to compare the analytic data with an anticipated result (paragraph 0063 - The stored results (125) are retrieved or accessed, and compared (108) against these relative limits (107). This comparison can be event-driven, such as a trigger occurring (e.g., 103, 104, 106) … Depending on the results of the comparison, one or more quality of service machine logic rule(s) (110) are executed (111), which may include but are not limited to generating automatic reports).

As per claim 36, Barbee et al. discloses wherein the report comprises information regarding the at least one computer application or a response time of the at least one computer application (paragraph 0063 - The stored results (125) are retrieved or accessed, and compared (108) against these relative limits (107). This comparison can be event-driven, such as a trigger occurring (e.g., 103, 104, 106) … Depending on the results of the comparison, one or more quality of service machine logic rule(s) (110) are executed (111), which may include but are not limited to generating automatic reports, which includes reporting any type of information).

As per claim 37, Barbee et al. discloses a non-transitory computer-readable medium configured to store instructions configured to be executed by at least one processor to cause the at least one processor to perform operations (paragraph 0071 - The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.), the operations comprising: creating a synthetic script configured to simulate feedback from at least one computer application; generating a synthetic test to simulate the feedback by executing the synthetic script to perform at least one automation test (paragraph 0059 - After selecting and defining a synthetic performance test (101), a first instance of the test is performed (102); paragraphs 0029-0047 - For example, imagine that we have an application which is being deployed in a datacenter in Washington, D.C., USA, and the developer wants to validate its performance around the world. So, the developer builds a synthetic performance measurement script with the following steps: [0030] 1. Open site homepage; [0031] 2. Log in as user ‘XYZ’ with password ‘Really_Secret’; [0032] 3. Select link “Account information”; and [0033] 4. Enter (some field information) and click “View details”. [0034] The synthetic performance test script may, for example, also indicate that it should be executed on the following city, browser, and device combinations: [0035] 1. New York, desktop IBM-compatible Personal Computer (PC): [0036] 1a: Microsoft Internet Explorer (IE) 11 browser [0037] 1b: Google Chrome browser [0038] 2. San Diego, emulated Google Android smartphone on a 4G mobile network: [0039] 2a: Mozilla Firefox mobile browser [0040] 2b: Google Chrome mobile browser [0041] 3. Sao Paulo, Apple iPad 4 on Wi-Fi network, Apple Safari browser [0042] 4. London, desktop IBM-compatible Personal Computer (PC): [0043] 4a: Microsoft Internet Explorer (IE) 11 browser [0044] 4b: Mozilla Firefox desktop browser [0045] 4c: Google Chrome browser [0046] 5. Sydney, Apple MacIntosh laptop computer with Apple Safari browser [0047] 6. . . . and other combinations as desired.); determining response data of the at least one computer application by inputting the synthetic test and analytic data to a comparison model configured to detect a failure by the at least one computer application (paragraph 0059 - which generates a “baseline” set of results that are stored (125) for later reference. Then, the status of a particular program product, whether in development or deployed, is monitored for changes, such as, but not limited to, changes in the position (103) within a continuous development pipeline, change of a performance objective (104) (e.g., delay time, maximum load capacity, addition of a supported browser and/or client device, etc.), expiration of a delay timer (105) (e.g., periodic re-test), or other status change criteria.; paragraph 0061 - As such, the test results storage (125), such as a database (centralized or distributed), accumulates the baseline results as well as a plurality of subsequent results from re-runs of the same synthetic performance tests, albeit not only on the same client conditions all the time, but also over a wide variety of client conditions, from both tests performed while a program code element is within the continuous development pipeline, as well as while that code element is deployed in the field.; paragraph 0063 - The stored results (125) are retrieved or accessed, and compared (108) against these relative limits (107). This comparison can be event-driven, such as a trigger occurring (e.g., 103, 104, 106). In some other embodiments, this may also be done periodically, continuously, or some combination of time-based and event-driven; the relative limits are disclosed in paragraph 0062 and are equivalent to failures if exceeded or below a value); and generating a report if a failure is detected (paragraph 0063 - Depending on the results of the comparison, one or more quality of service machine logic rule(s) (110) are executed (111), which may include but are not limited to generating automatic reports; the reports indicate whether the relative limits (equivalent to failures if exceeded or below a value) have been crossed).
As per claim 38, Barbee et al. discloses wherein the at least one processor is further configured to perform operations comprising: selecting the synthetic script based at least in part on open-source software (paragraph 0023 - a Synthetic Performance Test execution stage which executes the synthetic performance test scripts associated with the new or modified program code – the program code is included in applications developed as open source).

As per claim 39, Barbee et al. discloses wherein the at least one processor is further configured to perform operations comprising: receiving analytic data associated with the at least one computer application (paragraph 0059 - which generates a “baseline” set of results that are stored (125) for later reference. Then, the status of a particular program product, whether in development or deployed, is monitored for changes, such as, but not limited to, changes in the position (103) within a continuous development pipeline, change of a performance objective (104) (e.g., delay time, maximum load capacity, addition of a supported browser and/or client device, etc.), expiration of a delay timer (105) (e.g., periodic re-test), or other status change criteria.; paragraph 0061 - As such, the test results storage (125), such as a database (centralized or distributed), accumulates the baseline results as well as a plurality of subsequent results from re-runs of the same synthetic performance tests, albeit not only on the same client conditions all the time, but also over a wide variety of client conditions, from both tests performed while a program code element is within the continuous development pipeline, as well as while that code element is deployed in the field.; paragraph 0063 - The stored results (125) are retrieved or accessed, and compared (108) against these relative limits (107). This comparison can be event-driven, such as a trigger occurring (e.g., 103, 104, 106). In some other embodiments, this may also be done periodically, continuously, or some combination of time-based and event-driven; the relative limits are disclosed in paragraph 0062 and are equivalent to failures if exceeded or below a value).

As per claim 40, Barbee et al. discloses wherein the comparison model is based on an analysis technique configured to compare the analytic data with an anticipated result (paragraph 0063 - The stored results (125) are retrieved or accessed, and compared (108) against these relative limits (107). This comparison can be event-driven, such as a trigger occurring (e.g., 103, 104, 106) … Depending on the results of the comparison, one or more quality of service machine logic rule(s) (110) are executed (111), which may include but are not limited to generating automatic reports).
Response to Arguments

Applicant's arguments filed 11/26/2025 have been fully considered but they are not persuasive. Concerning Applicant's arguments against the prior art rejection under 35 USC 102(a)(1): a human builds a script using a program on a computer with a processor, and a human can select a script with the help of a software development platform on a computer with a processor. It is inherent that creating a synthetic script involves a computer with a processor.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Yolanda L Wilson, whose telephone number is (571) 272-3653. The examiner can normally be reached M-F (7:30 am - 4 pm).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bryce Bonzo, can be reached at 571-272-3655. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Yolanda L Wilson/
Primary Examiner, Art Unit 2113

Prosecution Timeline

Jul 17, 2024: Application Filed
Aug 23, 2025: Non-Final Rejection (§102)
Nov 26, 2025: Response Filed
Mar 21, 2026: Final Rejection (§102, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602279: SYSTEMS AND METHODS FOR DEBUGGING MULTI-CORE PROCESSORS WITH CONFIGURABLE ISOLATED PARTITIONS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602293: MANAGEMENT OF LOGS IN ASSET GROUPS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12554699: METHOD AND SYSTEM FOR IMPLEMENTING A DATA CORRUPTION DETECTION TEST (granted Feb 17, 2026; 2y 5m to grant)
Patent 12547488: SELF DIAGNOSTIC AND HEALING OF ENTERPRISE NODES THROUGH A SOCIAL MEDIA FABRIC (granted Feb 10, 2026; 2y 5m to grant)
Patent 12524342: MEMORY WITH POST-PACKAGING MASTER DIE SELECTION (granted Jan 13, 2026; 2y 5m to grant)
Study what changed to get these applications past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 84%
Grant Probability With Interview: 90% (+5.7%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate
Based on 1051 resolved cases by this examiner. Grant probability is derived from the career allow rate.
