Prosecution Insights
Last updated: April 19, 2026
Application No. 18/449,120

DETERMINING RELEVANT TESTS THROUGH CONTINUOUS PRODUCTION-STATE ANALYSIS

Status: Non-Final OA (§103)
Filed: Aug 14, 2023
Examiner: BOURZIK, BRAHIM
Art Unit: 2191
Tech Center: 2100 — Computer Architecture & Software
Assignee: Cisco Technology Inc.
OA Round: 3 (Non-Final)
Grant Probability: 65% (Favorable)
Predicted OA Rounds: 3-4
Estimated Time to Grant: 3y 7m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 65% (245 granted / 376 resolved; +10.2% vs TC avg), above average
Interview Lift: +45.0% in resolved cases with interview vs. without (strong)
Typical Timeline: 3y 7m average prosecution
Career History: 410 total applications across all art units; 34 currently pending
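The headline allow rate follows directly from the raw counts above; a quick arithmetic check:

```python
# Verify the career allow rate from the raw counts shown above.
granted = 245
resolved = 376
allow_rate = granted / resolved

print(f"Career allow rate: {allow_rate:.1%}")  # 65.2%, displayed as 65%
```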

Statute-Specific Performance

§101: 13.0% (-27.0% vs TC avg)
§103: 62.8% (+22.8% vs TC avg)
§102: 4.3% (-35.7% vs TC avg)
§112: 8.1% (-31.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 376 resolved cases

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending in this office action. Claims 1, 12-13 and 17 are amended.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Applicant's argument: Regarding claim 1, Petrescu and Tsoukalas, either alone or in combination, fail to disclose "monitoring operation of the production computing environment to obtain a history of operational states and a plurality of changes made to the production computing environment over time, wherein one or more test cases relevant to the plurality of changes are generated by a testing configuration intelligence model based on an accuracy rate or a variance associated with the plurality of changes." Petrescu is generally directed to creating software tests matching production personas. Tsoukalas is generally directed to executing ordered software testing. Priyanka is generally directed to automatic selection of tests for a software system. Petrescu, in Col. 4 lines 22-24, discloses "machine learning techniques may be applied to runtime observations to identify usage characteristics that correspond to distinct personas" (emphasis added). However, Petrescu fails to disclose using machine learning techniques to generate one or more test cases. Therefore, Petrescu also fails to disclose "one or more test cases relevant to the plurality of changes are generated by a testing configuration intelligence model based on an accuracy rate or a variance associated with the plurality of changes."
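Petrescu's persona-identification step, as characterized in the argument (machine learning over runtime observations to find usage characteristics), can be sketched roughly as follows; the field names and the trivial grouping rule are illustrative assumptions, not taken from the patent:

```python
from collections import defaultdict

def identify_personas(observations):
    """Group clients into coarse 'personas' by their observed API usage.

    observations: list of dicts like {"client": "c1", "api": "GetItem"},
    standing in for Petrescu's runtime observations (service logs, call
    traces). A real system would apply a trained model; this sketch uses
    a simple breadth-of-usage rule as a placeholder.
    """
    apis_by_client = defaultdict(set)
    for obs in observations:
        apis_by_client[obs["client"]].add(obs["api"])
    personas = defaultdict(list)
    for client, apis in sorted(apis_by_client.items()):
        label = "broad" if len(apis) > 1 else "narrow"
        personas[label].append(client)
    return dict(personas)
```

Persona-specific tests would then be generated per group, e.g. matching each persona's call mix and volume.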
Examiner response: First, let us address the argument in view of the specification, and specifically Fig. 2: 210 is a production environment; any change to that production environment is collected and is tested in a test environment (Lab/test 250) or a digital twin 252 that replicates the production computing environment. Tsoukalas uses a continuous integration/continuous deployment (CI/CD) system, and any change to the code triggers a generation of tests. The applicant argued that there is no collection or detection of changes to the production environment, but acknowledges that the collected scenarios are based on prerecorded requests in a production environment. Any change to the code or services of the production environment triggers the testing system based on those changes: Col 14 lines 4-11 "As shown in 620, change data associated with updated program code may be received or generated. The updated program code may represent a new version of the program code with include additions, deletions, and/or modifications with respect to the earlier version of the program code tested in 600. The change data may represent data indicative of one or more modified or new portions of an updated version of the program code."; so may any performance issue in the production environment or SLA, such as performance degradation: Col 10 lines 12-20 "For example, the performance metrics may relate to aspects of processor usage, memory usage, disk or storage usage, network usage, and/or the usage of any other measurable resource. The performance metrics may be collected using any suitable techniques, e.g., the instrumentation of various software modules and/or the use of data gathered by an operating system. The performance metrics may be used by various heuristics to determine whether the build passes or fails a particular test". The performance can also be determined based on, "For example, the performance metrics collected for the tests may indicate the number of transactions processed" and the pass/fail ratio. Because of the number of tests to execute, the CI/CD system selects only a subset of tests associated with the change, thus optimizing resource usage for testing. The combination of the art discloses a system where changes can occur during the lifecycle of a deployed product. Monitoring for those changes in code updates or performance triggers a test to ensure that integration and the SLA agreement are maintained in the cloud.

On the other hand, in order to select test cases based on the accuracy rate, Yang discloses the following: page 7 (steps 301-303 of fig. 3) "In a particular implementation, in order to ensure the accuracy of the final test recommendation result, the developer can be set according to the practical situation one is used for detecting whether the target test case can be used as the judging standard recommended test, such as setting a preset threshold value (generally 80%, adjustable value), and then detecting the code coverage is higher than the pre-set threshold...... In a particular implementation, the test host can detect code coverage to the read is higher than 80%, if it is higher, it indicates that the target test current execution of said modified code coverage degree is high, can be used as recommended test, if it is lower, it shows target currently executed test case for the changing extent of coverage of code, it is necessary to set the other selecting writes the new test case or supplement for improving code coverage for example from the test."; based on the accuracy rate, the tests are selected.
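Yang's judging standard, as quoted, reduces to a coverage gate: a target test case is recommended when its coverage of the changed code meets a preset threshold (generally 80%, adjustable); otherwise a new or supplemental case is needed. A minimal sketch, with hypothetical data shapes:

```python
def recommend_tests(cases, changed_lines, threshold=0.80):
    """Split test cases into recommended vs. needing supplements based on
    how much of the changed code each case covers (Yang's preset threshold,
    generally 80% and adjustable).

    cases: list of dicts like {"name": "t1", "covered_lines": {10, 11}}
    changed_lines: set of line numbers modified in the current version
    """
    recommended, needs_supplement = [], []
    for case in cases:
        hit = len(case["covered_lines"] & changed_lines)
        coverage = hit / len(changed_lines) if changed_lines else 1.0
        (recommended if coverage >= threshold else needs_supplement).append(case["name"])
    return recommended, needs_supplement
```

Cases falling below the gate signal, per the quoted passage, that new test cases must be written or supplemented to improve coverage.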
Tsoukalas, based on the changes, selects the test cases that can cover such changes: Col 5 lines 5-15 "In one embodiment, the mapping 130 may indicate which methods, classes, packages, and/or groups were exercised by each test. The mapping 130 may be stored in a data store for reference at a later time. The system 100 may also maintain other test-related metadata 135, such as a history of test execution runtimes, test successes and/or failures, user feedback regarding tests, and so on." Priyanka, on the other hand, discloses: [0060] "Such embodiments provide techniques and systems that automatically select tests from a system test suite for regression testing using machine learning to enable maximum code coverage while only executing required tests based on a detected code change (e.g., code check-in, commit, file change, pushed change, code integration, and/or the like) in a software repository."

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 5-8, 10-14, 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Petrescu US 11416379 B1 in view of Tsoukalas US 10678678 B1 and Yang et al CN110413506A.

As per claim 1, Petrescu discloses a computer-implemented method comprising: obtaining configuration information describing a configuration of a production computing environment, the production computing environment including one or more computing devices and associated software, one or more networking devices and associated software and one or more data storage devices and associated software: col 6 lines 40-50 "In some embodiments, the production environment may be implemented using resources of a provider network. The provider network may include numerous data centers hosting various services and resource pools of computing resources, such as collections of physical and/or virtualized computer servers, storage devices, networking equipment and the like, that are used to implement and distribute the infrastructure and services offered by the provider. Resources of the provider network may be hosted in 'the cloud,' and the provider network may be termed a cloud provider network. The provider network may offer some resource pools and services to multiple clients simultaneously and may thus be termed 'multi-tenant.' The computing resources may, in some embodiments, be offered to clients in units called 'instances,' such as virtual or physical compute instances or storage instances"; obtaining testing information relating to a particular testing scenario to be performed for the production computing environment: col 9 lines 58-63 "An integration test may include testing a service 110A in combination with other services (e.g., service 110N), resources, and components.
A regression test may be used to verify that a change to the program code of the service 110A has not adversely affected existing functionality."; monitoring operation of the production computing environment to obtain a history of operational states and a plurality of changes made to the production computing environment over time: Col 3 lines 35-45 "For example, the runtime observations may include service logs 125 acquired via logging components of various services, e.g., logging 120A at service 110A and logging 120N at service 110N. Service logs 125 may indicate application programming interfaces (APIs) that were invoked by service calls, parameter values (inputs) of calls, responses (outputs) to calls, failure or success of calls, timestamps of calls, and so on. As another example, the runtime observations may include call traces 135 acquired from a call tracing component 130."; Col 7 lines 30-34 "In one embodiment, aspects of the software testing system 100 may be performed continuously and/or repeatedly, e.g., to adapt to changing conditions in the production environment 195."; and determining, based on the operational and testing log, one or more particular changes of the plurality of changes to the production computing environment that should be validated for the particular testing scenario: Fig. 5 and col 14 lines 5-10 "As shown in 510, the method (via the software testing system) may analyze the observations to identify one or more personas. A particular persona may represent a set of usage characteristics or behaviors that are observed in the runtime data for a subset of clients of a service."; Fig. 5 and col 14 lines 22-32 "As shown in 520, the method (via the software testing system) may generate one or more persona-specific tests corresponding to one or more of the identified personas.
For a given persona, the persona-specific test creation may generate one or more tests that seek to match the usage characteristics or the real-world behavior of that persona's subset of clients with respect to the service. A persona-specific test may include a set of input values, a call order, a call volume or call throughput, and so on. Input values in persona-specific tests may be cleaned up by the software testing system, e.g., to replace confidential values observed in production traffic with valid and non-confidential values."; col 7 lines 37-42 "As another example, runtime observations for a particular service may be updated when the program code for the service is updated. In one embodiment, the software testing system 100 may be used in a deployment pipeline for new software (including new versions of software) such that personas are identified or updated based on the latest version of the program code."; But not explicitly: wherein one or more test cases relevant to the plurality of changes are generated by a testing configuration intelligence model based on an accuracy rate or a variance associated with the plurality of changes; obtaining operational and testing data collected over time for a plurality of other production computing environments that have undergone a plurality of testing scenarios and configuration changes that impact the plurality of testing scenarios. Tsoukalas discloses: obtaining operational and testing data collected over time for a plurality of other production computing environments that have undergone a plurality of testing scenarios and configuration changes that impact the plurality of testing scenarios: Col 5 lines 5-15 "In one embodiment, the code coverage data may also indicate additional metrics, such as the percentage of code of a particular file or module that was exercised by a particular test. Based (at least in part) on the code coverage data, a mapping 130 of the tests to the program code may be generated.
The mapping 130 may indicate what portions of the code 170 (if any) were exercised (e.g., encountered, executed, or otherwise performed) by each test in the suite of tests 180. The affected portions of the code may be indicated by line numbers within particular source files. In one embodiment, the mapping 130 may indicate which methods, classes, packages, and/or groups were exercised by each test. The mapping 130 may be stored in a data store for reference at a later time. The system 100 may also maintain other test-related metadata 135, such as a history of test execution runtimes, test successes and/or failures, user feedback regarding tests, and so on."; Examiner interpretation: The data collected is a result of testing scenarios and their corresponding output based on any update or change to the production environment used by CI/CD. The changes, for example, may be between successive versions, such as adding, modifying or deleting objects, and selecting tests is based on the mapping 130 and any change data associated with the updated program. wherein one or more test cases relevant to the plurality of changes are generated by a testing configuration intelligence model based on a variance associated with the plurality of changes: Col 5 lines 45-53 "The subset 181 of the tests may be selected based (at least in part) on the mapping 130 and on the change data associated with the updated program code 171. In one embodiment, the locations of the changed portions of the updated program code 171 (e.g., line numbers in particular files) may be used with the mapping 130 to determine which tests have previously exercised those locations of the code"; It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references.
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Tsoukalas into the teachings of Petrescu for analyzing code coverage differences across environments, e.g. test environments and production environments, for a software product representing an application or service. Using the relevant test selection module, a subset of the tests may be selected from the full suite of tests (Tsoukalas Col 5 lines 40-45). But not explicitly: wherein one or more test cases relevant to the plurality of changes are generated by a testing configuration intelligence model based on an accuracy rate. Yang discloses: wherein one or more test cases relevant to the plurality of changes are generated by a testing configuration intelligence model based on an accuracy rate: page 7 (steps 301-303 of fig. 3) "In a particular implementation, in order to ensure the accuracy of the final test recommendation result, the developer can be set according to the practical situation one is used for detecting whether the target test case can be used as the judging standard recommended test, such as setting a preset threshold value (generally 80%, adjustable value), and then detecting the code coverage is higher than the pre-set threshold...... In a particular implementation, the test host can detect code coverage to the read is higher than 80%, if it is higher, it indicates that the target test current execution of said modified code coverage degree is high, can be used as recommended test, if it is lower, it shows target currently executed test case for the changing extent of coverage of code, it is necessary to set the other selecting writes the new test case or supplement for improving code coverage for example from the test."; It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references.
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Yang into the teachings of Petrescu and Tsoukalas to judge whether the target test case is a recommended test case according to the code changes, reducing the workload of development and testing personnel: a pre-constructed mapping between code and test cases is searched for correlations, so that the test cases associated with the changed code can be recommended for the current version, helping testers effectively locate the regression test range and helping developers understand and find the relations of the changed code, reducing code bugs (Yang page 8).

As per claim 2, the rejection of claim 1 is incorporated and furthermore Petrescu, Tsoukalas and Yang disclose: wherein determining the one or more particular changes comprises performing machine learning analysis of the operational and testing data and on the history of the operational states of the production computing environment and the plurality of changes made to the production computing environment over time: Petrescu Col 4 lines 22-27 "In one embodiment, machine learning techniques may be applied to runtime observations to identify usage characteristics that correspond to distinct personas. For example, a neural network may be trained with sets of usage characteristics for known personas in order to identify new personas not found in the training set."; Examiner interpretation: See also Tsoukalas col 5 lines 59-67 for using ML to select tests for changes in code development.
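The selection mechanism attributed to Tsoukalas (mapping 130 from each test to the code locations it exercised, intersected with the change data for the updated code) can be sketched as follows; the data shapes are assumptions for illustration, not taken from the patent:

```python
def select_tests(mapping, change_data):
    """Select the subset of tests whose previously exercised locations
    overlap the changed locations of the updated program code.

    mapping: {test_name: set of (source_file, line_number) exercised},
        standing in for Tsoukalas's mapping 130
    change_data: set of (source_file, line_number) tuples modified in
        the new version of the code
    """
    return sorted(test for test, exercised in mapping.items()
                  if exercised & change_data)
```

Running only this subset, rather than the full suite, is what the examiner points to as optimizing resource usage for testing.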
As per claim 5, the rejection of claim 1 is incorporated and furthermore Petrescu, Tsoukalas and Yang disclose: wherein determining further comprises determining at least one test case of the one or more test cases for the particular testing scenario that is specific to the one or more particular changes and known errors the one or more particular changes can potentially cause in the production computing environment: Petrescu Col 14 lines 22-32 "As shown in 520, the method (via the software testing system) may generate one or more persona-specific tests corresponding to one or more of the identified personas. For a given persona, the persona-specific test creation may generate one or more tests that seek to match the usage characteristics or the real-world behavior of that persona's subset of clients with respect to the service. A persona-specific test may include a set of input values, a call order, a call volume or call throughput, and so on. Input values in persona-specific tests may be cleaned up by the software testing system, e.g., to replace confidential values observed in production traffic with valid and non-confidential values."; Examiner interpretation: See also Tsoukalas for test selection based on changes (Col 4 lines 6-25).

As per claim 6, the rejection of claim 5 is incorporated and furthermore Petrescu does not explicitly disclose: determining further includes defining metadata details that outline a context as to why each of the one or more test cases is relevant. Tsoukalas discloses: determining further includes defining metadata details that outline a context as to why each of the one or more test cases is relevant: Col 2 lines 32-42 "
When a version of the program code with updated or new portions of code is sought to be tested, the portions that were changed or added (e.g., the source files and lines of code) may be determined, and the mapping may be used to determine which tests are relevant to (e.g., likely to be exercised by) these changed or new portions. A test selection system or service may produce an ordered sequence of tests that are likely to be exercised by the updated program code. In one embodiment, a confidence score may be determined for each test in a suite of tests, where the confidence score represents the likelihood that a test will exercise the updated or new portions of code, and tests whose confidence scores meet a predetermined confidence threshold may be included in the selection of tests while tests that fail to meet the confidence threshold may be excluded."; It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Tsoukalas into the teachings of Petrescu and Yang for analyzing code coverage differences across environments, e.g. test environments and production environments, for a software product representing an application or service. Using the relevant test selection module, a subset of the tests may be selected from the full suite of tests (Tsoukalas Col 5 lines 40-45).

As per claim 7, the rejection of claim 1 is incorporated and furthermore Petrescu does not explicitly disclose: wherein determining the one or more particular changes further includes determining a variance representing a deviation range of one or more parameters for the one or more particular changes.
Tsoukalas discloses: wherein determining the one or more particular changes further includes determining the variance representing a deviation range of one or more parameters for the one or more particular changes: Col 7 lines 26-32 "In one embodiment, the changes to portion(s) 170B and 170N may be determined based (at least in part) on change data associated with the updated code 171. For example, the change data may indicate one or more line numbers within particular source files. In one embodiment, the change data may indicate which methods, classes, packages, and/or groups were modified relative to a previous version of the program code (e.g., the version that was tested using the full suite of tests)." It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Tsoukalas into the teachings of Petrescu and Yang for analyzing code coverage differences across environments, e.g. test environments and production environments, for a software product representing an application or service. Using the relevant test selection module, a subset of the tests may be selected from the full suite of tests (Tsoukalas Col 5 lines 40-45).

As per claim 8, the rejection of claim 7 is incorporated and furthermore Petrescu, Tsoukalas and Yang disclose: executing the particular testing scenario in a test computing environment or a digital twin model of the production computing environment with the one or more particular changes in place to replicate the production computing environment: Petrescu Col 10 lines 23-35 "Using a component 180 for persona-specific test execution, the service 110A may be subjected to automated testing using the persona-specific test(s) 171 in an execution environment 395.
The execution environment 395 may represent a test environment, a development environment, a pre-production environment, or a production environment. The execution environment 395 may represent a different environment than the production environment 195 from which the runtime observations were collected. In one embodiment, the tests may be executed in a test environment in which the software product may be insulated from real-time interaction with real-world clients, e.g., by processing only synthetic requests or prerecorded client requests that were previously captured in a production environment."

As per claim 10, the rejection of claim 8 is incorporated and furthermore Petrescu, Tsoukalas and Yang disclose: wherein executing further comprises executing the particular testing scenario in the production computing environment: Petrescu Col 10 lines 23-35 "Using a component 180 for persona-specific test execution, the service 110A may be subjected to automated testing using the persona-specific test(s) 171 in an execution environment 395. The execution environment 395 may represent a test environment, a development environment, a pre-production environment, or a production environment."

As per claim 11, the rejection of claim 1 is incorporated and furthermore Petrescu and Tsoukalas disclose: wherein the one or more particular changes comprise an ordered list of changes for executing the particular testing scenario: Petrescu col 7 lines 37-42 "As another example, runtime observations for a particular service may be updated when the program code for the service is updated. In one embodiment, the software testing system 100 may be used in a deployment pipeline for new software (including new versions of software) such that personas are identified or updated based on the latest version of the program code"; Examiner interpretation: changes in a previous version are earlier than those in the new version, and all are tested in the CI/CD pipelines.
But not explicitly: and wherein the one or more particular changes are selected by the testing configuration intelligence model based on a strength of the variance. Yang discloses: wherein the one or more particular changes are selected by the testing configuration intelligence model based on a strength of the variance: page 7 (steps 301-303 of fig. 3) "In a particular implementation, in order to ensure the accuracy of the final test recommendation result, the developer can be set according to the practical situation one is used for detecting whether the target test case can be used as the judging standard recommended test, such as setting a preset threshold value (generally 80%, adjustable value), and then detecting the code coverage is higher than the pre-set threshold...... In a particular implementation, the test host can detect code coverage to the read is higher than 80%, if it is higher, it indicates that the target test current execution of said modified code coverage degree is high, can be used as recommended test, if it is lower, it shows target currently executed test case for the changing extent of coverage of code, it is necessary to set the other selecting writes the new test case or supplement for improving code coverage for example from the test."; It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Yang into the teachings of Petrescu and Tsoukalas to judge whether the target test case is a recommended test case according to the code changes, reducing the workload of development and testing personnel.
A pre-constructed mapping between code and test cases is searched for correlations, so that the test cases associated with the changed code can be recommended for the current version, helping testers effectively locate the regression test range and helping developers understand and find the relations of the changed code, reducing code bugs (Yang page 8).

As per claim 12, the rejection of claim 1 is incorporated and furthermore Petrescu, Tsoukalas and Yang disclose: wherein determining is performed based further on one or more state changes performed through a continuous integration/continuous delivery (CI/CD) pipeline: Petrescu Col 10 lines 52-62 "In one embodiment, aspects of the persona-specific test execution 180 may be part of or invoked by a continuous integration system or continuous deployment system. For example, program code for the software product may be stored by a managed source-control service that hosts repositories. Using an automated pipeline management service, the program code may be built, tested, and deployed for every code change or according to other triggers. The tests and/or deployment to production may be performed automatically as part of such a pipeline."; wherein the production computing environment is different from each of the plurality of other production computing environments: Petrescu Col 7 lines 30-34 "In one embodiment, aspects of the software testing system 100 may be performed continuously and/or repeatedly, e.g., to adapt to changing conditions in the production environment 195. For example, a dependency graph involving a particular set of services may be kept up to date based on the latest service call traces 135, e.g., by revising the dependency graph periodically. As another example, runtime observations for a particular service may be updated when the program code for the service is updated.
In one embodiment, the software testing system 100 may be used in a deployment pipeline for new software (including new versions of software) such that personas are identified or updated based on the latest version of the program code."

Claims 13, 14, 16 are the apparatus claims corresponding to method claims 1, 2, (7 and 8) and are rejected under the same rationale set forth in connection with the rejection of claims 1, 2, (7 and 8) above. Claims 17, 18, 19 are the non-transitory computer readable storage media claims corresponding to method claims 1, 2, (7 and 8) and are rejected under the same rationale set forth in connection with the rejection of claims 1, 2, (7 and 8) above.

Claims 3-4, 9, 15 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Petrescu US 11416379 B1 in view of Tsoukalas US 10678678 B1 and furthermore in view of Yang et al CN110413506A and Priyanka US 20230222051 A1.

As per claim 3, the rejection of claim 1 is incorporated and furthermore Petrescu does not explicitly disclose: assigning a respective weight to one or more of the plurality of changes made to the production computing environment, wherein the respective weight represents a relative impact of an associated change among the plurality of changes. Priyanka discloses: assigning a respective weight to one or more of the plurality of changes made to the production computing environment, wherein the respective weight represents a relative impact of an associated change among the plurality of changes: [0082] "For example, the test selection system 100, 200 may receive a respective component criticality score based on the source file 206 or plurality of source files 206 corresponding to a detected code change. The component criticality score may be a score assigned to the component (e.g., software component, software module, software package, and/or the like) of which the source file 206 belongs to.
[0083]: “The component criticality score may be based on the importance of the source file 206 or component by considering any relevant factors such as business use case, system availability, cost to develop the component, the requirements of the component, and/or the like. For example, a component or source file 206 may be assigned a component criticality score of 5 (where 5 may indicate a high importance of the component) by a user where that component or source file is part of the user login component of an application.”

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Priyanka into the teachings of Petrescu, Tsoukalas, and Yang to reduce the amount of time required to execute regression tests within a software repository. Such embodiments may allow for training a machine learning model to continuously improve the test selection based on previous test selections and previous test executions. Such embodiments may select tests corresponding to the detected code change to be integrated and allow for selecting the most relevant tests for each detected code change. Such embodiments may enable improved product quality in a continuous integration pipeline where changes to a software repository are integrated frequently. [Priyanka 0060]

As per claim 4, the rejection of claim 3 is incorporated, and furthermore Petrescu does not explicitly disclose: wherein assigning is performed in response to input from an administrative user or is automatically performed based on a software process based on the operational and testing data.
Priyanka discloses: wherein assigning is performed in response to input from an administrative user or is automatically performed based on a software process based on the operational and testing data. [0082]: “In some non-limiting embodiments or aspects, the component criticality score may be received from a user of the test selection system 100, 200 via input through an input component. Additionally, or alternatively, the component criticality score may be received from a database 216 or other storage component storing component criticality scores.”

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Priyanka into the teachings of Petrescu, Tsoukalas, and Yang to reduce the amount of time required to execute regression tests within a software repository. Such embodiments may allow for training a machine learning model to continuously improve the test selection based on previous test selections and previous test executions. Such embodiments may select tests corresponding to the detected code change to be integrated and allow for selecting the most relevant tests for each detected code change. Such embodiments may enable improved product quality in a continuous integration pipeline where changes to a software repository are integrated frequently. [Priyanka 0060]
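For illustration only, the weighting scheme the Office Action attributes to Priyanka (a per-component criticality score supplied either by an administrative user or from a stored mapping, applied as a weight to each detected change) might be sketched as follows. All names here (component_scores, weight_changes, the example components) are hypothetical and not drawn from any cited reference:

```python
# Hypothetical sketch of Priyanka-style component criticality weighting.
# A criticality score (1 = low, 5 = high) is assigned per component,
# either entered by a user or loaded from stored scores (in Priyanka's
# terms, a database or other storage component).

DEFAULT_SCORE = 1  # assumed fallback for files with no mapped component

# Invented example mapping, mirroring Priyanka's login-vs-settings example.
component_scores = {
    "user_login": 5,   # high importance: part of the login component
    "ui_settings": 1,  # low importance: cosmetic settings only
}

def weight_changes(changed_files, file_to_component, scores=component_scores):
    """Assign each detected change a weight equal to the criticality
    score of the component its source file belongs to."""
    weights = {}
    for path in changed_files:
        component = file_to_component.get(path)
        weights[path] = scores.get(component, DEFAULT_SCORE)
    return weights

mapping = {"auth/login.py": "user_login", "ui/theme.py": "ui_settings"}
print(weight_changes(["auth/login.py", "ui/theme.py"], mapping))
# {'auth/login.py': 5, 'ui/theme.py': 1}
```

The score scale and fallback value are assumptions; Priyanka expressly leaves the scoring algorithm open-ended.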
As per claim 9, the rejection of claim 8 is incorporated, and furthermore Petrescu discloses: wherein executing the particular testing scenario produces test results. Col. 9, lines 55-59: “Tests may be monitored (e.g., by the system 100) to detect errors, flaws, unexpected results, performance not meeting service-level agreements (SLAs), and/or other problems surfaced by the testing.”

But Petrescu does not explicitly disclose: further comprising generating, using the testing configuration intelligence model, adjustments to the variance based on the variance.

Priyanka discloses: further comprising generating adjustments to the variance based on the variance. [0013]: “determine a defective score for the at least one test based on historical test data of the at least one test; receive a component criticality score and a defect definition corresponding to the at least one source file; generate a key value corresponding to the at least one test based on the defective score, the component criticality score, and the defect definition;”

[0083]: The component criticality score may be based on a scoring algorithm. The component criticality score may be based on the importance of the source file 206 or component by considering any relevant factors such as business use case, system availability, cost to develop the component, the requirements of the component, and/or the like. For example, a component or source file 206 may be assigned a component criticality score of 5 (where 5 may indicate a high importance of the component) by a user where that component or source file is part of the user login component of an application. By contrast, a component or source file may be assigned a component criticality score of 1 (where 1 may indicate a low importance) by a user where that component or source file is part of the settings component used to change the color of a user interface of an application.
It should be appreciated that the scoring algorithms provided herein are exemplary non-limiting embodiments provided for illustration and that the scoring algorithms may be any feasible scoring algorithm to determine the component criticality score of a component. A scoring algorithm may include any such scale, calculation, formula, or representation of the measure of the component criticality.

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Priyanka into the teachings of Petrescu, Tsoukalas, and Yang to reduce the amount of time required to execute regression tests within a software repository. Such embodiments may allow for training a machine learning model to continuously improve the test selection based on previous test selections and previous test executions. Such embodiments may select tests corresponding to the detected code change to be integrated and allow for selecting the most relevant tests for each detected code change. Such embodiments may enable improved product quality in a continuous integration pipeline where changes to a software repository are integrated frequently. [Priyanka 0060]

Claim 15 is the apparatus claim corresponding to method claim 3 and is rejected under the same rationale set forth in connection with the rejection of claim 3 above. Claim 20 is the non-transitory computer-readable storage media claim corresponding to method claim 9 and is rejected under the same rationale set forth in connection with the rejection of claim 9 above.
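As a rough illustration of the key-value generation Priyanka [0013] describes (a defective score derived from historical test data, combined with a component criticality score and a defect weight to rank candidate tests), one feasible scoring algorithm could look like the sketch below. The combination rule (a simple product) and all names are invented for this example; as the passage above notes, Priyanka permits any feasible scoring algorithm:

```python
# Hypothetical sketch of Priyanka-style key-value test ranking.
# defective_score: how often a test caught a defect historically.
# key_value: product of defective score, component criticality, and
# a defect weight derived from the defect definition (assumed form).

def defective_score(history):
    """Fraction of historical runs (1 = caught a defect) for a test."""
    return sum(history) / len(history) if history else 0.0

def key_value(history, criticality, defect_weight):
    return defective_score(history) * criticality * defect_weight

def select_tests(tests, top_n=2):
    """Rank candidate tests by key value, highest first, and keep top_n."""
    ranked = sorted(
        tests,
        key=lambda t: key_value(t["history"], t["criticality"], t["defect_weight"]),
        reverse=True,
    )
    return [t["name"] for t in ranked[:top_n]]

tests = [
    {"name": "test_login",   "history": [1, 1, 0, 1], "criticality": 5, "defect_weight": 1.0},
    {"name": "test_theme",   "history": [0, 0, 0, 1], "criticality": 1, "defect_weight": 0.5},
    {"name": "test_payment", "history": [1, 0, 1, 1], "criticality": 4, "defect_weight": 1.0},
]
print(select_tests(tests))
# ['test_login', 'test_payment']
```

A real system per Priyanka would feed previous selections and executions back into a machine learning model; this sketch only shows the static ranking step.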
Pertinent art:

US 10334058 B2: Advantageously, live pipeline templates allow an enterprise to ensure that operational best practices for availability, security, testing, performance, deployment, and monitoring are followed in continuous deployment pipelines used to push applications, services, and upgrades into production. In addition, live pipeline templates combine enforcement and validation with tools to bring applications or services into compliance with deployment best practices, keep deployment pipelines for applications or services up to date as operational guidelines or best practices evolve, and help developers set up new services correctly.

US 20200285565 A1: A testing system replays each production activity in a virtual testing environment for testing a virtual process and virtual product.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRAHIM BOURZIK, whose telephone number is (571) 270-7155. The examiner can normally be reached Monday-Friday, 8-4:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Wei Y Mui, can be reached at 571-270-2738. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/BRAHIM BOURZIK/
Examiner, Art Unit 2191

/WEI Y MUI/
Supervisory Patent Examiner, Art Unit 2191

Prosecution Timeline

Aug 14, 2023: Application Filed
May 17, 2025: Non-Final Rejection — §103
Aug 06, 2025: Applicant Interview (Telephonic)
Aug 07, 2025: Examiner Interview Summary
Aug 14, 2025: Response Filed
Oct 24, 2025: Final Rejection — §103
Jan 14, 2026: Examiner Interview Summary
Jan 14, 2026: Applicant Interview (Telephonic)
Jan 22, 2026: Request for Continued Examination
Jan 29, 2026: Response after Non-Final Action
Mar 21, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585459: UPDATING SYSTEM, ELECTRONIC CONTROL UNIT, UPDATING MANAGEMENT DEVICE, AND UPDATING MANAGEMENT METHOD (2y 5m to grant; granted Mar 24, 2026)
Patent 12578931: INTELLIGENT AND EFFICIENT PIPELINE MANAGEMENT (2y 5m to grant; granted Mar 17, 2026)
Patent 12566600: LIMITED USE LINKS FOR DATA ITEM DISTRIBUTION (2y 5m to grant; granted Mar 03, 2026)
Patent 12561228: Optimal Just-In-Time Trace Sizing for Virtual Machines (2y 5m to grant; granted Feb 24, 2026)
Patent 12554625: TESTING CONTINUOUS INTEGRATION AND CONTINUOUS DEPLOYMENT (CI/CD) PIPELINE (2y 5m to grant; granted Feb 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 65%
With Interview: 99% (+45.0%)
Median Time to Grant: 3y 7m
PTA Risk: High
Based on 376 resolved cases by this examiner. Grant probability derived from career allow rate.
