DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to applicant’s election, filed on 01/26/2026, in reply to the restriction requirement.
Applicant elected claims 1-6 and 11-16 for examination.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-6 and 11-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, a mathematical relationship, or an abstract idea) without significantly more.
Statutory Category: Claim 1 recites a method, comprising: executing a quality assurance test on each software part of the plurality of software parts to receive result data; processing the result data for each software part to extract metrics; receiving a quality gate configuration for the software part type, wherein the quality gate configuration comprises at least one metric gate; comparing the metrics for each software part based on the at least one metric gate; and based on the comparison and the quality gate configuration, outputting a result of the build and the metrics. The claim therefore falls within the statutory category of a process.
Step 2A – Prong 1: Claim 1 recites: processing the result data for each software part to extract metrics (a user can mentally analyze result data to extract metrics); and comparing the metrics for each software part based on the at least one metric gate (a mental step of comparison). That is, nothing in the claim elements precludes the steps from practically being performed mentally or using pen and paper. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the mental process grouping of abstract ideas. Accordingly, the claim recites an abstract idea under Step 2A, Prong 1.
Step 2A – Prong 2: This judicial exception is not integrated into a practical application. In particular, claim 1 recites additional elements such as “executing a quality assurance test on each software part of the plurality of software parts to receive result data”. The examiner notes that, under the broadest reasonable interpretation, this element amounts to merely executing a test on software components, which does not impose any meaningful limits on practicing the mental process (an insignificant additional element and extra-solution activity, as evidenced in Wan, paragraphs [0024][0025][0032][0018]: executing a quality assurance test on an application comprising a plurality of components). Accordingly, this additional element does not integrate the abstract idea into a practical application. Under Step 2B, it amounts to insignificant extra-solution activity and does not provide an inventive concept.
This judicial exception is not integrated into a practical application. In particular, claim 1 recites additional elements such as “receiving a quality gate configuration for the software part type, wherein the quality gate configuration comprises at least one metric gate”. The examiner notes that, under the broadest reasonable interpretation, this element amounts to mere data gathering for the mental process, which does not impose any meaningful limits on practicing the mental process (an insignificant additional element and extra-solution activity). Accordingly, this additional element does not integrate the abstract idea into a practical application. Under Step 2B, it amounts to insignificant extra-solution activity and does not provide an inventive concept.
This judicial exception is not integrated into a practical application. In particular, claim 1 recites additional elements such as “based on the comparison and the quality gate configuration, outputting a result of the build and the metrics”. The examiner notes that, under the broadest reasonable interpretation, this element amounts to merely outputting a comparison result, which does not impose any meaningful limits on practicing the mental process (an insignificant additional element and extra-solution activity, as evidenced in Wan, paragraphs [0106][0107][0025][0027]: determining a plurality of component scores comprises comparing the component performance metrics with respective baseline values (metric gates); outputting whether the performance test passes or fails). Accordingly, this additional element does not integrate the abstract idea into a practical application. Under Step 2B, it amounts to insignificant extra-solution activity and does not provide an inventive concept.
Dependent claims 2-6 do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, dependent claims 2-6 recite further mental-process steps (such as comparisons) and also recite extra-solution activities (such as outputting comparison results and visualizing test results), which do not impose any meaningful limits on practicing the mental process (insignificant additional elements). Therefore, these claims are not patent eligible.
Independent claim 11 (a system with a memory configured to perform the method of claim 1) and its dependent claims 12-16 are rejected under a similar rationale as claims 1-6. The additional elements in these claims amount to no more than generic software/hardware components with instructions to apply the exception, which cannot integrate a judicial exception into a practical application or provide an inventive concept.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4 and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Wan et al. (US PGPUB 2023/0086361), hereinafter Wan, in view of Reyes et al. (US PGPUB 2018/0121322), hereinafter Reyes.
Per claim 1, Wan discloses a method for verifying quality assurance on a plurality of software parts, the method comprising: executing a quality assurance test on each software part of the plurality of software parts to receive result data; processing the result data for each software part to extract metrics (paragraphs [0024][0025][0032][0018]; executing a quality assurance test on an application which comprises a plurality of components (software parts) to receive test results (data logs); extracting performance metrics from the test results); receiving a quality gate configuration, wherein the quality gate configuration comprises at least one metric gate; comparing the metrics for each software part based on the at least one metric gate; and based on the comparison and the quality gate configuration, outputting a result of the build and the metrics (Fig. 2; paragraphs [0106][0107][0025][0027]; receiving predefined threshold values (metric gates) in a configuration file (quality gate configuration) corresponding to the application; determining a plurality of component scores comprises comparing the component performance metrics with respective baseline values (metric gates); outputting whether the performance test passes or fails).
Wan does not explicitly teach the plurality of software parts having a software part type, or receiving a quality gate configuration for the software part type. However, Reyes suggests the above (paragraphs [0005][0044][0067]; testing applications, each application having an application type (software part type); comparing application test performance to threshold values; receiving threshold values (quality gate configuration), which for each respective performance indicator can differ based on the type of application program). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Wan and Reyes such that the software parts have a software part type and the quality gate configuration is received for the software part type for comparison purposes; this gives users more flexibility to set different metric gates for different software part types.
Per claim 2, Wan further discloses wherein if the comparison of the metrics with the metric gate has a negative result, the result of the build is an indication that the build failed (paragraphs [0016][0027]; based on the comparison, output if performance test passes or fails; whenever there is a test failure, the CI/CD pipeline can pause and wait for a fix before resuming running; test result can be kept in the CI/CD pipeline as a status indicator (e.g., a red flag)).
Per claim 3, Wan further discloses wherein if the comparison of the metrics with the metric gate has a positive result, the result of the build is an indication that the build succeeded (paragraphs [0016][0027]; based on the comparison, output if performance test passes or fails; successfully passes the software test, the status indicator can be updated to a green flag).
Per claim 4, Wan further discloses wherein the quality gate configuration comprises: a stage name identifier which may be used to identify the development stage where a quality gate is being enforced; and an enforcement level parameter which may be output as the result of the build if the metric gate has a negative result (Fig. 1; paragraphs [0011][0016][0027]; the CI/CD pipeline comprises of a plurality of stages, the configuration for the software test corresponds to the testing stage; successfully passes the software test allows the software to move to the next stage; whenever there is a test failure, the CI/CD pipeline can pause and wait for a fix before resuming running; test result can be kept in the CI/CD pipeline as a status indicator (a red flag/enforcement level parameter)).
Claims 11-14 recite similar limitations as claims 1-4. Therefore, claims 11-14 are rejected under similar rationales as claims 1-4.
Claims 5-6 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Wan, in view of Reyes, and further in view of Gardner et al. (US PGPUB 2021/0089438), hereinafter Gardner.
Per claim 5, Wan further discloses wherein the at least one metric gate comprises: a metric name identifier which may be used to identify the type of metric being compared, and a comparison type parameter which indicates how the values of the metrics should be compared with the threshold parameter (paragraphs [0106][0107]; determining a plurality of component scores comprises comparing the component performance metrics with respective baseline values (metric gates), i.e., which metric is compared to which baseline value). Wan does not explicitly teach a threshold parameter comprising a warning threshold value and an error threshold value, wherein if the warning threshold value is met by the comparison, the result of the build includes specifying a warning, and wherein if the error threshold is met by the comparison, the result of the build includes specifying an error, and wherein if the result of the build is neither a warning nor an error, the result of the build instead includes specifying a success. However, Gardner suggests the above (paragraphs [0069]-[0071][0075][0108][0139]; a performance monitoring module compares a monitored performance metric to a performance threshold; if the monitored performance metric deviates by 25%, a warning is issued, and if it deviates by 200%, an error is issued; a success is defined as when the deviation is below 20%). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Wan, Reyes, and Gardner to have separate definitions for test success, test warning, and test error when comparing test performance to threshold values; this gives a user more detailed information about a performance test.
Per claim 6, Gardner further suggests wherein outputting the result of the build and the metrics further comprises: rendering a visualization of the result of the build, wherein if the result includes specifying a warning, a first style is rendered; or if the result includes specifying an error, a second style is rendered; or if the result instead includes specifying a success, a third style is rendered, wherein the first style, the second style, and the third style are different from each other (paragraphs [0069]-[0071][0075][0108]; if a performance metric is determined to be below a warning threshold but above an error threshold, it may be presented with a warning color (e.g., yellow) or shade that is different from a healthy color (e.g., green) or shade (or absence of shade) and different from an error color (e.g., red) or shade).
Claims 15-16 are rejected under similar rationales as claims 5-6.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HANG PAN whose telephone number is (571)270-7667. The examiner can normally be reached 9 AM to 5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chat Do can be reached at 571-272-3721. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HANG PAN/Primary Examiner, Art Unit 2193