DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The instant application, Application No. 18/377,739, filed on October 6, 2023, presents claims 1-20 for examination.
Examiner Notes
Examiner cites particular columns, paragraphs, figures and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
Claim Objections
Claims 1-20 are objected to because of the following informalities:
With respect to claim 1, line 12 recites “to identity a first set of failure reasons”, which is a typographical error that should recite -- to [[identity]] identify a first set of failure reasons --.
With respect to claims 16 and 19, each recites the same typographical error identified above with respect to claim 1.
With respect to all dependent claims, each inherits the deficiency of its respective base claim (see the objections to claims 1, 16, and 19 above).
With respect to claim 9, line 2 recites “a plurality execution instances”, which is a typographical error that should recite “a plurality of execution instances”.
Claims 10-12 inherit this deficiency.
With respect to claim 12, line 2 recites “a log associated a test failure of the test case”, which is a typographical error that should recite “a log associated with a test failure of the test case”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2-5, 11, 17, and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
With respect to claim 2, lines 11-15 recite, with emphasis added, “displaying on a user interface at least a portion of the test results of the execution instances of the at least a portion of the first plurality of test cases, wherein, for one or more test results associated with at least one failure reason of the second set of failure reasons, the displaying on a user interface comprises displaying an identifier of the respective test case and displaying the respective at least one failure reason.” This is ambiguous because parent claim 1 recites at lines 14-18, with emphasis added, “displaying on a user interface at least a portion of the test results of the execution instances of the at least a portion of the first plurality of test cases, wherein, for one or more test results associated with at least one failure reason of the first set of failure reasons, the displaying on a user interface comprises displaying an identifier of a respective test case and displaying a respective at least one failure reason.” Specifically, the following is unclear: (1) whether “a user interface” as recited in claim 2 is the same as “a user interface” as recited in claim 1; (2) whether “a portion of the test results of the execution instances of the at least a portion of the first plurality of test cases” as recited in claim 2 is the same as “a portion of the test results of the execution instances of the at least a portion of the first plurality of test cases” as recited in claim 1; (3) whether “the displaying on a user interface” as recited in claim 2 refers to “displaying on a user interface”, as previously recited in claim 2, or “displaying on a user interface” and “the displaying on a user interface”, as previously recited in claim 1; and (4) whether “an identifier” as recited in claim 2 is the same as “an identifier” as recited in claim 1.
Furthermore, the last two lines of claim 2 recite “displaying an identifier of the respective test case and displaying the respective at least one failure reason” (emphasis added), which appears to refer to “a respective test case” and “a respective at least one failure reason” as recited in claim 1. However, in claim 1, “an identifier of a respective test case” and “a respective at least one failure reason” are displayed “for one or more test results associated with at least one failure reason of the first set of failure reasons” (emphasis added), whereas in claim 2 “an identifier of the respective test case” and “the respective at least one failure reason” are displayed “for one or more test results associated with at least one failure reason of the second set of failure reasons” (emphasis added). Thus, it is unclear whether “the respective test case” and “the respective at least one failure reason”, as recited in claim 2, are the same as “a respective test case” and “a respective at least one failure reason”, as recited in claim 1.
For the reasons set forth above, the claim is indefinite. For purposes of compact prosecution only, Examiner has interpreted lines 11-15 of claim 2 consistent with Applicant’s specification1 to mean “displaying on the user interface at least the portion of the test results of the execution instances of the at least a portion of the first plurality of test cases, wherein, for one or more test results associated with at least one failure reason of the second set of failure reasons, the displaying on the user interface comprises displaying an identifier of a second respective test case and displaying a second respective at least one failure reason”.
With respect to claims 3-5, each inherits the deficiency of claim 2 above.
With respect to claim 17, lines 11-15 recite, with emphasis added, “displaying on a user interface at least a portion of the test results of the execution instances of the at least a portion of the first plurality of test cases, wherein, for one or more test results associated with at least one failure reason of the second set of failure reasons, the displaying on a user interface comprises displaying an identifier of a respective test case and displaying a respective at least one failure reason.” This is ambiguous because parent claim 16 recites at lines 11-15, with emphasis added, “displaying on a user interface at least a portion of the test results of the execution instances of the at least a portion of the first plurality of test cases, wherein, for one or more test results associated with at least one failure reason of the first set of failure reasons, the displaying on a user interface comprises displaying an identifier of a respective test case and displaying a respective at least one failure reason.” Specifically, the following is unclear: (1) whether “a user interface” as recited in claim 17 is the same as “a user interface” as recited in claim 16; (2) whether “a portion of the test results of the execution instances of the at least a portion of the first plurality of test cases” as recited in claim 17 is the same as “a portion of the test results of the execution instances of the at least a portion of the first plurality of test cases” as previously recited in claim 16; (3) whether “the displaying on a user interface” as recited in claim 17 refers to “displaying on a user interface”, as previously recited in claim 17, or “displaying on a user interface” and “the displaying on a user interface”, as previously recited in claim 16; (4) whether “an identifier of a respective test case” as recited in claim 17 is the same as “an identifier of a respective test case” as recited in claim 16; and (5) whether “a respective at least one failure reason” as recited in claim 17 is the same as “a respective at least one failure reason” as recited in claim 16.
For purposes of compact prosecution only, Examiner has interpreted lines 11-15 of claim 17 consistent with Applicant’s specification2 to mean “displaying on the user interface at least the portion of the test results of the execution instances of the at least a portion of the first plurality of test cases, wherein, for one or more test results associated with at least one failure reason of the second set of failure reasons, the displaying on the user interface comprises displaying an identifier of a second respective test case and displaying a second respective at least one failure reason”.
With respect to claim 20, the claim recites limitations similar to claim 17 and is likewise indefinite; it has been interpreted similarly by Examiner for purposes of compact prosecution only (see the rejection and interpretation of claim 17 above).
With respect to claim 11, lines 1-3 recite “wherein the at least one selection criterion comprises is not based on selecting a log associated with a most recent execution of the test case” (emphasis added). Words appear to be missing from this limitation, which renders the scope of the claim indefinite. For purposes of compact prosecution only, Examiner has interpreted claim 11 consistent with Applicant’s specification3 to mean “wherein the at least one selection criterion comprises a selection criterion that is not based on selecting a log associated with a most recent execution of the test case.”
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 7, 8, 16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Seenivasan (WO 2023277802 A2, hereinafter Seenivasan) in view of Rakhmilevich et al. (US 20190065351 A1, hereinafter Rakhmilevich).
With respect to claim 1, Seenivasan discloses A computing system comprising: at least one memory; one or more hardware processor units coupled to the at least one memory; and one or more computer readable storage media storing computer-executable instructions that, when executed, cause the computing system to perform operations comprising (e.g., Figs. 1 and 9, particularly processing unit 902 and memory 903, along with associated text, e.g., [0025], a server computer is provided comprising a communication interface, a memory and a processing unit configured to perform the method according to one of the embodiments described above; [0047], The testing tool 206 and the report parser 201 may … be distributed over multiple machines.):
receiving a first plurality of test cases executed at a first test management system (e.g., Figs. 1-3 and 7-8 along with associated text, e.g., [0042], the user further runs a testing tool 108 on the CPU 101, for example XCTest. By means of the testing tool 108, the user can run multiple test cases; [0047], a testing tool 206 testing a software application on a certain device.);
using a first connector, connecting to the first test management system (e.g., Figs. 1-3 and 7-8 along with associated text, e.g., [0047] The report parser 201 [first connector] receives test result data 205 (e.g. an XCResult file) which is generated by a testing tool 206 testing a software application on a certain device … The testing tool 206 and the report parser 201 may … be distributed over multiple machines.);
retrieving a first plurality of test logs corresponding to test results of execution instances of at least a portion of the first plurality of test cases (e.g., Figs. 1-3 and 7-8 along with associated text, e.g., [0053], As explained with reference to FIG. 2, the report parser 300 receives test result files 301. The test result files 301 may be the result files for tests of software applications on different devices; [0054] Each test result file 301 may include one or more logs for test cases. From a test result file 301 (e.g. an XCResult file) the report parser 300 extracts logs for the test cases and parses the log in 302.);
parsing the first plurality of test logs to identity a first set of failure reasons, the first plurality of test logs being in a first format (e.g., Figs. 1-3 and 7-8 along with associated text, e.g., [0054], From a test result file 301 (e.g. an XCResult [first format] file) the report parser 300 extracts logs for the test cases and parses the log in 302; [0055], The report parser 300 further reads the log line by line for each test case and to find out the type of error in the application (in case of a failure of the application for the test case) and the operation where the test case encountered failure (to help identification of the possible causes of failures).); and
displaying on a user interface at least a portion of the test results of the execution instances of the at least a portion of the first plurality of test cases (e.g., Fig. 4, particularly the depiction of the Test Case 402 column and the Status 403 column with “FAIL”, along with associated text, e.g., [0061], The report detail screen shows key information for each test case such as class name 401, method name 402, status 403 and duration 404.), wherein, for one or more test results associated with at least one failure reason of the first set of failure reasons, the displaying on a user interface comprises displaying an identifier of a respective test case and displaying a respective at least one failure reason (e.g., Fig. 4, particularly the Test Case 402 column with method names [identifier of a respective test case], along with associated text, e.g., [0060-62], FIG. 4 shows an example of a report detail screen 400 displayed by the RCA FE App 202 … The report detail screen shows key information for each test case such as class name 401, method name 402 [identifier of a respective test case], status 403 and duration 404 … The user may hover with the cursor over the status to cause the RCA FE App 202 to show a failure reason.).
Seenivasan does not appear to disclose the following, which is taught in analogous art, Rakhmilevich: a first definition of a first software test plan comprising (e.g., [0106], a test plan is a logical container for multiple test cases, which can be used to organize testing based on subsystems or other logical parts of the application.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Seenivasan with the invention of Rakhmilevich, such that a test plan defines the test cases, because it “can facilitate the testing,” as suggested by Rakhmilevich (see [0027]).
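Examiner note (illustration only, for the convenience of the applicant): the line-by-line, regular-expression log parsing that Seenivasan attributes to its report parser (see [0054-55] and [0076-77] as cited above and below) can be sketched as follows. This is a minimal, hypothetical sketch; the log format, pattern strings, and function names are the Examiner's stand-ins and are not drawn from Seenivasan, Rakhmilevich, or the claims.

    import re

    # Hypothetical patterns in the spirit of Seenivasan's Regex 1 to Regex 7;
    # XCTest-style logs announce each test case and report failures with an
    # error message, but the exact format of the reference is not reproduced here.
    TEST_START = re.compile(r"Test [Cc]ase '-\[(.+?)\s(.+?)\]' started")
    FAIL_REASON = re.compile(r"error:\s*(?P<reason>.+)$")

    def parse_log(lines):
        """Read a test log line by line; collect (test case, failure reason) pairs."""
        failures = []
        current_case = None
        for line in lines:
            if (m := TEST_START.search(line)) is not None:
                current_case = f"{m.group(1)}.{m.group(2)}"  # class.method identifier
            elif (m := FAIL_REASON.search(line)) is not None and current_case:
                failures.append((current_case, m.group("reason")))
        return failures

On a log containing "Test Case '-[LoginTests testBadPassword]' started" followed by "error: assertion failed", the sketch yields the pair ('LoginTests.testBadPassword', 'assertion failed'), i.e., an identifier of a respective test case together with a respective failure reason.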
With respect to claim 16, Seenivasan discloses A method, implemented in a computing system comprising at least one hardware processor and at least one memory coupled to the at least one hardware processor (e.g., Figs. 1 and 9, particularly processing unit 902 and memory 903, along with associated text, e.g., [0025], a server computer is provided comprising a communication interface, a memory and a processing unit configured to perform the method according to one of the embodiments described above; [0047], The testing tool 206 and the report parser 201 may … be distributed over multiple machines.), the method comprising:
receiving … comprising a first plurality of test cases executed at a first test management system (e.g., Figs. 1-3 and 7-8 along with associated text, e.g., [0042], the user further runs a testing tool 108 on the CPU 101, for example XCTest. By means of the testing tool 108, the user can run multiple test cases; [0047], a testing tool 206 testing a software application on a certain device.);
using a first connector, connecting to the first test management system (e.g., Figs. 1-3 and 7-8 along with associated text, e.g., [0047] The report parser 201 [first connector] receives test result data 205 (e.g. an XCResult file) which is generated by a testing tool 206 testing a software application on a certain device … The testing tool 206 and the report parser 201 may … be distributed over multiple machines.);
retrieving a first plurality of test logs corresponding to test results of execution instances of at least a portion of the first plurality of test cases (e.g., Figs. 1-3 and 7-8 along with associated text, e.g., [0053], As explained with reference to FIG. 2, the report parser 300 receives test result files 301. The test result files 301 may be the result files for tests of software applications on different devices; [0054] Each test result file 301 may include one or more logs for test cases. From a test result file 301 (e.g. an XCResult file) the report parser 300 extracts logs for the test cases and parses the log in 302.);
parsing the first plurality of test logs to identity a first set of failure reasons, the first plurality of test logs being in a first format (e.g., Figs. 1-3 and 7-8 along with associated text, e.g., [0054], From a test result file 301 (e.g. an XCResult [first format] file) the report parser 300 extracts logs for the test cases and parses the log in 302; [0055], The report parser 300 further reads the log line by line for each test case and to find out the type of error in the application (in case of a failure of the application for the test case) and the operation where the test case encountered failure (to help identification of the possible causes of failures).); and
displaying on a user interface at least a portion of the test results of the execution instances of the at least a portion of the first plurality of test cases (e.g., Fig. 4, particularly the depiction of the Test Case 402 column and the Status 403 column with “FAIL”, along with associated text, e.g., [0061], The report detail screen shows key information for each test case such as class name 401, method name 402, status 403 and duration 404.), wherein, for one or more test results associated with at least one failure reason of the first set of failure reasons, the displaying on a user interface comprises displaying an identifier of a respective test case and displaying a respective at least one failure reason (e.g., Fig. 4, particularly the Test Case 402 column with method names [identifier of a respective test case], along with associated text, e.g., [0060-62], FIG. 4 shows an example of a report detail screen 400 displayed by the RCA FE App 202 … The report detail screen shows key information for each test case such as class name 401, method name 402 [identifier of a respective test case], status 403 and duration 404 … The user may hover with the cursor over the status to cause the RCA FE App 202 to show a failure reason.).
Seenivasan does not appear to disclose the following, which is taught in analogous art, Rakhmilevich: a first definition of a first software test plan comprising (e.g., [0106], a test plan is a logical container for multiple test cases, which can be used to organize testing based on subsystems or other logical parts of the application.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Seenivasan with the invention of Rakhmilevich, such that a test plan defines the test cases, because it “can facilitate the testing,” as suggested by Rakhmilevich (see [0027]).
With respect to claim 19, Seenivasan discloses One or more computer-readable storage media (e.g., Figs. 1 and 9, particularly processing unit 902 and memory 903, along with associated text, e.g., [0027], a computer-readable medium is provided comprising program instructions, which, when executed by one or more processors, cause the one or more processors to perform the method according to one of the embodiments described above; [0047], The testing tool 206 and the report parser 201 may … be distributed over multiple machines.) comprising:
computer-executable instructions that, when executed by a computing system comprising at least one hardware processor and at least one memory coupled to the at least one hardware processor, cause the computing system to receive … comprising a first plurality of test cases executed at a first test management system (e.g., Figs. 1-3 and 7-9 along with associated text, e.g., [0027] a computer-readable medium is provided comprising program instructions, which, when executed by one or more processors, cause the one or more processors to perform the method according to one of the embodiments described above; [0042], the user further runs a testing tool 108 on the CPU 101, for example XCTest. By means of the testing tool 108, the user can run multiple test cases; [0047], a testing tool 206 testing a software application on a certain device.);
computer-executable instructions that, when executed by the computing system, cause the computing system to, using a first connector, connect to the first test management system (e.g., Figs. 1-3 and 7-9 along with associated text, e.g., [0047] The report parser 201 [first connector] receives test result data 205 (e.g. an XCResult file) which is generated by a testing tool 206 testing a software application on a certain device … The testing tool 206 and the report parser 201 may … be distributed over multiple machines.);
computer-executable instructions that, when executed by the computing system, cause the computing system to retrieve a first plurality of test logs corresponding to test results of execution instances of at least a portion of the first plurality of test cases (e.g., Figs. 1-3 and 7-9 along with associated text, e.g., [0053], As explained with reference to FIG. 2, the report parser 300 receives test result files 301. The test result files 301 may be the result files for tests of software applications on different devices; [0054] Each test result file 301 may include one or more logs for test cases. From a test result file 301 (e.g. an XCResult file) the report parser 300 extracts logs for the test cases and parses the log in 302.);
computer-executable instructions that, when executed by the computing system, cause the computing system to parse the first plurality of test logs to identity a first set of failure reasons, the first plurality of test logs being in a first format (e.g., Figs. 1-3 and 7-9 along with associated text, e.g., [0054], From a test result file 301 (e.g. an XCResult [first format] file) the report parser 300 extracts logs for the test cases and parses the log in 302; [0055], The report parser 300 further reads the log line by line for each test case and to find out the type of error in the application (in case of a failure of the application for the test case) and the operation where the test case encountered failure (to help identification of the possible causes of failures).); and
computer-executable instructions that, when executed by the computing system, cause the computing system to display on a user interface at least a portion of the test results of the execution instances of the at least a portion of the first plurality of test cases (e.g., Fig. 4, particularly the depiction of the Test Case 402 column and the Status 403 column with “FAIL”, along with associated text, e.g., [0061], The report detail screen shows key information for each test case such as class name 401, method name 402, status 403 and duration 404.), wherein, for one or more test results associated with at least one failure reason of the first set of failure reasons, the displaying on a user interface comprises displaying an identifier of a respective test case and displaying a respective at least one failure reason (e.g., Fig. 4, particularly the Test Case 402 column with method names [identifier of a respective test case], along with associated text, e.g., [0060-62], FIG. 4 shows an example of a report detail screen 400 displayed by the RCA FE App 202 … The report detail screen shows key information for each test case such as class name 401, method name 402 [identifier of a respective test case], status 403 and duration 404 … The user may hover with the cursor over the status to cause the RCA FE App 202 to show a failure reason.).
Seenivasan does not appear to disclose the following, which is taught in analogous art, Rakhmilevich: a first definition of a first software test plan comprising (e.g., [0106], a test plan is a logical container for multiple test cases, which can be used to organize testing based on subsystems or other logical parts of the application.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Seenivasan with the invention of Rakhmilevich, such that a test plan defines the test cases, because it “can facilitate the testing,” as suggested by Rakhmilevich (see [0027]).
With respect to claim 7, Seenivasan also discloses displaying log identifiers on the user interface for at least a portion of the one or more test results for logs comprising test results for respective tests results of the at least a portion of the one or more test results (e.g., Fig. 4 and associated text, e.g., [0065] The report detail screen 400 … has a button 408 to show the log for the test case.).
With respect to claim 8, Seenivasan also discloses receiving user input selecting a displayed log identifier (e.g., Fig. 4 and associated text, e.g., [0065], The report detail screen 400 … has a button 408 to show the log for the test case.); and in response to the receiving user input selecting a displayed log identifier, causing a log corresponding to the displayed log identifier to be displayed (Id.).
Claims 2-5, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Seenivasan in view of Rakhmilevich, as applied to claims 1, 16, and 19 above, and further in view of Yadav et al. (US 20170344467 A1, hereinafter Yadav).
With respect to claims 2, 17, and 20, Seenivasan also discloses receiving a second plurality of test cases executed at a test management system (e.g., Figs. 1-3 and 7-8 along with associated text, e.g., [0042], the user further runs a testing tool 108 on the CPU 101, for example XCTest. By means of the testing tool 108, the user can run multiple test cases; [0047], a testing tool 206 testing a software application on a certain device; [0053] As explained with reference to FIG. 2, the report parser 300 receives test result files 301. The test result files 301 may be the result files for tests of software applications on different devices [first and second plurality of test cases].);
using a connector, connecting to the test management system …;
retrieving a second plurality of test logs corresponding to execution instances of at least a portion of the second plurality of test cases (e.g., Figs. 1-3 and 7-8 along with associated text, e.g., [0053], As explained with reference to FIG. 2, the report parser 300 receives test result files 301. The test result files 301 may be the result files for tests of software applications on different devices; [0054] Each test result file 301 may include one or more logs for test cases.);
parsing the second plurality of test logs to identify a second set of failure reasons, wherein the second plurality of test logs … (e.g., Figs. 1-3 and 7-8 along with associated text, e.g., [0054], From a test result file 301 (e.g. an XCResult file) the report parser 300 extracts logs for the test cases and parses the log in 302; [0055], The report parser 300 further reads the log line by line for each test case and to find out the type of error in the application (in case of a failure of the application for the test case) and the operation where the test case encountered failure (to help identification of the possible causes of failures).); and
displaying on a user interface at least a portion of the test results of the execution instances of the at least a portion of the first plurality of test cases (please note the 35 USC 112(b) rejection and interpretation above; e.g., Fig. 4, particularly the depiction of the Test Case 402 column and the Status 403 column with “FAIL”, along with associated text, e.g., [0061], The report detail screen shows key information for each test case such as class name 401, method name 402, status 403 and duration 404.), wherein, for one or more test results associated with at least one failure reason of the second set of failure reasons, the displaying on a user interface comprises displaying an identifier of the respective test case and displaying the respective at least one failure reason4 (please note the 35 USC 112(b) rejections and interpretations above; e.g., Fig. 4, particularly the Test Case 402 column with method names [identifier], along with associated text, e.g., [0060-62], FIG. 4 shows an example of a report detail screen 400 displayed by the RCA FE App 202 … The report detail screen shows key information for each test case such as class name 401, method name 402, status 403 and duration 404 … The user may hover with the cursor over the status to cause the RCA FE App 202 to show a failure reason.).
Rakhmilevich further teaches a second definition of a second software test plan comprising (e.g., [0106], a test plan is a logical container for multiple test cases, which can be used to organize testing based on subsystems or other logical parts of the application; [0064], Execute test plans; [0071], The discovered units can subsequently be organized into various test plans.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Seenivasan with the invention of Rakhmilevich for the same reason set forth above.
Although Seenivasan discloses a test management system, a connector, and test logs in a format (see above), it does not appear to disclose the following, which is taught in analogous art, Yadav: second (e.g., Figs. 1-2 and 4-5 along with associated text, e.g., [0027], The drivers 116 may include a driver for each of the software testing tools 101 [second test management system]) … second (e.g., Figs. 1-2 and 4-5 along with associated text, e.g., [0027] The drivers 116 [second connector] may include a driver for each of the software testing tools 101. The drivers 116 may extract test results from the software testing tools 101) … second (e.g., Figs. 1-2 and 4-5 along with associated text, e.g., [0027], The drivers 116 may include a driver for each of the software testing tools 101 [second test management system]) … wherein the second connector is different than the first connector (e.g., Figs. 1-2 and 4-5 along with associated text, e.g., [0027] The drivers 116 [second connector] may include a driver for each of the software testing tools 101. The drivers 116 may extract test results from the software testing tools 101) … are in a second format, the second format being different than the first format (e.g., Figs. 1-2 and 4-5 along with associated text, e.g., [0032], The software testing tools 101 generate test results 201 for example from executed test cases. The test results for the software testing tools 101 may be stored in files, and the files may be in different formats for different software testing tools.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the invention of Seenivasan with the invention of Yadav, such that failure reasons are identified from logs that are collected from different test tools that produce logs in different formats, because “different users may use different automated software testing tools,” as suggested by Yadav (see [0013]).
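Examiner note (illustration only): the per-tool drivers of Yadav (drivers 116, one per software testing tool, each extracting results from that tool's file format) can be sketched as follows. The tool names and both formats are hypothetical stand-ins, not Yadav's actual driver code.

    import json

    def parse_plaintext_report(text):
        # First, plain-text format: one result per line; keep the failing lines.
        return [line for line in text.splitlines() if "FAIL" in line]

    def parse_json_report(text):
        # Second, different format: a JSON array of result records.
        return [r["reason"] for r in json.loads(text) if r.get("status") == "FAIL"]

    # One connector (driver) per test management system, keyed by tool name.
    CONNECTORS = {
        "tool_a": parse_plaintext_report,
        "tool_b": parse_json_report,
    }

    def extract_failures(tool, raw_log):
        return CONNECTORS[tool](raw_log)

The design point is that the second connector differs from the first because the second format differs from the first, which is the mapping applied to claims 2, 17, and 20 above.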
With respect to claim 3, Seenivasan also discloses wherein parsing the first plurality of test logs uses a first set of one or more tokens (e.g., Figs. 7 and 8 along with associated text, e.g., [0076-77], When the report parser 300 has found a log, it processes the log line-by-line, i.e. starts an (outer) loop over the lines in 703. In each iteration of the (outer) loop, it uses regular expressions in the following manner to process a current line. In the present example, it uses the following regular expressions, referred to as Regex 1 to Regex 7 in the following. Regular Expressions: 1. test_start_regex = r'Test [Cc]ase \'\-\[(.+?)\s(.+?)\]\' started'.) … tokens in the first set of one or more tokens (Id.) and Yadav further teaches and parsing the second plurality of test logs uses a second set of one or more tokens, at least a portion of the second set of one or more tokens being different than … (e.g., Figs. 1-2 and 4-5 along with associated text, e.g., [0038], The driver determines the file format for the test result file and parses the file according to the file format to identify the test case results for extraction … a regex operation may be executed on the file to extract test results for a particular test case or to extract test results for multiple test cases.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the invention of Seenivasan with the invention of Yadav for the same reason set forth above.
With respect to claim 4, Seenivasan also discloses wherein the first set of one or more tokens and … are used to identify test failure of a test associated with a test log (e.g., Figs. 7 and 8 along with associated text, e.g., [0094] If the report parser 300 finds a match of Regex 3, it considers the test case as failed) and Yadav further teaches the second set of one or more tokens (e.g., Figs. 1-2 and 4-5 along with associated text, e.g., [0038], The driver determines the file format for the test result file and parses the file according to the file format to identify the test case results for extraction … a regex operation may be executed on the file to extract test results for a particular test case or to extract test results for multiple test cases. The extracted test results for each test case, which may include pass, fail, whether a test case was executed and/or other information.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the invention of Seenivasan with the invention of Yadav for the same reason set forth above.
With respect to claim 5, Seenivasan also discloses wherein the first set of one or more tokens and … are used to identify a test failure reason of a test associated with a test log (e.g., Figs. 1-3 and 7-8 along with associated text, e.g., [0055], The report parser 300 further reads the log line by line for each test case and to find out the type of error in the application (in case of a failure of the application for the test case) and the operation where the test case encountered failure (to help identification of the possible causes of failures). The report parser 300 does this in 304 using a regex mechanism, i.e. finding matches of regular expressions in the log; [0062], The user may hover with the cursor over the status to cause the RCA FE App 202 to show a failure reason.) and Yadav further teaches the second set of one or more tokens (e.g., Figs. 1-2 and 4-5 along with associated text, e.g., [0038], The driver determines the file format for the test result file and parses the file according to the file format to identify the test case results for extraction … a regex operation may be executed on the file to extract test results for a particular test case or to extract test results for multiple test cases. The extracted test results for each test case, which may include pass, fail, whether a test case was executed and/or other information.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the invention of Seenivasan with the invention of Yadav for the same reason set forth above.
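Examiner note (illustration only): distinct token sets per log format, used both to flag a test failure (claim 4) and to extract a test failure reason (claim 5), might look like the following hypothetical sketch; neither token set is taken from Seenivasan or Yadav.

    import re

    # First and second sets of tokens (regular expressions); at least a portion
    # of the second set differs from the first because the formats differ.
    TOKENS_FIRST_FORMAT = {
        "failure": re.compile(r"Test [Cc]ase .+ failed"),
        "reason": re.compile(r"error:\s*(.+)$"),
    }
    TOKENS_SECOND_FORMAT = {
        "failure": re.compile(r"^FAILED\b"),
        "reason": re.compile(r"^FAILED\s+\S+\s*-\s*(.+)$"),
    }

    def failure_reason(line, tokens):
        """Return the failure reason if the line matches the failure token, else None."""
        if tokens["failure"].search(line):
            m = tokens["reason"].search(line)
            return m.group(1) if m else "unspecified failure"
        return None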
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Seenivasan in view of Rakhmilevich, as applied to claim 1 above, and further in view of Desai et al. (US 8639983 B1, hereinafter Desai).
With respect to claim 6, Seenivasan does not appear to disclose the following, which is taught in analogous art, Desai: receiving through the user interface a request to reexecute a test associated with a test case of the first plurality of test cases (e.g., Figs. 1-2 and 4 along with associated text, e.g., col. 7:8-11, The framework interface 212 may comprise … a graphical user interface (e.g., a command line tool); col. 8:10-16, the meta-testing framework may allow users to specify a custom retry strategy for failed tests. For instance, the framework may allow the users to instruct the framework to rerun a certain set of test cases that fail a certain number of times; col. 8:60-65, UI 302 includes a link 320 ("Advanced Option") that, when selected, may render another UI that allows the user 304 to make further selections regarding the custom run. For instance, these advanced options may allow the user 304 to … specify a retry strategy.); and in response to the receiving the request, causing the test to be reexecuted (Id.; see also col. 8:67-col. 9:2, Finally, the UI 302 includes an icon 322 ("Submit") that, when selected, sends the request to create and execute the custom run to the central testing platform 104; col. 10:1-2, At operation 410, the central testing platform re-executes failed tests in accordance with the retry strategy.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the invention of Seenivasan with the invention of Desai, such that a UI allows users to rerun tests, because this would help ensure that test results are accurate.
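Examiner note (illustration only): the retry behavior described in Desai (re-executing failed tests in accordance with a user-specified retry strategy) can be sketched as follows; run_test is a hypothetical callable returning True on a pass and is not an API of Desai.

    def rerun_failed(test_cases, run_test, max_retries=2):
        """Re-execute each failing test up to max_retries times (a retry strategy)."""
        results = {}
        for case in test_cases:
            passed = run_test(case)
            attempts = 0
            while not passed and attempts < max_retries:
                attempts += 1            # rerun in response to the failure
                passed = run_test(case)
            results[case] = passed
        return results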
Claims 9-10 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Seenivasan in view of Rakhmilevich, as applied to claim 1 above, and further in view of Colcord et al. (US 8185877 B1, hereinafter Colcord).
With respect to claim 9, Seenivasan also discloses wherein the first plurality of test logs comprises multiple logs for a plurality execution instances of … of the first plurality of test cases (e.g., Figs. 1-3 and 7-8 along with associated text, e.g., [0053], The test result files 301 may be the result files for tests of software applications on different devices; [0054] Each test result file 301 may include one or more logs for test cases.), the operations further comprising: with the first connector, selecting a log of the multiple logs (e.g., Figs. 1-3 and 7-8 along with associated text, e.g., [0054], the report parser 300 … parses the log in 302; [0055], The report parser 300 further reads the log line by line for each test case and to find out the type of error in the application (in case of a failure of the application for the test case) and the operation where the test case encountered failure (to help identification of the possible causes of failures).).
Seenivasan does not appear to disclose the following, which is taught in analogous art, Colcord: a test case (e.g., Figs. 1-4c and associated text, e.g., col. 16:49-54, Baseline and regression results may comprise the logging of steps performed with test cases included. A test case may comprise, for example, the capturing the text of a .pdf document (e.g., WebCaptureText). During the baseline or regression run of the test, the .pdf test case is executed and the text is captured.) … according to at least one selection criterion (Id.; col. 16:61-17:3, The baseline results … may be captured … and stored for future use. Then, some time later, regression results may be generated by the scanning the web page and recording all textual presentations and objects associated with the test page. The regression results …may be captured … and stored for future use. The comparison module may then compare the regressions results with the baseline results and determine whether any has failed.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the invention of Seenivasan with the invention of Colcord, such that runs of a test case are compared, because “a user may better monitor applications to more effectively identify flaws or failure in content or processing,” as suggested by Colcord (see col. 3:46-49).
With respect to claim 10, Colcord further teaches wherein the at least one selection criterion comprises selecting a log associated with a most recent execution of the test case (e.g., Figs. 1-4c and associated text, e.g., col. 16:49-55, Baseline and regression results may comprise the logging of steps performed with test cases included. A test case may comprise, for example, the capturing the text of a .pdf document (e.g., WebCaptureText). During the baseline or regression run of the test, the .pdf test case is executed and the text is captured. During comparison, the test case in the regression may be compared line-by-line to the baseline; col. 21:50-55, the figure shows the comparison of regression results sheet to the baseline sheet, thus giving the user/tester a "differences" analysis of the baseline to the latest results sheet [most recent execution of the test case].).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the invention of Seenivasan with the invention of Colcord for the same reason set forth above.
With respect to claim 13, Seenivasan does not appear to disclose the following, which is taught in analogous art, Colcord: through the user interface, receiving a selection of a task (e.g., Figs. 1-2a, 6-8 along with associated text, e.g., col. 12:58-62, Test control module 105 may include a … interface (not shown) that may comprise the graphical user interface by which a tester communicates with test control module 105 and testing tool 132; col. 20:2-4, At step 805, test control module 105 may receive a signal from a user to initiate a test of an application.); and through the user interface, receiving a selection of a test plan of a plurality of test plans associated with the task to provide a selected test plan (e.g., Figs. 1-2a, 6-8, and 11-12 along with associated text, e.g., col. 20:6-8, At step 810, test control module 105 may receive a signal from the user identifying a test plan or script to use in conducting the test; col. 20:21-24, At step 820, test control module 105 may initiate a test of the application according to the test plan or script.), wherein the first plurality of test cases are associated with the selected test plan (e.g., col. 3:51-52, a test plan performs a test case action; col. 21:18-21, interface 110 may include various particulars of the test plan … such as … total test cases).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the invention of Seenivasan with the invention of Colcord, such that test plans are selected to test an application, because “current systems and methods do not enable a tester to rapidly and dynamically define test plans … which can be used to dynamically capture results that likewise may be stored for subsequent access or use” and it can overcome this drawback, as suggested by Colcord (see col. 2:8-13 and col. 2:27-29).
Claims 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Seenivasan in view of Rakhmilevich and Colcord, as applied to claim 9 above, and further in view of Ha et al. (US 9311220 B1, hereinafter Ha).
With respect to claim 11, Seenivasan does not appear to disclose the following, which is taught in analogous art, Ha: wherein the at least one selection criterion comprises is not based on selecting a log associated with a most recent execution of the test case (please note the 35 USC 112(b) rejection and interpretation above; e.g., Fig. 1 and associated text, e.g., col. 1:48-54, running the test, enabling call trace collection to record call traces from the test until at least one execution has succeeded and one execution has failed; for each trace in the call trace collection, determining whether the trace was part of a passing test or a failing test and labeling the trace according to the determination; col. 3:6-8, For each execution of the test, the corresponding trace log file is labeled with either a “PASS” or “FAIL” to reflect the execution status; col. 1:55-56, constructing a dynamic call tree for the failing traces [selection criterion that is not based on selecting a log associated with a most recent execution of the test case]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the invention of Seenivasan with the invention of Ha, such that a test log for a failing execution of the test is selected in order to generate a call tree for the failing trace, because it can “help developers effectively and efficiently determine the root cause of non-deterministic tests,” as suggested by Ha (see col. 1:31-33).
With respect to claim 12, Seenivasan does not appear to disclose the following, which is taught in analogous art, Ha: wherein the at least one selection criterion comprises selecting a log associated a test failure of the test case (e.g., Fig. 1 and associated text, e.g., col. 1:48-54, running the test, enabling call trace collection to record call traces from the test until at least one execution has succeeded and one execution has failed; for each trace in the call trace collection, determining whether the trace was part of a passing test or a failing test and labeling the trace according to the determination; col. 3:6-8, For each execution of the test, the corresponding trace log file is labeled with either a “PASS” or “FAIL” to reflect the execution status; col. 1:55-56, constructing a dynamic call tree for the failing traces [selecting a log associated a test failure of the test case]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the invention of Seenivasan with the invention of Ha, such that a test log for a failing execution of the test is selected in order to generate a call tree for the failing trace, because it can “help developers effectively and efficiently determine the root cause of non-deterministic tests,” as suggested by Ha (see col. 1:31-33).
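Examiner note (illustration only): the log-selection criteria addressed in claims 9-12 above, i.e., selecting among multiple logs for execution instances of a test case either the most recent run (claim 10) or, per the interpretation of claims 11-12, a failing run that is not necessarily the most recent, might be sketched as follows with hypothetical log records.

    from dataclasses import dataclass

    @dataclass
    class LogRecord:
        timestamp: float  # when this execution instance ran
        passed: bool
        text: str

    def select_most_recent(logs):
        # Criterion of claim 10: the log of the most recent execution.
        return max(logs, key=lambda log: log.timestamp)

    def select_latest_failure(logs):
        # Criterion of claims 11-12 (as interpreted): a log of a failing
        # execution, not selected on recency of execution alone.
        failures = [log for log in logs if not log.passed]
        return max(failures, key=lambda log: log.timestamp) if failures else None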
Claims 14-15 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Seenivasan in view of Rakhmilevich, as applied to claims 1 and 16 above, and further in view of Markande et al. (US 20140026122 A1, hereinafter Markande).
With respect to claim 14, Seenivasan also discloses wherein the first connector comprises … used to access the first plurality of test logs (e.g., Figs. 1-3 and 7-8 along with associated text, e.g., [0053], As explained with reference to FIG. 2, the report parser 300 receives test result files 301. The test result files 301 may be the result files for tests of software applications on different devices; [0054] Each test result file 301 may include one or more logs for test cases. From a test result file 301 (e.g. an XCResult file) the report parser 300 extracts logs for the test cases and parses the log in 302.).
Seenivasan does not appear to disclose the following, which is taught in analogous art, Markande: access credentials (e.g., Figs. 1-3 and associated text, e.g., [0052-53], an admin logs into a cloud-based test system with their user credentials … At 370, the admin downloads and analyzes the test results.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the invention of Seenivasan with the invention of Markande, such that access credentials are used to access test logs, because it would improve security and confidentiality by ensuring that only authorized access is permitted.
With respect to claim 15, Markande further teaches wherein the access credentials correspond to a superuser of the first test management system (e.g., Figs. 1-3 and associated text, e.g., [0052-53], an admin logs into a cloud-based test system with their user credentials … At 370, the admin downloads and analyzes the test results; [0036] Admins 180 have full access to the system 100.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the invention of Seenivasan with the invention of Markande for the same reason set forth above.
With respect to claim 18, Seenivasan also discloses wherein the first connector comprises … used to access the first plurality of test logs (e.g., Figs. 1-3 and 7-8 along with associated text, e.g., [0053], As explained with reference to FIG. 2, the report parser 300 receives test result files 301. The test result files 301 may be the result files for tests of software applications on different devices; [0054] Each test result file 301 may include one or more logs for test cases. From a test result file 301 (e.g. an XCResult file) the report parser 300 extracts logs for the test cases and parses the log in 302.).
Seenivasan does not appear to disclose the following, which is taught in analogous art, Markande: access credentials (e.g., Figs. 1-3 and associated text, e.g., [0052-53], an admin logs into a cloud-based test system with their user credentials … At 370, the admin downloads and analyzes the test results.) … the access credentials corresponding to a superuser of the first test management system (e.g., Figs. 1-3 and associated text, e.g., [0052-53], an admin logs into a cloud-based test system with their user credentials … At 370, the admin downloads and analyzes the test results; [0036] Admins 180 have full access to the system 100.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the invention of Seenivasan with the invention of Markande, such that access credentials are used to access test logs, because it would improve security and confidentiality by ensuring that only authorized access is permitted.
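Examiner note (illustration only): a connector presenting access credentials (e.g., those of a superuser account) when retrieving test logs, as in the proposed combination with Markande, might be sketched as follows. The endpoint path and parameter names are hypothetical; only the standard basic-authentication pattern of the requests library is assumed.

    import requests

    def fetch_logs(base_url, username, password):
        """Retrieve test logs; the server permits only authorized (credentialed) access."""
        resp = requests.get(f"{base_url}/logs", auth=(username, password), timeout=30)
        resp.raise_for_status()  # surface 401/403 if the credentials are rejected
        return resp.json()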
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Specifically, Zawawy et al., "Log filtering and interpretation for root cause analysis" teaches a framework that facilitates the analysis of log data.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEPHEN DAVID BERMAN whose telephone number is (571)272-7206. The examiner can normally be reached on M-F, 9-6 Eastern.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hyung S. Sough can be reached on 571-272-6799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STEPHEN D BERMAN/Examiner, Art Unit 2192
1 See Applicant’s written description and figures, e.g., Figs. 4 and 8 along with associated text, e.g., paragraphs [0050-51], [0054], and [0108-109].
2 See Applicant’s written description and figures, e.g., Figs. 4 and 8 along with associated text, e.g., paragraphs [0050-51], [0054], and [0108-109].
3 See paragraph [0037].
4 Claims 17 and 20 recite “displaying an identifier of a respective test case and displaying a respective at least one failure reason” rather than “displaying an identifier of the respective test case and displaying the respective at least one failure reason”, as recited in claim 2. However, both of these limitations are indefinite and have been interpreted similarly for purposes of compact prosecution (see the Claim Rejections - 35 USC § 112 section above).