Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 have been examined.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites the limitation “test script” on line 5 of the claim, which renders the claim indefinite because it is unclear whether the test script on line 5 refers to the test script recited on line 2 of the claim or to a different test script. Therefore, the limitation “test script” on line 5 of the claim is interpreted as “the test script”.
Claims 2-10 are rejected for dependency upon rejected base claim 1 above.
Claim 1 recites the limitation “test cases” on line 10 of the claim, which renders the claim indefinite because it is unclear whether the test cases on line 10 refer to the test cases recited on line 3 of the claim or to different test cases. Therefore, the limitation “test cases” on line 10 of the claim is interpreted as “the test cases”.
Claim 1 recites the limitation “test cases” on line 12 of the claim, which renders the claim indefinite because it is unclear whether the test cases on line 12 refer to the test cases recited on line 3 of the claim or to different test cases. Therefore, the limitation “test cases” on line 12 of the claim is interpreted as “the test cases”.
Claim 11 recites the limitation “test script” on line 9 of the claim, which renders the claim indefinite because it is unclear whether the test script on line 9 refers to the test script recited on line 5 of the claim or to a different test script. Therefore, the limitation “test script” on line 9 of the claim is interpreted as “the test script”.
Claims 12-19 are rejected for dependency upon rejected base claim 11 above.
Claim 11 recites the limitation “test cases” on lines 14-15 of the claim, which renders the claim indefinite because it is unclear whether the test cases on lines 14-15 refer to the test cases recited on line 6 of the claim or to different test cases. Therefore, the limitation “test cases” on lines 14-15 of the claim is interpreted as “the test cases”.
Claim 11 recites the limitation “test cases” on line 17 of the claim, which renders the claim indefinite because it is unclear whether the test cases on line 17 refer to the test cases recited on line 6 of the claim or to different test cases. Therefore, the limitation “test cases” on line 17 of the claim is interpreted as “the test cases”.
Claim 20 recites the limitation “test script” on line 5 of the claim, which renders the claim indefinite because it is unclear whether the test script on line 5 refers to the test script recited on line 2 of the claim or to a different test script. Therefore, the limitation “test script” on line 5 of the claim is interpreted as “the test script”.
Claim 20 recites the limitation “test cases” on line 10 of the claim, which renders the claim indefinite because it is unclear whether the test cases on line 10 refer to the test cases recited on line 3 of the claim or to different test cases. Therefore, the limitation “test cases” on line 10 of the claim is interpreted as “the test cases”.
Claim 20 recites the limitation “test cases” on line 12 of the claim, which renders the claim indefinite because it is unclear whether the test cases on line 12 refer to the test cases recited on line 3 of the claim or to different test cases. Therefore, the limitation “test cases” on line 12 of the claim is interpreted as “the test cases”.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1 and 5-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tangirala (US 2021/0232495).
Per Claim 1:
Tangirala teaches:
- receiving a user description of a test script, the user description comprising one or more test cases defining specific tests to be executed; processing the user description using a description processing component configured to: identify elements within the user description relevant to test script and test case generation, generate a structured representation of the user description based on the identified elements; utilizing the structured representation to generate code for the test script; providing the user with an option to review and update the generated code and test cases ([0036] At block 304, if the processed test case is not generated, the system 102 may receive a test file. The test file may be received in a predefined format from a Test Data Management (TDM) system such as a Quality Center (QC), an Application Life Cycle Management (ALM) and the like. The test file may comprise manual text written in plain English language by a user. The test file may be a received as an XLSX, a DOCX or a PDF document. The test file may comprise a test case. The predefined format for the test case may comprise a step number, one or more test steps, test data, and one or more expected results associated with an execution of the test case. Further, each test step from the one or more test steps may indicate an action to be performed for executing the test case. [0037] At block 306, a processing on the test case may be performed using a Natural Language Processing (NLP) technique. The NLP technique may utilize one or more algorithms such as a lemmatization, a Parts of Speech (POS) tagging, a Dependency Parser, a Named Entity Recognition (NER) for the processing. The processing may comprise generation of one or more tokens by classifying text associated with the one or more test steps. Further, the NLP technique may generate one or more keywords from the text associated with the one or more test steps. 
[0038] At block 308, the one or more test steps may be analysed using a Machine Learning technique to generate an output template. The output template may comprise one or more columns. The one or more columns may comprise a test case ID, a test step ID, one or more controls, a control text, an event, and data associated with each test step. The test case ID may indicate a unique ID of the test case. Further, the test step ID may indicate a unique ID of the test step. The one or more controls may be web controls such as button, link, and the like associated with each test step. The one or more controls may enhance user interaction. Further, the control text may indicate one or more actions associated with the one or more controls. The event may indicate one or more events associated with the test case to be implemented. The data may indicate data associated with each test step.)
- generating test data suitable for executing the generated test script; executing the generated test script and test cases using the test data; collecting test results associated with the execution; and presenting the test results to the user ([0041] At block 316, an excel sheet comprising one or more processed test case details of the output template may be generated by the Machine Learning technique. The excel sheet may be generated in a format understandable by an Automation Engine. [0042] At block 318, an Automation Under Test (AUT) may be scanned with the excel sheet to obtain one or more dynamic element identifiers. Further, the excel sheet may be updated with the one or more dynamic element identifiers. [0043] At block 320, the test case may be executed step by step by the Automation Engine. The Automation Engine may execute the test case based on the one or more controls associated with the test case in the output template. The one or more controls may be dynamically identified from the output template. Further, a current execution result associated with the test case based on the execution of the test case may be captured. [0044] At block 322, a step wise action may be performed to complete the test execution along with evaluating a result of the execution when necessary. Along with execution, evidence may also be collected and associated with each Test Step. [0045] At block 324, a summary report may be generated. The current execution result may be compared with the one or more expected results to generate an execution data. The summary report may be an overall report of the execution data. The execution data may indicate one or more executed test cases, one or more succeeded test cases, and one or more failed test cases. In one aspect, the system 102 may generate a succeeded test case when the current execution result may be same as the one or more expected results. 
In another aspect, the system 102 may generate a failed test case when the current execution result may be different from the one or more expected results. Further, a detailed report may be generated. The detailed report may be configured to generate one or more details of the execution of each test step from the one or more test steps. The detailed report may show a comparison of the one or more expected results with the current execution result. [0046] At block 326, a historical dashboard may be configured to display the summary report and the detailed report.).
Per Claim 5:
The rejection of claim 1 is incorporated, and Tangirala further teaches wherein the code generation step comprises employing pseudocode generation as an intermediate step (par. 0041-0042).
Per Claim 6:
The rejection of claim 1 is incorporated, and Tangirala further teaches performing source control management on the generated code and test results, the source control management comprising: preparing the generated code and test results for storage, and storing the prepared code and test results within a source control system (par. 0024-0026).
Per Claim 7:
The rejection of claim 6 is incorporated, and Tangirala further teaches wherein the preparing step for source control management comprises adding comments to the generated code and test results (par. 0045).
Per Claim 8:
The rejection of claim 1 is incorporated, and Tangirala further teaches wherein the user description is provided in a format selected from the group consisting of plain text, voice commands, and a graphical user interface selection (par. 0025-0026 and 0041-0042).
Per Claim 9:
The rejection of claim 1 is incorporated, and Tangirala further teaches wherein the description processing component further comprises a natural language processing module configured to identify the elements within the user description (par. 0037).
Per Claim 10:
The rejection of claim 9 is incorporated, and Tangirala further teaches wherein the natural language processing module is further configured to identify keywords and phrases associated with testing concepts (par. 0037).
Per Claim 11:
Claim 11 is the system version of the method of claim 1 discussed above, and all of its limitations have been addressed in the portions of the reference cited above. Accordingly, this claim is likewise anticipated by Tangirala.
Per Claim 12:
The rejection of claim 11 is incorporated, and Tangirala further teaches wherein the structured representation comprises a format selected from the group consisting of a tree structure, a flow chart, and a state machine diagram (par. 0037).
Per Claim 13:
The rejection of claim 11 is incorporated, and Tangirala further teaches an integration module configured to interact with third-party testing tools for test execution (par. 0043-0044).
Per Claim 14:
The rejection of claim 11 is incorporated, and Tangirala further teaches wherein the user review and update step allows the user to modify the test script logic and test case parameters (par. 0025-0026).
Per Claim 15:
The rejection of claim 11 is incorporated, and Tangirala further teaches wherein the presenting step comprises highlighting failed test cases within the test results (par. 0045).
Per Claim 16:
The rejection of claim 11 is incorporated, and Tangirala further teaches wherein the structured representation is generated in a machine-readable format (par. 0042).
Per Claim 17:
The rejection of claim 11 is incorporated, and Tangirala further teaches wherein the processing step further comprises identifying dependencies between the test cases within the user description, and the utilizing step generates code that accounts for the identified dependencies when executing the test script (par. 0028-0029).
Per Claim 18:
The rejection of claim 11 is incorporated, and Tangirala further teaches wherein the one or more processors cause the system to perform further operations comprising: identifying one or more inconsistencies within the user description, and prompting the user to address the inconsistencies before proceeding with code generation (par. 0018 and 0025).
Per Claim 19:
The rejection of claim 11 is incorporated, and Tangirala further teaches wherein the presenting step provides the user with an interface to export the test results in a format selected from the group consisting of a report document, a graphical chart, and a data file suitable for further analysis (par. 0045).
Per Claim 20:
Claim 20 is the medium version of the method of claim 1 discussed above, and all of its limitations have been addressed in the portions of the reference cited above. Accordingly, this claim is likewise anticipated by Tangirala.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over Tangirala (US 2021/0232495) in view of Almog (US 2013/0191689).
Per Claim 2:
The rejection of claim 1 is incorporated, and further, Tangirala does not explicitly teach wherein the code generation step comprises leveraging a library of existing solutions for commonly used testing functionalities. However, Almog teaches wherein the code generation step comprises leveraging a library of existing solutions for commonly used testing functionalities (par. 0012).
It would have been obvious to one having ordinary skill in the computer art before the effective filing date of the claimed invention to modify the method disclosed by Tangirala to include wherein the code generation step comprises leveraging a library of existing solutions for commonly used testing functionalities using the teaching of Almog. The modification would be obvious because one of ordinary skill in the art would be motivated to generate test templates from existing libraries (Almog, par. 0004).
Per Claim 3:
The rejection of claim 2 is incorporated, and Almog further teaches wherein the library of existing solutions comprises solutions pre-configured for specific testing functionalities (par. 0012).
Per Claim 4:
The rejection of claim 2 is incorporated, and Tangirala further teaches a selection interface configured to allow the user to select relevant solutions (par. 0025-0026). Tangirala does not explicitly teach selecting from the library during code generation.
However, Almog teaches selecting from the library during code generation (par. 0012).
It would have been obvious to one having ordinary skill in the computer art before the effective filing date of the claimed invention to modify the method disclosed by Tangirala to include selecting from the library during code generation using the teaching of Almog. The modification would be obvious because one of ordinary skill in the art would be motivated to generate test templates from existing libraries (Almog, par. 0004).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Pereira (US 11,474,934) teaches a method for creating and testing applications.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to QAMRUN NAHAR whose telephone number is (571)272-3730. The examiner can normally be reached Monday-Friday, 8am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lewis Bullock, can be reached at (571)272-3759. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/QAMRUN NAHAR/Primary Examiner, Art Unit 2199