Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1, 4-8, 11-15, and 18-20 are pending in this Office action.
Claims 2-3, 9-10, and 16-17 are cancelled.
Response to Arguments
Applicant's arguments filed 10/01/2025 have been fully considered but they are not persuasive.
The rejection of claims 8-14 under 35 U.S.C. as being directed to a signal per se is withdrawn in view of the amendment.
The rejection of claims 1-20 under 35 U.S.C. as being directed to an abstract idea is withdrawn in view of the amendment.
Applicant’s argument:
Thus, Raman does not describe human-based interaction with a probabilistic model enabled via a natural language interface, as captured via the claim amendments presented herein. More specifically, Raman fails to teach “receiving, via a natural language interface, a first user input including a first natural language description defining an expected functionality of a software
….
Like Raman, Tahvili does not describe human-based interaction with a probabilistic model enabled via a natural language interface, as captured via the claim amendments presented herein. Thus, Tahvili does not remedy the deficiencies in Raman.
Examiner response:
The crux of the argument is that the systems of Raman and Tahvili do not describe a human-interactive model, but rather use an automated accelerator mode.
By way of example, a test scenario or list of scenarios may be included in what is called a feature file, where a formatted language, such as Gherkin, is used to write the scenarios in a human-readable way.
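For illustration only (this snippet is hypothetical and not drawn from any cited reference), a Gherkin feature file of this kind could look as follows; note that the "Click on the Submit button" style of step mirrors the natural language example Raman itself recites at Col 11:

```gherkin
# Hypothetical feature file, shown only to illustrate the human-readable Gherkin format
Feature: User login
  Scenario: Successful login with valid credentials
    Given the user is on the login page
    When the user enters a valid username and password
    And the user clicks on the Submit button
    Then the user is redirected to the home page
```

Each Given/When/Then step is natural language that a parser maps to test objects and actions, which is the parsing step described below.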
While those natural language descriptions are received, they are processed in a parsing step to determine the expected output and the feature to test. Once determined, they become test scenarios that are subject to selection through a user interface:
(Col 3, lines 7-12: “receiving a test scenario and a context file selected through a user interface, the context file including an object map comprising objects that correlate to respective components of a display page for the code base”);
Once selected, the scenario is provided to the object correlator and a test script is generated:
(Col 11, lines 51-53: “The script generator module 340 generates the test scripts bases on the input from object correlator module 330”);
Finally, the script is executed to evaluate the feature and intended use, resulting in either pass or fail:
(Col 17, lines 1-10: “These reports provide details on the performance of the system during the execution of the automated testing scripts and may include processing time, response time, and any warning messages displayed as well as the information generated by the various engines 131-137 of the touchless testing platform module 120”);
In relation to confirming the selection and approval, Bollepally discloses:
[0042]“In one embodiment, in response to the command (and following successful checks) the parser operates to retrieve (or locate) a .pcrl file specified in the command, and place it in a memory location (or record a current location of the .pcrl file in storage) for further processing by the parser”;
A successful check of the test implementation, such as format and syntax:
[0056] “In other words, the parser accepts input test cases expressed in format 225 and does not or might not accept test case not expressed in format 225”;
Once checked/confirmed, the test is executed and the result is compared to the expected result to determine pass or fail (this is the classification):
[0077]“If all parameters match, then the test case is considered to be processed successfully. If not all parameters match, the expected response to the test case is not received, and the test case is considered to be processed unsuccessfully. “;
N.B.: the storage medium as disclosed in claims 8 and 15 is a statutory medium that excludes non-statutory media, as disclosed in [0082]:
“As defined herein, computer storage media does not include communication media. That is, computer-readable storage media does not include communications media consisting solely of a modulated data signal, a carrier wave, or a propagated signal, per se.”;
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-8, 11-15, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Raman et al. (US 10,073,763 B1) in view of Tahvili et al. (US 2023/0308381 A1) and Bollepally et al. (US 2023/0205678 A1).
As per claim 1, Raman discloses a method comprising:
Receiving, via a natural language interface, a first natural language description defining an expected functionality of a software feature and a second natural language description defining a process for using the software feature:
Col 5 lines 1-9 “ Beginning with requirements documentation, the described system generates test scenarios, which are used to create test cases once the application or system has been built. Requirements documentation (e.g., business requirements, functional requirements, use cases, user stories, and so forth) captures, for example, information related to the business process(es) that will be supported by the software, intended actions that will be performed through the software, managed data, rule sets, nonfunctional attributes (e.g., response time, accessibility, access and privilege), and so forth.”;
generating, based on the first natural language description and the second natural language description, a first probabilistic model query including a first contextual input defining an expected test case output format:
Col 10, lines 3-12: “Test scenario element classifier module 270 classifies the extracted terms 262 into process terms 272, operations terms 274, and data set terms 276 by employing techniques, such as, topic segmentation and so forth. Process terms 272 include terminology pertaining to, for example, a business process identified in the requirements documentation. Operations terms 274 include, for example, business operations applicable to the identified business processes. Data set terms 276 include information regarding requirements to perform the identified operations, such as what data a particular form requires or what type of data is needed for a parsing script to execute. Test scenario element classifier module 270 may employ a corpus of existing known terms to assist in identifying terms and classifying them accordingly”;
providing the first probabilistic model query to a probabilistic model; receiving, from the probabilistic model, a first model output comprising a plurality of test cases for the software feature in accordance with the first contextual input defining the expected test case output format, an individual test case defining a process for evaluating an aspect of the expected functionality of the software feature:
Col 10, lines 47-55: “Test scenario map builder 290 uses the generated semantics graphs and process maps to generate test scenarios maps 292 for the respective requirements documentation. The semantics graphs and process maps include, based on the requirements documentation, processes and functionality that may be tested for an application, valid and invalid operations for each functionality, expected outputs, and the relationships, such as a hierarchically relationship, between the various processes”;
causing the plurality of test cases to be displayed via the natural language interface; receiving, via the natural language interface, a second user input selecting a test case from the plurality of test cases displayed via the natural language interface;
Col 3, lines 7-12: “receiving a test scenario and a context file selected through a user interface, the context file including an object map comprising objects that correlate to respective components of a display page for the code base, the test scenario describing one of the test cases involving an intended interaction with at least one of the components on the display page”;
Col 13, lines 3-7: “In the depicted example, the NLP engine 310 receives a test scenario(s) from the data and artifact repository 130, and optionally, a context file from the control center 110. The received test scenarios may be generated by the test scenario and process map extractor 121 engine as described above.”; see also Col 8, lines 44-49.
generating, based on the second user input selecting the test case, a second probabilistic model query and a second contextual input defining an expected software implementation output format:
col 11 lines 32-42 “The automation accelerator engine 122 extracts the intended interaction (intent) and relevant testing data from each test scenario through the employment of, for example, natural language processing (NLP) techniques. The intent is correlated to an appropriate test object(s) in the provided context file. For example, if the test scenario recites “Click on the Submit button,” the automation accelerator engine 122 parses the natural language and derives the context as “submit button,” which it then maps to the submit button object from the object map of the submitted context file. A template for the selected automation tool is applied to the extracted intent and data along with the correlated object(s) to generate the resulting automated testing script”
providing the second probabilistic model query, to the probabilistic model, receiving from the probabilistic model a second model output comprising a software implementation of the test case:
Col 11, lines 51-53: “The script generator module 340 generates the test scripts bases on the input from object correlator module 330.”;
Col 14, lines 30-33: “To generate the file, the script generator module 340 may employ an AI model trained through a series of machine-learning techniques applied to an algorithm using these elementary and business level steps.”;
Col 14, lines 40-47: “Based on the AI model, the script generator module 340, determines the action(s) to perform the determined intent to the correlated objects in the respective page of the UI being tested. The script generator module 340 generates the automated script by applying the selected template to the determined actions for the intent and correlated objects, the data read from the provided test scenario, and the step definitions from the test scenario (e.g., the feature file)”;
executing the software implementation of the test case to perform the process for evaluating the aspect of the expected functionality of the software feature:
Col 17, lines 1-10: “The control center 120 may provide this information to the users 112 through a reporting engine which provides analytics and access to the reporting features. The execution engine 160 may persist results from the execution of the generated automated testing scripts in a reporting database (not shown). The reporting engine may generate reports from the information stored in the reporting database, which can be reviewed by users 112. These reports provide details on the performance of the system during the execution of the automated testing scripts and may include processing time, response time, and any warning messages displayed as well as the information generated by the various engines 131-137 of the touchless testing platform module 120.”;
But not explicitly:
Receiving a first user input defining an expected functionality of a software feature and a second description defining a process for using the software feature.
causing the software implementation of the test case to be displayed via the natural language interface with a request to confirm the software implementation
receiving, via the natural language interface, a third user input confirming the software implementation;
executing, based on the third user input confirming the software implementation, the software implementation of the test case to perform the process for evaluating the aspect of the expected functionality of the software feature;
extracting log data defining an outcome of the execution of the software implementation;
comparing the log data against a success condition defining an expected outcome of the process for evaluating the aspect of the expected functionality of the software feature;
and classifying the execution of the software implementation as passed or not passed based on the comparison of the log data against the success condition;
Tahvili discloses:
Receiving a first user input defining an expected functionality of a software feature and a second description defining a process for using the software feature:
[0060] “The method 600 includes obtaining, at block 602, a test case specification that describes a test scenario associated with the network node. The test case specification may be written in a natural language, such as English. At block 604, the method extracts textual features from the test case specification.”;
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Tahvili into the teachings of Raman to provide accurate selection of relevant test scripts given a non-formal test case specification written in natural language. By automatically analyzing test case specifications using NLP, some embodiments may reduce/eliminate some of the manual work associated with software testing. Because the test case specifications may be written in natural language, formal test specifications may not be required. [Tahvili 0022].
But not explicitly:
causing the software implementation of the test case to be displayed via the natural language interface with a request to confirm the software implementation
receiving, via the natural language interface, a third user input confirming the software implementation;
executing, based on the third user input confirming the software implementation, the software implementation of the test case to perform the process for evaluating the aspect of the expected functionality of the software feature;
extracting log data defining an outcome of the execution of the software implementation;
comparing the log data against a success condition defining an expected outcome of the process for evaluating the aspect of the expected functionality of the software feature;
and classifying the execution of the software implementation as passed or not passed based on the comparison of the log data against the success condition;
Bollepally discloses:
causing the software implementation of the test case to be displayed via the natural language interface with a request to confirm the software implementation:
[0095]“The user interface then prompts the user to indicate if an additional test case is to be added to the .pcrl file, and accepts the user's response. If the user's response is yes, the user interface will return to showing the identified REST APIs/endpoints and methods for selection for the next test case, and repeats the above process from that point”;
[0056] “At decision block 232, the parser determines whether the current test case is in the expected format 225 in .pcrl file 215. In one embodiment, the parser expects test cases in .pcrl file 215 to be provided in the pre-defined format 225, consistent with that shown in Table 2. In other words, the parser accepts input test cases expressed in format 225 and does not or might not accept test case not expressed in format 225”;
receiving, via the natural language interface, a third user input confirming the software implementation;
[0042]“In one embodiment, in response to the command (and following successful checks) the parser operates to retrieve (or locate) a .pcrl file specified in the command, and place it in a memory location (or record a current location of the .pcrl file in storage) for further processing by the parser”;
executing, based on the third user input confirming the software implementation, the software implementation of the test case to perform the process for evaluating the aspect of the expected functionality of the software feature:
[0047] “During execution, the parser processes the test cases in a .pcrl file one-by-one internally and automatically. In one embodiment, the parser parses the test case, generates and executes the cURL requests in JSON format, validates the response, and generates detailed logs for each test case. After validation where all the tests in the file are successfully executed and validated, the parser generates a .suc file (meaning that the test file ran successfully), otherwise the parser generates a .dif file (meaning that there is some issue with one or more test execution). In one embodiment, the parser invokes various modules to perform the processing of test cases.”;
extracting log data defining an outcome of the execution of the software implementation:
[0081]”In one embodiment, the parser also generates detailed logs with the information of test cases run which can be referred to by an end user or other system components for debugging or other purposes. At process block 260, the parser generates a detailed log for the test case request.”;
comparing the log data against a success condition defining an expected outcome of the process for evaluating the aspect of the expected functionality of the software feature:
[0111]”The systems, methods, and other embodiments shown and described herein for a REST API parser for test automation can do autonomous test verification where expected and actual outcomes can be compared without user's intervention. The user need not even maintain any references of expected behavior to compare against actual behavior”.
and classifying the execution of the software implementation as passed or not passed based on the comparison of the log data against the success condition:
[0077]“If all parameters match, then the test case is considered to be processed successfully. If not all parameters match, the expected response to the test case is not received, and the test case is considered to be processed unsuccessfully. “;
[0101] “In one embodiment, the processor compares expected results (from the validate_params field) with observed results of the REST API request to determine whether all observed parameter values match their corresponding expected parameter values. This may be performed, for example, as shown and described with reference to process block 255. Where a match is found (Block 330: MATCH), processing at decision block 330 completes and processing continues to process block 335. Where no match is found (Block 330: NO MATCH), processing at decision block 330 completes and processing continues to process block 345.”;
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Bollepally into the teachings of Raman and Tahvili to reduce the amount of processor operations needed to interpret or compile test code, enabling improved computing performance on the same hardware when running REST API tests. Further, the simplicity and uniformity of the test case format eliminates the need for coding expertise on the part of the user. [Bollepally 0109].
As per claim 4, the rejection of claim 1 is incorporated and furthermore, Raman explicitly discloses:
detecting a preexisting software implementation of the test case in a software test bank:
Col 13 lines 3-10 “In the depicted example, the NLP engine 310 receives a test scenario(s) from the data and artifact repository 130, and optionally, a context file from the control center 110. The received test scenarios may be generated by the test scenario and process map extractor 121 engine as described above. The test scenarios may also include existing test cases, feature files, API definition files, such as Web Services Description Language (WSDL), Web Application Description Language (WADL), Swagger, and so forth. The NLP engine 310 receives the input and parses the test scenarios”.
But not explicitly:
extracting the preexisting software implementation from the software test bank; and including the preexisting software implementation in the second model output:
Tahvili discloses:
extracting the preexisting software implementation from the software test bank:
[0036] “Often, when testing communication systems/devices, an existing library of test scripts exists for testing prior versions of the systems/devices. That is, a test specification may describe test steps or test activities that can be mapped to existing test scripts from previous or similar products.”;
and including the preexisting software implementation in the second model output:
[0039] “Integration testing of communication systems/devices may be particularly suitable for a system/method that selects existing test scripts for execution from a library of test scripts based on a test case specification, because there may be a need to test an interface every time a software version of any of the interconnected components changes.”;
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Tahvili into the teachings of Raman and Bollepally to provide accurate selection of relevant test scripts given a non-formal test case specification written in natural language. By automatically analyzing test case specifications using NLP, some embodiments may reduce/eliminate some of the manual work associated with software testing. Because the test case specifications may be written in natural language, formal test specifications may not be required. [Tahvili 0022].
As per claim 5, the rejection of claim 1 is incorporated and furthermore, Raman does not explicitly disclose:
processing a software syntax of the software implementation of the test case; determining that the software implementation is free of syntactical errors; and in response to determining that the software implementation is free of syntactical errors, enabling the execution of the software implementation.
Bollepally discloses:
processing a software syntax of the software implementation of the test case:
[0056] “At decision block 232, the parser determines whether the current test case is in the expected format 225 in .pcrl file 215”;
determining that the software implementation is free of syntactical errors:
[0058] “In one embodiment where the test case is not in the specified format 225, the parser records test case number in .pcrl file 215 at which unexpected or un-processable formatting occurs, and records the test case number for inclusion in an error message describing the error.”;
[0059] “Where the test case in .pcrl file is in the expected format 225 (block 232: YES), then processing at decision block 232 completes, and processing continues at processing block 245. Where the .pcrl file is not in the specified format 225 (block 232: NO), then the parser aborts execution of the test case and processing continues at decision block 230”;
and in response to determining that the software implementation is free of syntactical errors, enabling the execution of the software implementation:
[0059] “Where the test case in .pcrl file is in the expected format 225 (block 232: YES), then processing at decision block 232 completes, and processing continues at processing block 245. Where the .pcrl file is not in the specified format 225 (block 232: NO), then the parser aborts execution of the test case and processing continues at decision block 230.[0060] At process block 245, the parser executes a REST API request for a test case”.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Bollepally into the teachings of Raman and Tahvili to reduce the amount of processor operations needed to interpret or compile test code, enabling improved computing performance on the same hardware when running REST API tests. Further, the simplicity and uniformity of the test case format eliminates the need for coding expertise on the part of the user. [Bollepally 0109].
As per claim 6, the rejection of claim 1 is incorporated and furthermore, Raman explicitly discloses:
wherein the second contextual input defining the expected software implementation output format comprises generic software functions for performing an associated test case:
Col 3 lines 8-12 “the context file including an object map comprising objects that correlate to respective components of a display page for the code base, the test scenario describing one of the test cases involving an intended interaction with at least one of the components on the display page”;
As per claim 7, the rejection of claim 6 is incorporated and furthermore, Raman explicitly discloses:
wherein the generic software functions comprise automated commands for manipulating a graphical user interface.
Col 14, lines 41-45: “Based on the AI model, the script generator module 340, determines the action(s) to perform the determined intent to the correlated objects in the respective page of the UI being tested”;
Claims 8 and 11-14 are the claims corresponding to method claims 1 and 4-7 and are rejected under the same rationale set forth in connection with the rejection of claims 1 and 4-7 above.
Claims 15 and 18-20 are the computer-readable storage medium claims corresponding to method claims 1 and 4-6 and are rejected under the same rationale set forth in connection with the rejection of claims 1 and 4-6 above.
Pertinent art:
US20180349256A1:
Computer implemented methods and systems are provided for generating one or more test cases based on received one or more natural language strings.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRAHIM BOURZIK whose telephone number is (571)270-7155. The examiner can normally be reached Monday-Friday (8-4:30).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wei Y Mui can be reached at 571-270-2738. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRAHIM BOURZIK/ Examiner, Art Unit 2191