DETAILED ACTION
Claims 1-23 are pending. Claims 1, 22 and 23 have been amended. Claims 24-26 have been cancelled.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This final office action is in response to the applicant’s response, received on 12/26/2025, to the non-final office action mailed on 10/01/2025.
Examiner’s Notes
Examiner has cited particular columns and line numbers, paragraph numbers, or figures in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. The applicant is respectfully requested, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
Response to Arguments
Applicant’s arguments filed 12/26/2025 regarding the rejection made under 35 U.S.C. § 101 have been considered, and the rejection is withdrawn in view of applicant’s amendments.
Applicant’s arguments filed 12/26/2025 regarding the rejection made under 35 U.S.C. § 103 have been considered but are not persuasive.
Applicant argues that Chen does not teach a confidence score that measures a confidence of an association of the test case specification with the selected test script; see applicant’s remarks, p. 10. Examiner respectfully disagrees, as Chen’s comparison is interpreted as an association made between the test case/script (i.e., the selected test script) and the new test requirement (i.e., the test case specification), which is used to generate the confidence level for the test case/script using the RNN model.
Applicant’s arguments filed 12/26/2025 regarding the rejection made under 35 U.S.C. § 103 have been considered but are moot in view of the new ground(s) of rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1 and 17-22 are rejected under 35 U.S.C. 103 as being unpatentable over Fong (US-PGPUB-NO: 2018/0349256 A1), in view of Chenguttuvan et al. (US-PGPUB-NO: 2021/0303442 A1), hereinafter Chen, and further in view of Thomson (US-PGPUB-NO: 2018/0373620 A1).
As per claim 1, Fong teaches a method of testing a communication computerized system, comprising: obtaining a natural language requirement specification that describes a testing requirement associated with the network node computerized system (see Fong paragraph [0069], “These state variables, parameter variables, user/account credentials may vary depending on the particular identified test (and/or dependencies identified within a test case), and depending on the context of the natural language description”); selecting one or more test case specifications from a library of test case specifications based on the natural language requirement specification (see Fong paragraph [0071], “A natural language classifier is a machine learning model that has the ability to parse “natural language” and classify that language to a high level topic. Various training models may be utilized in instantiating and refining the classifier. In the context of test case authoring, a properly trained natural language classifier would be able to take a natural language description of an action a tester would like to perform while testing an application and pick the appropriate script that would perform that action within an automation framework. Training data may be used to prepare a reinforcement learning model, where test descriptions are pruned and using a corpus of legacy data that may already exist”); extracting textual features from the one or more test case specifications (see Fong paragraph [0072], “Using existing legacy data for the application, the system is configured to process the natural language descriptions of a test step manually written by a quality assurance resource and the generalized “action” that the description maps to. A convolutional or a neural network can be configured to parse the descriptions and make an association between the “meaning” of the description and the test action selected. The method flow of FIG. 
1 illustrates a portion of a solution, according to some embodiments”); generating a feature vector comprising the extracted textual features from the one or more test case specifications (see Fong paragraph [0074], “The system can include a token extraction engine configured to receive the one or more natural language strings and parse the one or more natural language strings to extract one or more word vectors representing extracted features of the one or more natural language strings”); mapping the feature vector to a plurality of available test scripts; selecting one or more of the plurality of available test scripts for execution in response to the mapping of the feature vector (see Fong paragraph [0095], “The system operates in conjunction with test automation agents which are instanced environments where the inputs take place and the state is observed for reward measurement. In some embodiments, the specialized reward function for test generation requires per application tuning in order to optimize the effectiveness. The system generates the one or more test automation scripts based on at least on the mapping of the vector space, the test automation script configured to, when executed, cause a processor to perform the pre-defined action in accordance with the one or more parameter values”).
Fong teaches an accuracy score (see Fong paragraph [0100]), but does not explicitly teach, for one or more of the selected test scripts, generating a confidence score that measures a confidence of an association of the test case specification with the selected test script. However, Chen teaches, for one or more of the selected test scripts, generating a confidence score that measures a confidence of an association of the test case specification with the selected test script (see Chen paragraph [0028], “The test cases/script assessment module 204 may compare existing test cases (stored in the test cases and requirement repository 214) with the new testing requirement, based on a RNN model. Based on the comparison, the test cases/script assessment module 204 may identify the confidence level for the existing test cases. One or more RNN model parameters may be used to correlate and identify the confidence level. For example, the RNN module parameters may be retrieved from the model repository 212. The RNN module parameters may be used for identifying suitable test cases with corresponding confidence level”).
Fong and Chen are analogous art because they are in the same field of endeavor of software development. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fong’s teaching of generating test scenarios based on parsed natural language inputs with Chen’s teaching of selecting test cases from existing test cases for a new software testing requirement, to incorporate the use of a confidence score in order to create a short-list subset of test cases based on a predicted defect slippage rate and an associated threshold.
Fong modified with Chen does not explicitly teach generating the selected one or more of the plurality of available test scripts into a plurality of executable test scripts, wherein at least two of the plurality of executable test scripts are in different programming language formats. However, Thomson teaches generating the selected one or more of the plurality of available test scripts into a plurality of executable test scripts, wherein at least two of the plurality of executable test scripts are in different programming language formats (see Thomson paragraph [0151], “At 904 of the computer-implemented method 900, one or more scripts can be generated (e.g., via the template tool component 104). The one or more scripts can be utilized to facilitate execution of tasks within different run-time environments that utilize different languages and syntax. According to an implementation, the one or more scripts can comprise an abstraction (e.g., a generic version) of the test data. In some implementations, the one or more scripts can be customizable for a defined test environment”).
Fong, Chen and Thomson are analogous art because they are in the same field of endeavor of software development. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fong’s teaching of generating test scenarios based on parsed natural language inputs and Chen’s teaching of selecting test cases from existing test cases for a new software testing requirement with Thomson’s teaching of templates that generate tool-specific test files from generic test data and, more specifically, of generating scripts that facilitate execution of tasks within different run-time environments that utilize different languages and/or different syntax, to incorporate the use of different languages and syntax when generating scripts to facilitate execution of the tasks being tested.
As per claim 17, Fong modified with Chen and Thomson teaches further comprising: generating a prediction score associated with each of the plurality of available test scripts, wherein selecting one or more of the plurality of available test scripts for execution in response to the mapping of the feature vector is performed based on the prediction score (see Fong paragraph [0169], “Functional actions 314 are identified by the system which are then implemented in test cases for test automation. These functional actions, for example, may include the mapping to specific functionality of an application under test, including, but not limited to, specific application function calls, input parameters, timing of function calls, the order of function calls, etc. Continuous refinement of the reward pathway is used to modify outcomes or parameter value predictions”).
As per claim 18, Fong modified with Chen and Thomson teaches wherein selection of the one or more of the plurality of available test scripts for execution in response to the mapping of the feature vector is performed based on whether an associated prediction score is greater than a threshold value (see Chen paragraph [0053], “Returning back to FIG. 3, at step 310, a defect slippage rate may be predicted based on the linear regression model. The defect slippage prediction module 206 may receive test scripts with confidence level (probability level) greater than the predetermined threshold confidence score”).
As per claim 19, Fong modified with Chen and Thomson teaches wherein the confidence score for each of the selected test scripts is generated based on the prediction score associated with each of the selected test scripts (see Chen paragraph [0056], “Referring now to FIG. 6, a flowchart of a process 600 for selecting test cases from existing test cases for a new software testing requirement is illustrated in greater detail, in accordance with an embodiment of the present disclosure. In some embodiments, an attempt may be made to identify a relevant test case from the existing test cases for the new software testing requirement by comparing the new software testing requirement with the existing test cases using a Jaccard index. If the attempt is unsuccessful, then at step 602, a confidence score associated with each of the existing test cases may be determined”).
As per claim 20, Fong modified with Chen and Thomson teaches further comprising: outputting an identification of the selected test scripts and associated confidence scores via a user interface; and executing the selected test scripts (see Chen paragraph [0031], “The test execution module 208 may execute the selected set of test scripts received from defect slippage prediction module 206 for the product under testing. Based on the execution, a test execution status may be generated. The test execution status may be either “successful” or “unsuccessful” based on the comparison of output of the test execution with a predetermined output. In some embodiments, the text execution status may be provided/outputted/displayed on the user interface, like a display screen, etc”).
As per claim 21, Fong modified with Chen and Thomson teaches further comprising: receiving a user input in response to outputting the identification of the selected test scripts (see Chen paragraph [0021], “The test case selecting device 102 may further include one or more input/output devices 114 through which the test case selecting device 102 may interact with a user and vice versa. By way of an example, the input/output device 114 may be used to display a status assigned to test case from a sub-set of test cases, as will be discussed later”); and executing the selected test scripts in response to the user inputs (see Chen paragraph [0031], “The test execution module 208 may execute the selected set of test scripts received from defect slippage prediction module 206 for the product under testing. Based on the execution, a test execution status may be generated. The test execution status may be either “successful” or “unsuccessful” based on the comparison of output of the test execution with a predetermined output. In some embodiments, the text execution status may be provided/outputted/displayed on the user interface, like a display screen, etc”).
As per claim 22, this is the system claim corresponding to method claim 1, comprising a processor and a memory coupled to the processor circuit, wherein the memory comprises computer program instructions that, when executed by the processor circuit, cause the system to perform the operations (see Fong paragraph [0193], “FIG. 15 is a schematic diagram of computing device 1500, exemplary of an embodiment. As depicted, computing device includes at least one processor 1502, memory 1504, at least one I/O interface 1506, and at least one network interface 1508”). Therefore, it is rejected for the same reasons as above.
Claim(s) 2-5, 8 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Fong (US-PGPUB-NO: 2018/0349256 A1), Chen (US-PGPUB-NO: 2021/0303442 A1) and Thomson (US-PGPUB-NO: 2018/0373620 A1), in further view of Andrejko et al. (US-PGPUB-NO: 2018/0121332 A1) hereinafter Andrejko.
As per claim 2, Fong modified with Chen and Thomson teaches wherein selecting the one or more test case specifications comprises: analyzing the natural language requirement specification using an automatic language processing technique (see Fong paragraph [0067], “In particular, as described in various embodiments, a computer-implemented natural language classifier solution is proposed that utilizes neural networks to implement a machine learning model whereby a natural language description (e.g., a natural language input string) of an action a tester is processed such that the appropriate script (e.g., a more granular application-based test action) that would perform that action within an automation framework can be selected (or in some embodiments, configured) while testing an application. The neural network, in some embodiments, is also configured to select parameters for conducting tests, and the selection of parameters is also evaluated and refined over time to improve accuracy”).
Fong modified with Chen and Thomson does not explicitly teach, for a plurality of test case specifications in the library of test case specifications, generating a relevancy score that represents a relevance of the natural language requirement specification to the test case specification; and selecting the test case specification based on the associated relevancy score. However, Andrejko teaches, for a plurality of test case specifications in the library of test case specifications, generating a relevancy score that represents a relevance of the natural language requirement specification to the test case specification (see Andrejko paragraph [0071], “The identification of the test cases for re-execution by the test case dependency identification logic 156 may be performed based on a ranking of the matching test cases according to their dependency relationships, as determined by the test case ranking logic”); and selecting the test case specification based on the associated relevancy score (see Andrejko paragraph [0071], “The resulting ranked listing of affected test cases (matching test cases) be compared by the test case ranking logic 158 to one or more thresholds to select a subset of the ranked listing for use in re-executing test cases to test the SUD once the proposed requirements change is implemented, e.g., on a percentile scale, all of the affected test cases that have a score equal to or greater than 80%”).
Fong, Chen, Thomson and Andrejko are analogous art because they are in the same field of endeavor of software development. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fong’s teaching of generating test scenarios based on parsed natural language inputs, Chen’s teaching of selecting test cases from existing test cases for a new software testing requirement and Thomson’s teaching of templates that generate tool-specific test files from generic test data and, more specifically, of generating scripts that facilitate execution of tasks within different run-time environments that utilize different languages and/or different syntax, with Andrejko’s teaching of performing proactive cognitive analysis for inferring test case dependencies, to incorporate the use of a ranked (i.e., relevance) score in order to provide a subset of test cases, see Andrejko paragraph [0004], “a subset of test cases in the test case corpus affected by the proposed requirements change based on the identified test case relationships corresponding to the proposed requirements change. Furthermore, the method comprises generating, by the data processing system, an output specifying the identified subset of test cases.”
As per claim 3, Fong modified with Chen, Thomson and Andrejko teaches wherein selecting the one or more test case specifications based on the associated relevancy score comprises selecting the one or more test case specifications based on the associated relevancy score in response to the relevancy score being higher than a predetermined threshold (see Andrejko paragraph [0071], “The thresholds may be set so as to identify the test cases of most importance to the particular implementation for verifying proper operation of the SUD and thus, the values of the thresholds may vary from one implementation to another”).
As per claim 4, Fong modified with Chen, Thomson and Andrejko teaches wherein selecting the one or more test case specifications based on the associated relevancy score comprises selecting a test case specification from the plurality of test case specifications that has a highest relevancy score (see Andrejko paragraph [0092], “In such a case, exact matches will be given the highest scores, while synonyms may be given lower scores based on a relative ranking of the synonyms as may be specified by a subject matter expert (person with knowledge of the particular domain and terminology used) or automatically determined from frequency of use of the synonym in the corpus corresponding to the domain”).
As per claim 5, Fong modified with Chen, Thomson and Andrejko teaches wherein analyzing the natural language requirement specification using automatic language processing comprises identifying key terms in the natural language requirement specification (see Andrejko paragraph [0088], “In addition, the extracted major features include key words and phrases classified into question characteristics, such as the focus of the question, the lexical answer type (LAT) of the question, and the like. As referred to herein, a lexical answer type (LAT) is a word in, or a word inferred from, the input question that indicates the type of the answer, independent of assigning semantics to that word. For example, in the question “What maneuver was invented in the 1500s to speed up the game and involves two pieces of the same color?,” the LAT is the string “maneuver”).
As per claim 8, Fong modified with Chen, Thomson and Andrejko teaches wherein the relevancy scores are generated using a text semantic similarity metric relative to the natural language requirement specification and the plurality of test case specifications in the library of test case specifications (see Fong paragraph [0105], “The 3 layer CNN 150 is configured to classify semantic similarity in the description and expected results description, and can, in some embodiments, be implemented in Pytorch (A Python deep learning framework)”).
As per claim 23, this is the system claim corresponding to method claim 2. Therefore, it is rejected for the same reasons as above.
Claim(s) 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Fong (US-PGPUB-NO: 2018/0349256 A1), Chen (US-PGPUB-NO: 2021/0303442 A1), Thomson (US-PGPUB-NO: 2018/0373620 A1) and Andrejko (US-PGPUB-NO: 2018/0121332 A1), in further view of Balasubramanian et al. (US-PGPUB-NO: 2019/0377736 A1) hereinafter Bala.
As per claim 6, Fong modified with Chen, Thomson and Andrejko do not explicitly teach further comprising: obtaining user feedback associated with a training test case specification in the library of test case specifications; wherein the relevancy score associated with the training test case specification is based on the user feedback. However, Bala teaches further comprising: obtaining user feedback associated with a training test case specification in the library of test case specifications (see Bala paragraph [0051], “In another embodiment, the user or systems administrator may manually enter additional custom search terms to act as additional tokens for searching for test cases. The user-custom search tokens may be those tokens disregarded by the natural language processing module during the noise-reduction step (30, in FIG. 4). These user-added tokens may be converted to persistent terms and stored within the master library”); wherein the relevancy score associated with the training test case specification is based on the user feedback (see Bala paragraph [0052], “At 46, any associated quantification factors or historical metrics associated with matched persistent terms, as more fully described in FIG. 3, are retrieved. Any selected non-persistent tokens (now converted to persistent tokens) and any persistent tokens are then referenced to the test case repository 14 for identification of relevant test cases. The manually entered or custom tokens are also utilized to match with test cases through correlation of tokenized descriptions of test cases. Matched test cases are then retrieved from the test case repository 14 and associated with the search token, resulting in a cluster at 80 (see resulting clusters at 81 and 83). Each cluster also includes quantification factors, such as match percentage, utilization rates as well as any other weightage criteria from the master library for the relevant token. 
Once the token cluster is formed, the test cases are grouped together to remove redundant test cases from the cluster. As part of the removal process, the test cases are tagged with any relevant tokens. At 82, the grouped test cases may be outputted to a session-based test case repository 84”).
Fong, Chen, Thomson, Andrejko and Bala are analogous art because they are in the same field of endeavor of software development. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fong’s teaching of generating test scenarios based on parsed natural language inputs, Chen’s teaching of selecting test cases from existing test cases for a new software testing requirement, Thomson’s teaching of templates that generate tool-specific test files from generic test data and, more specifically, of generating scripts that facilitate execution of tasks within different run-time environments that utilize different languages and/or different syntax, and Andrejko’s teaching of performing proactive cognitive analysis for inferring test case dependencies, with Bala’s teaching of identifying an appropriate test case from a test case repository comprising a plurality of stored test cases, to incorporate the use of user feedback in order to better customize the test case selection, see Bala paragraph [0009], “An advantage of the embodiments described in this document is the identification of relevant test cases from normalized keywords that allows easy and efficient method of verifying the implementation of requirements in a software development effort. The foregoing and other aspects, features, details, utilities, and advantages of the present disclosure will be apparent from reading the following description and claims, and from reviewing the accompanying drawings.”
As per claim 7, Fong modified with Chen, Thomson, Andrejko and Bala teaches wherein the user feedback comprises an indication of whether the training test case specification was relevant, neutral, or irrelevant (see Bala paragraph [0052], “The manually entered or custom tokens are also utilized to match with test cases through correlation of tokenized descriptions of test cases. Matched test cases are then retrieved from the test case repository 14 and associated with the search token, resulting in a cluster at 80 (see resulting clusters at 81 and 83). Each cluster also includes quantification factors, such as match percentage, utilization rates as well as any other weightage criteria from the master library for the relevant token”).
Claim(s) 9-14 are rejected under 35 U.S.C. 103 as being unpatentable over Fong (US-PGPUB-NO: 2018/0349256 A1), Chen (US-PGPUB-NO: 2021/0303442 A1) and Thomson (US-PGPUB-NO: 2018/0373620 A1), in further view of Balasubramanian et al. (US-PGPUB-NO: 2019/0377736 A1) hereinafter Bala.
As per claim 9, Fong modified with Chen and Thomson does not explicitly teach wherein the test case specifications and/or the requirement specifications are written in different human languages. However, Bala teaches wherein the test case specifications and/or the requirement specifications are written in different human languages (see Bala paragraph [0005], “However, software development requirements can manifest in a variety of different forms. Requirements may be submitted by a variety of personnel, who may have different language patterns and varying amounts of knowledge of technical terms and nomenclature”).
Fong, Chen, Thomson and Bala are analogous art because they are in the same field of endeavor of software development. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fong’s teaching of generating test scenarios based on parsed natural language inputs, Chen’s teaching of selecting test cases from existing test cases for a new software testing requirement and Thomson’s teaching of templates that generate tool-specific test files from generic test data and, more specifically, of generating scripts that facilitate execution of tasks within different run-time environments that utilize different languages and/or different syntax, with Bala’s teaching of identifying an appropriate test case from a test case repository comprising a plurality of stored test cases, to incorporate the handling of requirements submitted with different language patterns in order to better customize the test case selection, see Bala paragraph [0009], “An advantage of the embodiments described in this document is the identification of relevant test cases from normalized keywords that allows easy and efficient method of verifying the implementation of requirements in a software development effort. The foregoing and other aspects, features, details, utilities, and advantages of the present disclosure will be apparent from reading the following description and claims, and from reviewing the accompanying drawings.”
As per claim 10, Fong modified with Chen, Thomson and Bala teaches wherein extracting the textual features from the test case specification comprises: splitting the test case specification into a set of specification words (see Bala paragraph [0031], “FIG. 3 demonstrates a more detailed embodiment of the smart test case mapper tool. The received inputs 18 (e.g., software requirements, change requests, defects) are received as text information or natural language. This text information may be initially parsed by a number of pre-processing techniques as shown in 20”); performing part of speech tagging on the words in the set of specification words (see Bala paragraph [0044], “As demonstrated above at 32, a token, in a non-limiting embodiment a monogram, is successfully matched with a persistent token within the master library 46. The matched persistent token may be a bigram, trigram or quadgram (or an optimal “N”-gram), due to prior supervised and unsupervised analytics efforts having utilized these more lengthy tokens and stored the related usage data in the master library 46. In an embodiment as shown at 50, the monogram may converted to the longer N-gram based upon the matching, and may be known as an optimal N-gram and stored on the optimal N-gram list”); and removing stop words from the set of specification words (see Bala paragraph [0045], “The resulting list of test cases will be more efficacious as they would not include all matches to the monograms [store] and [manager], but rather only the matches to the optimal bigram [store manager]. In another non-limiting example, instead of seeking to match the monograms [quantity], [each], [sku], where [for] is discarded as part of stop word removal, the optimal quadgram [quantity for each SKU] will identified and the optimal quadgram will be utilized to retrieve relevant test cases”).
As per claim 11, Fong modified with Chen, Thomson and Bala teaches wherein generating the feature vector comprises: selecting all verbs from the set of specification words; and selecting all nouns and adjectives from the set of specification words that satisfy a selection criterion (see Bala paragraph [0047], “FIG. 5 shows two non-exclusive embodiments for the identification of test cases in greater detail. These embodiments may be performed by an analytics module of the smart test case mapping system. In one embodiment (the “Unsupervised Embodiment”), in box 60 at 34, the non-persistent tokens (session-based) derived from the pre-processed input text are retrieved. The knowledge base may be referenced for the existence of any domain synonyms correlating to the non-persistent tokens, such as custom or domain synonyms entered or identified by a user during a supervised analytics session. Domain synonyms may be predetermined terms or identifiers that are stored in the master library. Domain synonyms may be a user-built library. The user may form a list of words or tokens relevant to a domain or context. They may also be taxonomic terms that have the same application as the tokens and are pre-identified as being associated with words or terms appearing in the tokens. These domain synonyms, custom terms or taxonomic terms may themselves be tokenized and stored in the master library for continuous reference and use. Hereinafter, any identified domain synonyms associated with tokens may be utilized concurrently with those respective tokens, even if not explicitly stated”).
As per claim 12, Fong modified with Chen, Thomson and Bala teaches wherein the selection criterion comprises a frequency of appearance within the set of specification words (see Bala paragraph [0054], “In one embodiment, the quantification logic normalizes the score of each token by applying the clustered metrics (e.g., at 92), such as, in one embodiment, quantification factors, usage frequency, acceptance frequency, number of tokens matched to a test case, or other form of weighting. The quantification logic scores the token by applying the normalized token score”).
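The selection criterion of claims 11-12 can be sketched as follows: all verbs are kept, while nouns and adjectives are kept only if their frequency of appearance within the set of specification words meets a threshold. The threshold value and the tagged input are assumptions for illustration, not drawn from the cited references.

```python
# Illustrative sketch of claims 11-12: build a feature vector by selecting
# all verbs, plus nouns/adjectives satisfying a frequency criterion.
from collections import Counter

def build_feature_vector(tagged_words, min_freq=2):
    counts = Counter(w for w, _ in tagged_words)
    features, seen = [], set()
    for word, tag in tagged_words:
        if word in seen:
            continue  # keep each selected word once
        if tag == "VERB" or (tag in ("NOUN", "ADJ") and counts[word] >= min_freq):
            features.append(word)
            seen.add(word)
    return features

tagged = [("click", "VERB"), ("button", "NOUN"),
          ("button", "NOUN"), ("red", "ADJ")]
vec = build_feature_vector(tagged)
```

Here "click" is kept as a verb, "button" satisfies the frequency criterion, and "red" is dropped for appearing only once.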
As per claim 13, Fong modified with Chen, Thomson and Bala teaches further comprising: generating a plurality of augmented feature vectors from the feature vector, wherein the plurality of augmented feature vectors are generated as subsets of the feature vector (see Fong paragraph [0072], “Using existing legacy data for the application, the system is configured to process the natural language descriptions of a test step manually written by a quality assurance resource and the generalized “action” that the description maps to. A convolutional or a neural network can be configured to parse the descriptions and make an association between the “meaning” of the description and the test action selected. The method flow of FIG. 1 illustrates a portion of a solution, according to some embodiments”); and training a classifier that is used to map the feature vector to the plurality of available test scripts using the plurality of augmented feature vectors (see Fong paragraph [0078], “The classifier will be able to build word vectors out of the training data 102 that represent the sentiment and context of the original description and classify similar descriptions to the same action. The pre-trained neural network is a classifier that includes a series of computing nodes, each node representing a feature of the data set, and one or more weighted interconnections between various nodes. The nodes are stored, for example as objects in a data storage, and the weights associated with the interconnections may, for example, be stored as data records associated with the nodal objects in a data storage”).
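The augmentation step of claim 13 can be sketched by generating subsets of the feature vector; the particular subset scheme below (dropping one feature at a time) is an assumption for illustration only.

```python
# Illustrative sketch of claim 13: generate a plurality of augmented
# feature vectors as subsets of the original feature vector.
def augment(feature_vector):
    # Each augmented vector omits exactly one feature (assumed scheme).
    return [feature_vector[:i] + feature_vector[i + 1:]
            for i in range(len(feature_vector))]

augmented = augment(["click", "login", "button"])
```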
As per claim 14, Fong modified with Chen, Thomson and Bala teaches wherein mapping the feature vector to the plurality of available test scripts comprises: generating a label vector for each of the plurality of available test scripts (see Fong paragraph [0068], “Accordingly, with such a trained system, a business analyst or a project manager may be able to provide high level step descriptions and expected results, and the system may be automatically driven to map these to granular, application specific test actions, along with necessary parameters (e.g., user/account credentials, form field values). The system may be configured to identify and apply state variables, parameter variables, user/account credentials, as necessary based on its identification of the particular action and its required inputs for various test scenarios”); and classifying the label vector using a text classifier (see Fong paragraph [0073], “The training data input 102 for the classifier can include natural language descriptions and their associated test actions”).
Claim(s) 15 is rejected under 35 U.S.C. 103 as being unpatentable over Fong (US-PGPUB-NO: 2018/0349256 A1), Chen (US-PGPUB-NO: 2021/0303442 A1), Thomson (US-PGPUB-NO: 2018/0373620 A1) and Bala (US-PGPUB-NO: 2019/0377736 A1), in further view of Andrejko (US-PGPUB-NO: 2018/0121332 A1).
As per claim 15, Fong modified with Chen, Thomson and Bala do not explicitly teach wherein generating the label vector for each of the plurality of available test scripts comprises generating, for each test script in the plurality of available test scripts, a vector of terms in the test script. However, Andrejko teaches wherein generating the label vector for each of the plurality of available test scripts comprises generating, for each test script in the plurality of available test scripts, a vector of terms in the test script (see Andrejko paragraph [0106], “FIG. 4B illustrates an example of a test case script upon which NLP analysis is performed to extract test case element relationships from the inputs/outputs of the test case. As shown in FIG. 4B, the depicted test case script is for a TLS intermediate risk alert generation which may be a portion of the OEA lung cancer TLS prediction test case of FIG. 4A, for example. FIG. 4B illustrates the correlation between phrases and terms specified in the natural language descriptions, expected results, and steps of the test case itself with corresponding test case attributes”).
Fong, Chen, Thomson, Bala and Andrejko are analogous art because they are in the same field of endeavor of software development. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fong’s teaching of generating test scenarios based on parsed natural language inputs, Chen’s teaching of selecting test cases from existing test cases for a new software testing requirement, Thomson’s teaching of templates that generate tool-specific test files from generic test data and, more specifically, of generating scripts that facilitate execution of tasks within different run-time environments that utilize different languages and/or different syntax, and Bala’s teaching of identifying an appropriate test case from a test case repository comprising a plurality of stored test cases with Andrejko’s teaching of performing proactive cognitive analysis for inferring test case dependencies, to incorporate the use of a ranked (i.e., relevance) score in order to provide a subset of test cases, see Andrejko paragraph [0004], “a subset of test cases in the test case corpus affected by the proposed requirements change based on the identified test case relationships corresponding to the proposed requirements change. Furthermore, the method comprises generating, by the data processing system, an output specifying the identified subset of test cases.”
Claim(s) 16 is rejected under 35 U.S.C. 103 as being unpatentable over Fong (US-PGPUB-NO: 2018/0349256 A1), Chen (US-PGPUB-NO: 2021/0303442 A1), Thomson (US-PGPUB-NO: 2018/0373620 A1) and Bala (US-PGPUB-NO: 2019/0377736 A1), in further view of Kulkarni et al. (US-PGPUB-NO: 2016/0140425 A1).
As per claim 16, Fong modified with Chen, Thomson and Bala do not explicitly teach wherein classifying the label vector is performed using a one-vs-all classification strategy. However, Kulkarni teaches wherein classifying the label vector is performed using a one-vs-all classification strategy (see Kulkarni paragraph [0047], “The present principles solve the problem of learning K one-vs-all classifiers that, applied to a given image, indicate whether they visually represent the corresponding class”).
Fong, Chen, Thomson, Bala and Kulkarni are analogous art because they are in the same field of endeavor of software development. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fong’s teaching of generating test scenarios based on parsed natural language inputs, Chen’s teaching of selecting test cases from existing test cases for a new software testing requirement, Thomson’s teaching of templates that generate tool-specific test files from generic test data and, more specifically, of generating scripts that facilitate execution of tasks within different run-time environments that utilize different languages and/or different syntax, and Bala’s teaching of identifying an appropriate test case from a test case repository comprising a plurality of stored test cases with Kulkarni’s teaching of image classification with joint feature adaptation and classifier learning, to incorporate the one-vs-all classification strategy in order to better classify the label vectors as taught in Fong.
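The one-vs-all strategy at issue in claim 16 can be sketched as one binary scorer per test-script class, with the label vector assigned to the highest-scoring class. The keyword-counting scorers and class names below are assumptions for illustration only; real one-vs-all systems (such as Kulkarni's) train a discriminative binary classifier per class.

```python
# Illustrative sketch of a one-vs-all classification strategy: score the
# label vector against each class independently and pick the best class.
def one_vs_all(label_vector, per_class_keywords):
    scores = {
        cls: sum(1 for term in label_vector if term in keywords)
        for cls, keywords in per_class_keywords.items()
    }
    # The class whose binary scorer responds most strongly wins.
    return max(scores, key=scores.get)

# Hypothetical test-script classes and their keyword sets (assumptions).
classes = {
    "login_script": {"login", "password", "user"},
    "checkout_script": {"cart", "payment", "order"},
}
best = one_vs_all(["user", "enters", "password"], classes)
```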
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
McGloin et al. (US-PGPUB-NO: 2020/0117573 A1) teaches linking source code with compliance requirements.
Bhat et al. (US-PGPUB-NO: 2020/0097388 A1) teaches learning based metrics prediction for software development.
Fei et al. (US-PGPUB-NO: 2020/0019492 A1) teaches generating executable test automation code automatically according to a test case.
Jayaraman et al. (US-PGPUB-NO: 2018/0095859 A1) teaches a software testing system and a method for facilitating structured regression planning and optimization.
Champlin et al. (US-PGPUB-NO: 2016/0210225 A1) teaches automatically generating test cases.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LENIN PAULINO whose telephone number is (571)270-1734. The examiner can normally be reached Week 1: Mon-Thu 7:30am - 5:00pm Week 2: Mon-Thu 7:30am - 5:00pm and Fri 7:30am - 4:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets can be reached at (571) 272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LENIN PAULINO/Examiner, Art Unit 2197
/BRADLEY A TEETS/Supervisory Patent Examiner, Art Unit 2197