Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are presented for examination.
Claim Objections
Claim 5 is objected to because of the following informalities: in the phrase "wherein the at least one processor is configure to," the word "configure" should read "configured," rendering the phrase grammatically incorrect as drafted. Examiner believes this is a typographical error. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claims 1, 8, and 15, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that could reasonably be performed in the mind, including with the aid of pen and paper, but for the recitation of generic computer components. That is, the limitations "analyze the natural language text describing the software requirements to identify relationships between a plurality of different software requirements of the software requirements at least in part by: analyzing how data flows between the plurality of different software requirements; analyzing how the plurality of different software requirements influence path and decision points to achieve functionality identified by the software requirements; and identifying dependencies between the plurality of different software requirements; establish a sequence for the plurality of different software requirements based on the relationships identified between the plurality of different software requirements of the software requirements; group the plurality of different software requirements together into at least one logical group of software requirements based on the sequence for the plurality of different software requirements of the software requirements and the relationships between the plurality of different software requirements; generate test cases based on the at least one logical group of software requirements" and "analyze results of execution of the software program using the test cases to identify any defects in the software program," as drafted, recite the abstract idea of mental processes. These limitations encompass a human mind carrying out these functions through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper. Thus, these limitations recite and fall within the "Mental Processes" grouping of abstract ideas.
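By way of illustration only, the recited analyze/establish/group steps reduce to elementary dependency-graph operations of the kind a person could perform with pen and paper. The sketch below is the examiner's own hypothetical (requirement names and data labels are invented; it is not drawn from the claims or the cited art):

```python
from collections import defaultdict

# Hypothetical requirements: each lists the data it consumes and produces.
requirements = {
    "R1": {"inputs": [],          "outputs": ["account"]},
    "R2": {"inputs": ["account"], "outputs": ["session"]},
    "R3": {"inputs": ["session"], "outputs": ["report"]},
}

def find_dependencies(reqs):
    """A requirement depends on every requirement whose output it consumes."""
    producers = defaultdict(list)
    for name, req in reqs.items():
        for out in req["outputs"]:
            producers[out].append(name)
    return {name: {p for inp in req["inputs"] for p in producers[inp]}
            for name, req in reqs.items()}

def establish_sequence(deps):
    """Topological order: each requirement follows everything it depends on."""
    order, done = [], set()
    def visit(name):
        if name not in done:
            done.add(name)
            for dep in deps[name]:
                visit(dep)
            order.append(name)
    for name in deps:
        visit(name)
    return order

def group_requirements(deps, sequence):
    """Requirements connected by any dependency form one logical group,
    with members kept in the established sequence order."""
    parent = {name: name for name in deps}
    def find(name):
        while parent[name] != name:
            name = parent[name]
        return name
    for name, dep_set in deps.items():
        for dep in dep_set:
            parent[find(name)] = find(dep)
    groups = defaultdict(list)
    for name in sequence:
        groups[find(name)].append(name)
    return list(groups.values())

deps = find_dependencies(requirements)
sequence = establish_sequence(deps)          # ["R1", "R2", "R3"]
groups = group_requirements(deps, sequence)  # [["R1", "R2", "R3"]]
```

Each step is a bookkeeping exercise over pairs of names, underscoring that the limitations, absent the generic computer components, describe mental processes.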
This judicial exception is not integrated into a practical application. The claims recite the following additional elements: "a system," "at least one processor," "at least one memory communicatively coupled to the at least one processor," "a non-transitory computer-readable medium," "execute the software program using the test cases," and "receive natural language text describing software requirements for a software program." The additional elements "a system," "at least one processor," "at least one memory communicatively coupled to the at least one processor," "a non-transitory computer-readable medium," and "execute the software program using the test cases" are merely instructions to implement an abstract idea on a computer, or merely use a generic computer or computer components as a tool to perform the abstract idea. See MPEP 2106.05(f). The additional element "receive natural language text describing software requirements for a software program" does nothing more than add insignificant extra-solution activity to the judicial exception, such as data gathering and outputting the results of the abstract idea. See MPEP 2106.05(g). Accordingly, the additional elements recited in the claims do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements "a system," "at least one processor," "at least one memory communicatively coupled to the at least one processor," "a non-transitory computer-readable medium," and "execute the software program using the test cases" are generic computer components and instructions used as tools to perform the abstract idea. See MPEP 2106.05(f). As to the additional element "receive natural language text describing software requirements for a software program," the courts have identified gathering data and displaying the output of the abstract idea as well-understood, routine, conventional activity. See MPEP 2106.05(d). Accordingly, the additional elements recited in the claims cannot provide an inventive concept. Thus, the claims are not patent eligible.
Claims 2, 9, and 16, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that could reasonably be performed in the mind, including with the aid of pen and paper, but for the recitation of generic computer components. That is, the limitation "analyze the natural language text describing the software requirements to identify the relationships between the plurality of different software requirements of the software requirements at least in part by: generating structured text requirements from unstructured text requirements before analyzing how the data flows between the plurality of different software requirement," as drafted, recites the abstract idea of mental processes. This limitation encompasses a human mind carrying out these functions through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper. Thus, this limitation recites and falls within the "Mental Processes" grouping of abstract ideas.
Claims 3, 10, and 17 recite the additional element "a large language model with the unstructured text requirements as input," which is merely an instruction to implement an abstract idea on a computer, or merely uses a generic computer or computer components as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, the additional element recited in the claims does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. Further, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element is a generic computer component and instruction used as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, the additional element recited in the claims cannot provide an inventive concept. Thus, the claims are not patent eligible.
Claims 4 and 11 recite the additional element "wherein the large language model was trained with a set of training unstructured text requirements and corresponding training structured text requirements," which is merely an instruction to implement an abstract idea on a computer, or merely uses a generic computer or computer components as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, the additional element recited in the claims does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. Further, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element is a generic computer component and instruction used as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, the additional element recited in the claims cannot provide an inventive concept. Thus, the claims are not patent eligible.
Claims 5, 12, and 18, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that could reasonably be performed in the mind, including with the aid of pen and paper, but for the recitation of generic computer components. That is, the limitation "analyze the natural language text describing the software requirements to identify the relationships between the plurality of different software requirements of the software requirements at least in part by: generating structured text requirements from unstructured text requirements before analyzing how the data flows between the plurality of different software requirements; analyzing how the data flows between the plurality of different software requirements by comparing inputs and outputs of the structured text requirements; and identifying the dependencies between the plurality of different software requirements by comparing the inputs and the outputs of the structured text requirement," as drafted, recites the abstract idea of mental processes. This limitation encompasses a human mind carrying out these functions through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper. Thus, this limitation recites and falls within the "Mental Processes" grouping of abstract ideas.
Claims 6, 13, and 19, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that could reasonably be performed in the mind, including with the aid of pen and paper, but for the recitation of generic computer components. That is, the limitation "generate the test cases based on the at least one logical group of software requirements at least in part using a large language model with the at least one logical group of software requirements as an input," as drafted, recites the abstract idea of mental processes. This limitation encompasses a human mind carrying out these functions through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper. Thus, this limitation recites and falls within the "Mental Processes" grouping of abstract ideas.
Claims 7, 14, and 20, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that could reasonably be performed in the mind, including with the aid of pen and paper, but for the recitation of generic computer components. That is, the limitation "use feedback from the results of the execution of the software program using the test cases to better: (1) analyze the natural language text describing the software requirements to identify the relationships between the plurality of different software requirements of the software requirements; (2) establish the sequence for the plurality of different software requirements based on the relationships identified between the plurality of different software requirements of the software requirements; and (3) group the plurality of different software requirements together into the at least one logical group of software requirements based on the sequence for the plurality of different software requirements of the software requirements and the relationships between the plurality of different software requirements," as drafted, recites the abstract idea of mental processes. This limitation encompasses a human mind carrying out these functions through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper. Thus, this limitation recites and falls within the "Mental Processes" grouping of abstract ideas.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 5, 8, 12, 15 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kumar (US 20220091968) in view of Kossatchev (US 6698012 B1).
Regarding Claim 1, Kumar (US 20220091968) teaches
A system, comprising: at least one processor; at least one memory communicatively coupled to the at least one processor; and wherein the at least one processor is configured to:
Receive natural language text describing software requirements for a software program;
([0045]: "The feature specification 200 may include a functional specification of the software application to be tested. The feature specification 200 may include the product requirements or the functional requirements of the software application. The feature specification 200 may be written in a natural language (e.g., English). The feature specification 200 may include one or more test steps that may be performed to validate the product requirements or the functional requirements of the software application."; [0051]: "The disclosed system can analyze the product specifications using machine learning models and image recognition algorithms. Typically, a specification may include a list of feature specifications and user interface specifications for one or more feature specifications. FIG. 1 shows an example of feature specification that may be used to generate test cases by the disclosed system. The feature specification can be text that includes the intended functionality, capabilities, and how the end-user is expected to interact with the product and its features.") Examiner Comments: Kumar explicitly describes receiving feature specifications in natural language text that detail the software requirements for the program, enabling subsequent analysis for test generation.
Analyze the natural language text describing the software requirements to identify
relationships between a plurality of different software requirements of the software requirements at least in part by: analyzing how data flows between the plurality of different software requirements; analyzing how the plurality of different software requirements influence path and decision points to achieve functionality identified by the software requirements; and identifying dependencies between the plurality of different software requirements; ([0057]: “Text Processing ML Model 500 is a pre-trained model that parses the text of the functional specifications from the product specification documents 100 as well as any optionally provided test steps 200 and extracts all the relevant contextual information from them. This includes extracting entities, acceptable and unacceptable use cases, various user interactions, outcomes from these interactions, and screen navigation information. Text Processing ML Model 500 relies on techniques like NLP (Natural Language Processing), NER (Named Entity Recognition), and NLU (Natural Language Understanding) for extracting this information and translating it into a form that can be processed by the Test Case ML Model 600 . FIG. 3 c shows an example of data that may be generated by Text Processing ML Model 500 for the feature specification shown in FIG. 1.") Examiner Comments: Kumar's NLP-based extraction identifies relationships, data flows (e.g., entities and interactions), influence on paths and decisions (e.g., use cases and outcomes), and dependencies (e.g., screen navigation) between different requirements in the natural language text.
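To make the mapping concrete, the kind of contextual extraction Kumar's Text Processing ML Model performs (entities, interactions, screen navigation) can be sketched with a minimal pattern-based extractor. This is the examiner's illustrative stand-in only; the example sentence and patterns are invented, and Kumar's actual pipeline uses trained NLP/NER/NLU models:

```python
import re

def extract_context(spec_text):
    """Toy stand-in for NLP/NER/NLU extraction: pulls quoted UI entities
    and screen-navigation hops from a feature-specification sentence."""
    entities = re.findall(r'"([^"]+)"', spec_text)
    navigation = re.findall(r'from the (\w+) screen to the (\w+) screen',
                            spec_text, flags=re.IGNORECASE)
    return {"entities": entities, "navigation": navigation}

info = extract_context(
    'Clicking "Login" takes the user from the Home screen to the Account screen.'
)
# info["entities"] == ["Login"]; info["navigation"] == [("Home", "Account")]
```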
Generate test cases based on the at least one logical group of software requirements;
([0058]: "The Test Case ML Model 600 is a pre-trained model that accepts the generated data from Image Processing ML Model 400 and Text Processing ML Model 500 to build all the various possible test cases and the automated test steps needed to execute each test case. Each step in a test case is an action that manipulates the GUI objects while validating and asserting the values and behaviors seen on the GUI.") Examiner Comments: Kumar generates test cases directly from the grouped requirements using the processed data from NL analysis.
Execute the software program using the test cases; and ([0065]: "In this, the Automation Test Executor 900 uses the generated Automated Tests with Test Data 800 to run the tests and validate the software GUI Application 1000. The automated testing process involves navigating through the software GUI Application 1000 being tested using the GUI by
clicking on buttons, checking checkboxes, typing text into input boxes, etc., and validating and asserting the values and behaviors seen on the GUI. To do this the Automation Test Executor 900 must identify the GUI controls on the screen in order to manipulate them.") Examiner Comments: Kumar executes the software using the generated test cases to produce results.
Analyze results of execution of the software program using the test cases to identify
any defects in the software program. ([0066]: "Most GUI automation tools identify these GUI controls (buttons, checkboxes, input box, etc.) by certain properties of the object. These properties used for identifying the elements could be their class, their text, the order in which they appear, and their locations relative to parent objects. When these properties change, the
automation will break and give false-negative results.") Examiner Comments: Kumar analyzes execution results to detect defects in the software, such as false-negatives from property changes.
Kumar does not specifically teach
establish a sequence for the plurality of different software requirements based on the relationships identified between the plurality of different software requirements of the software requirements;
group the plurality of different software requirements together into at least one logical
group of software requirements based on the sequence for the plurality of different software requirements of the software requirements and the relationships between the plurality of different software requirements.
However, Kossatchev (US 6698012 B1) teaches
establish a sequence for the plurality of different software requirements based on the relationships identified between the plurality of different software requirements of the software requirements; (column 10, lines 17-37: "The KIND_5 script driver realizes a general algorithm for traversing an abstract Finite State Machine (FSM). This algorithm passes all states and all possible transitions between the states. Each transition corresponds to an execution of a procedure under test. The algorithm of a script driver is related to the specification and does not depend on the implementation details outside the specification. The verification system generator 100 avoids use of direct descriptions because direct specification of the FSM requires extra efforts to generate. Instead of a direct specification of FSM, the verification system generator 100 uses indirect, virtual representation of FSM. Such representation includes a function-observer and a function-iterator. The function-observer calculates on the fly the current state in the abstract FSM. The function-iterator selects a next procedure from the target procedure group, and generates a tuple of the input parameter values for this procedure.") Examiner Comments: Kossatchev establishes a sequence for procedures (requirements) using function-iterators that select next steps based on identified relationships and dependencies in the specifications.
group the plurality of different software requirements together into at least one logical
group of software requirements based on the sequence for the plurality of different software requirements of the software requirements and the relationships between the plurality of different software requirements; (column 16, lines 40-57: "The procedures are separated into two modes (200). Behaviour of one or more procedures is specified in a consecutive mode (202). A group of procedures may be defined for test suite. It means that when the procedures of the group deal with common data, for example, global variable, these procedures are tested together as a group. The behaviour in consecutive mode is defined by implicit specification, i.e., by the logical expression that is equal to “true” for any correct results of the procedure call. In existence of a parallel call of any procedure is considered. For example, consider for writing the specification of the procedure Wait_message ( ) in case of empty mailbox. The correct results could be the result “No messages” and the result “message”, because during the procedure Wait_message ( ) call, another parallel process could call the procedure Send_message ( ) and write the message into the mailbox.") Examiner Comments: Kossatchev groups requirements (procedures) into logical groups for parallel testing based on sequences and relationships like semantic dependencies.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Kumar's teaching with Kossatchev's grouping and sequencing of procedures based on dependencies for testing, as this would allow for efficient handling of parallel and sequential aspects in software requirements, ensuring comprehensive coverage (Kossatchev, column 16, lines 40-57, quoted above).
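To illustrate the traversal mechanism Kossatchev describes, a script driver that passes all states and all possible transitions of an abstract FSM can be modeled as a simple graph walk in which each transition corresponds to one call of a procedure under test. The mailbox FSM and procedure names below are the examiner's invention for illustration and are not taken from the reference:

```python
# Hypothetical abstract FSM: state -> {procedure under test: next state}.
fsm = {
    "empty":       {"send_message": "has_message"},
    "has_message": {"wait_message": "empty",
                    "send_message": "has_message"},
}

def traverse_all_transitions(fsm, start):
    """List every (state, procedure, next_state) transition reachable from
    `start`, in the order a script driver would schedule procedure calls."""
    calls, visited = [], set()
    frontier = [start]
    while frontier:
        state = frontier.pop()
        if state in visited:
            continue
        visited.add(state)
        for procedure, nxt in fsm[state].items():
            calls.append((state, procedure, nxt))
            frontier.append(nxt)
    return calls

calls = traverse_all_transitions(fsm, "empty")
# All three transitions of the hypothetical FSM are covered exactly once.
```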
Regarding Claim 5, Kumar and Kossatchev teach
The system of claim 1. Kumar teaches wherein the at least one processor is configured to analyze the natural language text describing the software requirements to identify the relationships between the plurality of different software requirements of the software requirements at least in part by: generating structured text requirements from unstructured text requirements before analyzing how the data flows between the plurality of different software requirements; analyzing how the data flows between the plurality of different software requirements by comparing inputs and outputs of the structured text requirements; and identifying the dependencies between the plurality of different software requirements by comparing the inputs and the outputs of the structured text requirements. ([0057]: "Text Processing ML Model 500 is a pre-trained model that parses the text of the functional specifications from the product specification documents 100 as well as any optionally provided test steps 200 and extracts all the relevant contextual information from them. This includes extracting entities, acceptable and unacceptable use cases, various user interactions, outcomes from these interactions, and screen navigation information. Text Processing ML Model 500 relies on techniques like NLP (Natural Language Processing), NER (Named Entity Recognition), and NLU (Natural Language Understanding) for extracting this information and translating it into a form that can be processed by the Test Case ML Model 600. FIG. 3c shows an example of data that may be generated by Text Processing ML Model 500 for the feature specification shown in FIG. 1.") Examiner Comments: Kumar generates structured entities from unstructured text and analyzes data flows and dependencies by comparing inputs and expected outcomes in the structured data.
Regarding Claim 8, it is a method claim corresponding to the system claim above (Claim 1) and, therefore, is rejected for the same reasons set forth in the rejection of claim 1.
Regarding Claim 12, it is a method claim corresponding to the system claim above (Claim 5) and, therefore, is rejected for the same reasons set forth in the rejection of claim 5.
Regarding Claim 15, it is a computer-readable medium claim corresponding to the system claim above (Claim 1) and, therefore, is rejected for the same reasons set forth in the rejection of claim 1.
Regarding Claim 18, it is a computer-readable medium claim corresponding to the system claim above (Claim 5) and, therefore, is rejected for the same reasons set forth in the rejection of claim 5.
Claim(s) 2-4, 6, 9-11, 13, 16-17 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kumar (US 20220091968) in view of Kossatchev (US 6698012 B1) further in view of Bahrami (US 20230096325 A1).
Regarding Claim 2, Kumar and Kossatchev teach
The system of claim 1.
Kumar further teaches wherein the at least one processor is configured to analyze the natural language text describing the software requirements to identify the relationships between the plurality of different software requirements of the software requirements at least in part by: generating structured text requirements from unstructured text requirements before analyzing how the data flows between the plurality of different software requirements. ([0057]: "Text Processing ML Model 500 is a pre-trained model that parses the text of the functional specifications from the product specification documents 100 as well as any optionally provided test steps 200 and extracts all the relevant contextual information from them. This includes extracting entities, acceptable and unacceptable use cases, various user interactions, outcomes from these interactions, and screen navigation information. Text Processing ML Model 500 relies on techniques like NLP (Natural Language Processing), NER (Named Entity Recognition), and NLU (Natural Language Understanding) for extracting this information and translating it into a form that can be processed by the Test Case ML Model 600 . FIG. 3 c shows an example of data that may be generated by Text Processing ML Model 500 for the feature specification shown in FIG. 1.") Examiner Comments: Kumar generates structured entities and relationships from unstructured NL text before analyzing data flows and dependencies.
Bahrami (US 20230096325 A1) further teaches wherein the at least one processor is
configured to analyze the natural language text describing the software requirements to identify the relationships between the plurality of different software requirements of the software requirements at least in part by: generating structured text requirements from unstructured text requirements before analyzing how the data flows between the plurality of different software requirements. ([0039]: "The system 102 may be further configured to determine a set of functions (also referred as a set of procedures) from the computer- executable code 112 A. The set of functions may be determined based on the generated AST. Each function of the set of functions may be a self-contained module of the computer- executable code 112 A that may accomplish a specific task (e.g., addition of two numbers or concatenation of two strings). The system 102 may be further configured to extract metadata 112 B that may be associated with the set of functions from the computer- executable code 112 A. In an embodiment, the metadata 112 B may be extracted from docstrings and comments that may be associated with the set of functions.") Examiner Comments: Bahrami generates structured metadata (parameters, descriptions) from unstructured comments/docstrings in code, analogous to structuring requirements before analysis.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kumar and Kossatchev's combined teaching with Bahrami's technique of training language models on natural-language/code pairs to extract and generate structured output from natural language, as this would improve the accuracy of processing unstructured NL requirements into structured elements for test generation by leveraging fine-tuned language models (Bahrami [0023]: "The disclosure uses metadata associated with the code snippets in the training dataset for training the language model. The disclosure provides an enriched dataset that can be used to train the language model with function parameters, return value(s), and the corresponding natural language text associated with the code snippet.").
Regarding Claim 3, Kumar, Kossatchev and Bahrami teach
The system of claim 2.
Kumar and Kossatchev do not specifically teach
wherein generating the structured text requirements from the unstructured text requirements occurs at least in part using a large language model with the unstructured text requirements as input.
However, Bahrami teaches
wherein generating the structured text requirements from the unstructured text requirements occurs at least in part using a large language model with the unstructured text requirements as input. ([0115]: "In the prediction phase, the language model 804 may be considered to be a trained model. The system 802 may be configured to receive the input 808 . The input 808 may be received from the user 116 via the user device 108 and may include a natural language query. As an example, the natural language query may include a text “Get a video from URL”. Based on the received input 808 , the system 802 may be configured to apply the trained language model 804 on the received input 808 . The system 802 may be further configured to control the language model 804 to generate the output 810 based the application of the language model 804 on the received input 808 . The generated output may include the lines of computer-executable code associated with the natural language query, as shown in FIG. 8 , for example.") Examiner Comments: Bahrami uses a trained language model (transformer-based) with unstructured NL input to generate structured output, applicable to structuring requirements.
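For context on the role the language model plays here — mapping unstructured requirement text to structured fields — the step can be sketched as follows. The rule-based function below is a trivial stand-in at the point where a real system would invoke the trained model; the phrasing pattern and field names are the examiner's invention for illustration and do not come from Bahrami:

```python
import re

def structure_requirement(text):
    """Trivial stand-in for a trained language model: converts a free-form
    requirement sentence into structured trigger/action fields. A real
    system would call the model here instead of matching a pattern."""
    match = re.search(
        r'when (?P<trigger>.+?), the system shall (?P<action>.+)\.',
        text, flags=re.IGNORECASE)
    if match is None:
        return {"raw": text}  # leave unparseable text unstructured
    return {"trigger": match.group("trigger").strip(),
            "action": match.group("action").strip()}

structured = structure_requirement(
    "When the user submits the login form, the system shall create a session."
)
# structured == {"trigger": "the user submits the login form",
#                "action": "create a session"}
```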
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kumar and Kossatchev's combined teaching with Bahrami's technique of training language models on natural-language/code pairs to extract and generate structured output from natural language, as this would improve the accuracy of processing unstructured NL requirements into structured elements for test generation by leveraging fine-tuned language models (Bahrami [0023]: "The disclosure uses metadata associated with the code snippets in the training dataset for training the language model. The disclosure provides an enriched dataset that can be used to train the language model with function parameters, return value(s), and the corresponding natural language text associated with the code snippet.").
Regarding Claim 4, Kumar, Kossatchev and Bahrami teach
The system of claim 3.
Kumar and Kossatchev did not specifically teach
wherein the large language model was trained with a set of training unstructured text requirements and corresponding training structured text requirements.
However, Bahrami teaches
wherein the large language model was trained with a set of training unstructured text requirements and corresponding training structured text requirements. ([0023]: "The disclosure uses the code snippets that includes detailed information related to function metadata and existing natural language description (such as comments or docstrings) for training the language model. Specifically, the disclosure uses metadata associated with the code snippets in the training dataset for training the language model. The disclosure provides an enriched dataset that can be used to train the language model with function parameters, return value(s), and the corresponding natural language text associated with the code snippet. The disclosure also provides a method to filter out code snippets for which the natural language text is absent. Also, the disclosure provides a method to prune the code snippet for generation of the training dataset.") Examiner Comments: Bahrami trains the language model with pairs of unstructured NL descriptions (docstrings) and corresponding structured code/metadata, analogous to training on unstructured/structured requirements.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Kumar's and Kossatchev's teachings with Bahrami's by modifying Kumar's system for generating test cases from natural language requirements to incorporate Bahrami's technique of training language models on NL-code pairs to extract and generate from NL, as this would improve the accuracy of processing unstructured NL requirements into structured elements for test generation by leveraging fine-tuned language models (Bahrami [0023]: "The disclosure uses metadata associated with the code snippets in the training dataset for training the language model. The disclosure provides an enriched dataset that can be used to train the language model with function parameters, return value(s), and the corresponding natural language text associated with the code snippet.").
Regarding Claim 6, Kumar and Kossatchev teach
The system of claim 1.
Kumar and Kossatchev did not specifically teach
wherein the at least one processor is configured to generate the test cases based on the at least one logical group of software requirements at least in part using a large language model with the at least one logical group of software requirements as an input.
However, Bahrami teaches
wherein the at least one processor is configured to generate the test cases based on the at least one logical group of software requirements at least in part using a large language model with the at least one logical group of software requirements as an input. ([0020]: "Language models are being used in a variety of sequence-to-sequence generation tasks such as a code synthesis task, a code retrieval task, or a software package analysis task. The code synthesis task corresponds to a task of generation of a source-code based on a natural language query.") Examiner Comments: Bahrami uses a language model with NL input (analogous to grouped requirements) to generate structured output, applicable to test case generation.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Kumar's and Kossatchev's teachings with Bahrami's by modifying Kumar's system for generating test cases from natural language requirements to incorporate Bahrami's technique of training language models on NL-code pairs to extract and generate from NL, as this would improve the accuracy of processing unstructured NL requirements into structured elements for test generation by leveraging fine-tuned language models (Bahrami [0023]: "The disclosure uses metadata associated with the code snippets in the training dataset for training the language model. The disclosure provides an enriched dataset that can be used to train the language model with function parameters, return value(s), and the corresponding natural language text associated with the code snippet.").
Regarding Claim 9, it is a method claim corresponding to system claim 2 above and, therefore, is rejected for the same reasons set forth in the rejection of claim 2.
Regarding Claim 10, it is a method claim corresponding to system claim 3 above and, therefore, is rejected for the same reasons set forth in the rejection of claim 3.
Regarding Claim 11, it is a method claim corresponding to system claim 4 above and, therefore, is rejected for the same reasons set forth in the rejection of claim 4.
Regarding Claim 13, it is a method claim corresponding to system claim 6 above and, therefore, is rejected for the same reasons set forth in the rejection of claim 6.
Regarding Claim 16, it is a computer-readable medium claim corresponding to system claim 2 above and, therefore, is rejected for the same reasons set forth in the rejection of claim 2.
Regarding Claim 17, it is a computer-readable medium claim corresponding to system claim 3 above and, therefore, is rejected for the same reasons set forth in the rejection of claim 3.
Regarding Claim 19, it is a computer-readable medium claim corresponding to system claim 6 above and, therefore, is rejected for the same reasons set forth in the rejection of claim 6.
Claims 7, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kumar (US 20220091968) in view of Kossatchev (US 6698012 B1), and further in view of Chang (US 8924938 B2).
Regarding Claim 7, Kumar and Kossatchev teach the system of claim 1. Kumar teaches wherein the at least one processor is further configured to: use feedback from the results of the execution of the software program using the test cases to better: (1) analyze the natural language text describing the software requirements to identify the relationships between the plurality of different software requirements of the software requirements; (2) establish the sequence for the plurality of different software requirements based on the relationships identified between the plurality of different software requirements of the software requirements; and (3) group the plurality of different software requirements together into the at least one logical group of software requirements based on the sequence for the plurality of different software requirements of the software requirements and the relationships between the plurality of different software requirements. ([0067]: "FIG. 4 a illustrates a change in mockup where the button text is changed from ‘Save’ ( FIG. 2 a ) to ‘Create’ ( FIG. 4 a ). With even a small change like a text change, prior art GUI automation tools may fail to identify the button and require updated test automation scripts to reflect the changes. FIG. 4 b illustrates a change in functional specification highlighting the addition of another attribute—“Owner” for the entity “Task”. For this same enhancement, FIG. 4 c illustrates a change in the mockup with the new input box for the owner of the task which is a mandatory text for this screen. With even a small change like a new mandatory GUI element on the screen, prior art manual and automated tests may fail to verify the functionality correctly.") Examiner Comments: Kumar uses feedback from changes (analogous to execution results) to improve analysis and generation.
Chang (US 8924938 B2) further teaches wherein the at least one processor is further configured to: use feedback from the results of the execution of the software program using the test cases to better: (1) analyze the natural language text describing the software requirements to identify the relationships between the plurality of different software requirements of the software requirements; (2) establish the sequence for the plurality of different software requirements based on the relationships identified between the plurality of different software requirements of the software requirements; and (3) group the plurality of different software requirements together into the at least one logical group of software requirements based on the sequence for the plurality of different software requirements of the software requirements and the relationships between the plurality of different software requirements. (column 6, lines 28-38: "The simulator 106 enables control of certain aspects of the execution such as scheduling in the case of multi-threaded programs and the timing in case of real time systems through the test case generation process. As a result, the simulator 106 may need further inputs from the test generator 108 or the user to properly execute a program. The simulator 106 output 110 at the end of each iteration is joined to the earlier output data pool to be fed into a learning system 113 .") Examiner Comments: Chang uses feedback from execution results to improve learning for analysis, sequencing, and grouping in test generation.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Kumar's and Kossatchev's teachings with Chang's by modifying the combination to incorporate Chang's feedback loop, which applies machine learning to test results to improve analysis and generation, as this would enhance defect detection by iteratively discovering rare behaviors in the software based on execution results (Chang, column 3, lines 1-11: "The learning used in the framework directs the test case generation by capturing the essence of what has been observed so far in the testing process. As a result, the learning and the test case generator work in a feedback loop that isolates behaviors that may not surface otherwise in other approaches to testing.").
Regarding Claim 14, it is a method claim corresponding to system claim 7 above and, therefore, is rejected for the same reasons set forth in the rejection of claim 7.
Regarding Claim 20, it is a computer-readable medium claim corresponding to system claim 7 above and, therefore, is rejected for the same reasons set forth in the rejection of claim 7.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMIR SOLTANZADEH whose telephone number is (571)272-3451. The examiner can normally be reached M-F, 9am - 5pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wei Mui can be reached at (571) 272-3708. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AMIR SOLTANZADEH/Examiner, Art Unit 2191
/WEI Y MUI/Supervisory Patent Examiner, Art Unit 2191