DETAILED ACTION
Remarks
Applicant presents a communication dated 12 December 2025 in response to the 1 October 2025 non-final rejection (the “Previous Action”).
Claims 1-2, 6-7, 10-12, 14 and 17-19 are amended. Applicant also amends paragraph [0016] of the specification.
Claims 1-20 are pending. Claims 1, 12 and 19 are the independent claims.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
37 C.F.R. § 1.121
Applicant’s claim listing is not compliant with 37 C.F.R. § 1.121, which requires that all added subject matter be shown via underlining. In particular, Applicant appears to have added “on the software under test” to claim 10 without underlining that text.
The claims are nonetheless examined in the interests of compact prosecution.
Examiner Notes
Examiner cites particular columns, paragraphs, figures and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Response to Arguments
Applicant argues that the claim amendments overcome the § 101 rejections. (Remarks, p. 10, pars. 3-4).
Examiner respectfully disagrees for the reasons set forth in the rejections below. Applicant does not provide any analysis to the contrary.
Applicant argues that the cited references do not disclose performing the sequence of actions on the software under test or providing a result of performing the sequence of actions on the software under test. (Remarks, p. 11).
Examiner again respectfully disagrees for the reasons set forth in the rejections below. Applicant again does not provide any analysis to the contrary.
Applicant’s remaining arguments are moot in view of the withdrawn objections, withdrawn rejections and new ground(s) of rejection necessitated by Applicant’s amendments.
Specification
The Previous Action’s objection to the specification is withdrawn in view of Applicant’s specification amendments.
Claim Interpretation
The Previous Action’s interpretation of certain claim limitations in accordance with 35 U.S.C. § 112(f) is withdrawn in view of Applicant’s claim amendments.
Claim Rejections - 35 USC § 112
The Previous Action’s § 112 rejections are withdrawn in view of Applicant’s claim amendments.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-6, 8-10, 12-17 and 19-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more.
As to claim 1, the claim recites:
a method, comprising:
receiving natural language content describing interactions with software under test;
identifying natural language instructions in the natural language content based on the interactions;
converting the natural language instructions into programming language-specific instructions that, when executed by a software testing tool, perform testing of the software under test, where the programming language-specific instructions are formatted in a programming language-specific language of the software testing tool and include key-value pairs comprising operative information for performing user interface element interactions for the software under test; and
storing the programming language-specific instructions as a sequence of actions for testing the software under test;
performing the sequence of actions on the software under test; and
providing a result of performing the sequence of actions on the software under test.
Under the broadest reasonable interpretation in light of the specification, the above underlined elements recite a mental process because the elements are performable by the human mind with the aid of pen and paper. For example, the human mind is capable of converting natural language instructions written on paper into programming language-specific instructions also written on paper. The claim therefore recites an abstract idea.
None of the additional elements integrate the judicial exception into a practical application.
The “storing…”, “performing…” and “providing…” steps are insignificant post-solution activity at least because they only appear to be nominal or tangential additions to the claim. See M.P.E.P. § 2106.05(g). Note that performing the sequence of actions as claimed does not necessarily entail executing any programming language instructions.
Looking at the claim limitations as an ordered combination yields the same conclusion as that reached when looking at the elements individually. Their collective function is merely to apply the abstract idea on a generic computer along with insignificant post-solution activity.
The claim does not include additional elements that amount to significantly more than the judicial exception either, for substantially the same reasons discussed above with respect to a practical application. Note that reevaluation of the extra-solution activity per step 2B does not indicate that these elements are anything more than what is well-understood, routine and conventional in the field. With regard to the “storing…” step, courts have recognized that electronic recordkeeping and storing information in memory are well-understood, routine and conventional. See M.P.E.P. § 2106.05(d). With regard to the “performing…” and “providing…” steps, performing actions against software and providing a result of those actions is well-understood, routine and conventional per at least Juviler, “What is GUI? Graphical User Interfaces, Explained”, already of record.
As to claims 2-6, the features of this claim do not add any additional elements integrating the abstract idea into a practical application or amounting to significantly more because generating the recited prompt only describes the abstract idea itself and because inputting a prompt to an artificial intelligence model and receiving output only amounts to using a generic computer to implement the abstract idea. Reference to an artificial intelligence model external to the software testing tool in claim 6 also only limits the abstract idea to a particular technological environment or field of use. See M.P.E.P. § 2106.05(h).
As to claim 8, the features of this claim do not add any additional elements integrating the abstract idea into a practical application or amounting to significantly more because they only further describe the abstract idea itself.
As to claim 9, the features of this claim do not add any additional elements integrating the abstract idea into a practical application or amounting to significantly more at least because the addition of a messaging interface merely amounts to implementing the abstract idea on a generic computer.
As to claim 10, the features of this claim do not add any additional elements integrating the abstract idea into a practical application or amounting to significantly more at least because these features only appear to be nominal or tangential additions to the claimed invention and because courts have recognized that electronic recordkeeping and storing information in memory are well-understood, routine and conventional.
As to claim 12, the claim recites an abstract idea without integrating the abstract idea into a practical application or amounting to significantly more for the same reasons as claim 1 and because the addition of “memory storing instructions that, when executed, cause the computing system to perform” the operations amounts to nothing more than implementing the abstract idea on a generic computer.
As to claim 13, the features of this claim do not add any additional elements integrating the abstract idea into a practical application or amounting to significantly more because they only further describe the abstract idea itself.
As to claims 14-17, the features of these claims do not add any additional elements integrating the abstract idea into a practical application or amounting to significantly more because generating the recited information and converting of instructions only describes the abstract idea itself and because inputting information to an artificial intelligence model and receiving output only amounts to using a generic computer component to implement the abstract idea.
As to claim 19, the claim recites an abstract idea without integrating the abstract idea into a practical application or amounting to significantly more for the same reasons as claim 1 and because the addition of a “testing tool”, “processing system” and “memory storing instructions that, when executed, cause the software testing tool to” perform the operations amounts to nothing more than implementing the abstract idea on a generic computer.
As to claim 20, the features of this claim do not add any additional elements integrating the abstract idea into a practical application or amounting to significantly more because they only further describe the abstract idea itself. Note that the “training” is only described at a high-level of generality and the claim does not actually require training anything or recite how that training is performed.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 8-9, 12-15, 17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Deakin (US 2024/0411673) (art of record – hereinafter Deakin) in view of Osenkov et al. (US 2011/0258600) (art of record – hereinafter Osenkov).
As to claim 1, Deakin discloses a method, comprising:
receiving natural language content describing interactions with software under test; (e.g., Deakin, par. [0040]: the AI could be instructed to: “Create some test scenarios, both positive and negative for this data for a Java function that takes an ICAO and/or IATA code and returns an airline name, and outputs a table of inputs and expected outputs for such a function”; par. [0059]: the AI may then be instructed to restate these scenarios by the instruction “Restate the earlier test scenarios as GIVEN…WHEN…THEN…statements”)
identifying natural language instructions in the natural language content based on the interactions; (e.g., Deakin, par. [0059]: the AI may output the following in response: [0061] 1. GIVEN that the IATA/ICAO code is “BAB”, WHEN the function ‘getAirlineName’ is invoked with this code, THEN it should return “British Airways.” [0062] 2. GIVEN that the IATA/ICAO code is “LSE”, WHEN the function ‘getAirlineName’ is invoked with this code, THEN it should return “Jet2.com” [1 and 2 each being natural language instructions])
converting the natural language instructions into programming language-specific instructions that, when executed by a software testing tool, perform testing of the software under test, where the programming language-specific instructions are formatted in a programming language-specific language of the software testing tool; (e.g., Deakin, par. [0068]: the AI may be instructed to implement data-driven test cases that use the previously generated data to test the function previously described using the following input: “In Java code and using the Junit framework [software testing tool], write unit tests for the above scenarios that provide the inputs to the function and verify the return string is what is expected”. In response, the AI may return source code implementations [programming language-specific instructions] of the test cases it previously described) and
storing the programming language-specific instructions as a sequence of actions for testing the software under test (e.g., Deakin, par. [0068]: the AI may return source code implementations of the test case. An example is shown below [and see code in paragraphs [0069-0082], the code is a sequence of actions. Generated code is also necessarily stored]);
performing the sequence of actions on the software under test; (e.g., Deakin, par. [0127]: to generate test code for executing the scenarios [and the test code comprises a sequence of actions, see above]; par. [0010]: testing the implementation source code [software under test] against the generated scenarios) and
providing a result of performing the sequence of actions on the software under test (e.g., Deakin, par. [0010]: a verification module for testing the implementation source code [software under test] against the generated scenarios [sequence of actions, see above]; claim 13: providing feedback to the Generative AI module based on the results output by the verification module)
Deakin does not explicitly disclose wherein the programming language-specific instructions include key-value pairs comprising operative information for performing user interface element interactions for the software under test.
However, in an analogous art, Osenkov discloses:
wherein the programming language-specific instructions include key-value pairs comprising operative information for performing user interface element interactions for the software under test (e.g., Osenkov, Fig. 2a and associated text, par. [0028]: the test case TestScenario is written in the declarative, domain specific, non-traditional programming language XML, though it will be appreciated that test case 200 can be written in any declarative, domain specific language, non-traditional programming language such as Json, Python. ButtonName is a parameter and refers to an attribute (name) of a button [user interface] to click [interact with. And see code, it includes key-value pairs at least because it includes pairs such as “ButtonName=’2’” and “ButtonName=’+’” (ButtonName being a key, 2 or + being a value)]; par. [0029]: a test engine such as an interpreter can load the test case; par. [0026]: a script executing engine such as interpreter 106 can receive a test and execute the actions of the test; par. [0022]: tests described herein can be used to test software at the user interface level).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the programming language-specific instructions to include key-value pairs comprising operative information for performing user interface element interactions for the software under test, as taught by Osenkov, as Osenkov would provide the advantages of a means of performing user interface testing and a means of providing test instructions that are easy to understand and maintain (see Osenkov, pars. [0022] and [0017]).
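By way of illustration only (the following sketch is the examiner's own hypothetical and is not quoted from, nor attributed to, any cited reference), key-value pairs conveying operative information for user interface element interactions might resemble:

```python
# Hypothetical illustration of "key-value pairs comprising operative
# information for performing user interface element interactions."
# The field names ("action", "ButtonName") loosely track Osenkov's
# ButtonName example but are otherwise the examiner's own invention.
step = {"action": "click", "ButtonName": "2"}

# A stored sequence of such steps, executable in order by a testing tool:
sequence = [
    {"action": "click", "ButtonName": "2"},
    {"action": "click", "ButtonName": "+"},
    {"action": "click", "ButtonName": "3"},
]

# Each key identifies an operative field; each value supplies its datum.
keys = sorted(step.keys())
print(keys)  # ['ButtonName', 'action']
```

This sketch merely illustrates the key/value mapping discussed above (e.g., ButtonName as a key, “2” or “+” as a value); it does not limit the claim interpretation applied in this action.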
As to claim 2, Deakin/Osenkov discloses the method of claim 1 (see rejection of claim 1 above), wherein identifying the natural language instructions comprises:
generating a prompt including the natural language content and a request to identify actions that can be performed against the software under test; (e.g., Deakin, par. [0040]: the AI could be instructed to: “Create some test scenarios, both positive and negative for this data for a Java function that takes an ICAO and/or IATA code and returns an airline name, and outputs a table of inputs and expected outputs for such a function”; par. [0059]: the AI may then be instructed to restate these scenarios by the instruction “Restate the earlier test scenarios as GIVEN…WHEN…THEN…statements” [The instructions are the prompt and an instruction to restate the scenarios is a request to identify actions that can be performed against the software under test, note above that these statements include invoking a function with certain inputs multiple times]) and
providing the prompt as input to an artificial intelligence model (Deakin, claim 10: the generative AI module comprises a machine learning model [and see immediately above, the instructions are provided to the AI]).
As to claim 3, Deakin/Osenkov discloses the method of claim 2 (see rejection of claim 2 above), Deakin further discloses further comprising, including, in the prompt, at least one of:
examples of the actions that can be performed against the software under test; or
natural language metadata that describe the actions that can be performed against the software under test (e.g., Deakin, par. [0040]: the AI could be instructed to: “Create some test scenarios, both positive and negative for this data for a Java function that takes an ICAO and/or IATA code and returns an airline name, and outputs a table of inputs and expected outputs for such a function”; par. [0059]: the AI may then be instructed to restate these scenarios by the instruction “Restate the earlier test scenarios as GIVEN…WHEN…THEN…statements”)
As to claim 4, Deakin/Osenkov discloses the method of claim 2 (see rejection of claim 2 above), Deakin further discloses
wherein the prompt further includes a request to convert the identified natural language instructions into the programming language-specific instructions (e.g., Deakin, par. [0068]: the AI may be instructed to implement data-driven test cases that use the previously generated data to test the function previously described using the following input: “In Java code and using the Junit framework, write unit tests for the above scenarios that provide the inputs to the function and verify the return string is what is expected”. In response, the AI may return source code implementations [programming language-specific instructions] of the test cases it previously described).
As to claim 8, Deakin/Osenkov discloses the method of claim 1 (see rejection of claim 1 above), Deakin further discloses wherein receiving the natural language content comprises at least one of:
receiving a code update submission corresponding to a change made to source code of the software under test;
receiving a comment included in the source code;
receiving manually-authored instructions for testing the software under test;
a design document for the software under test; or
receiving a project management specification for the software under test (e.g., Deakin, par. [0010]: the system can include a user interface for receiving a natural language description [project management specification] of expected behaviors of machine-readable code functionality, a converter module configured to convert the description into test scenarios and a verification module for testing the source against the test scenarios)
As to claim 9, Deakin/Osenkov discloses the method of claim 1 (see rejection of claim 1 above), Deakin further discloses:
wherein receiving the natural language content comprises receiving a message via a messaging interface (e.g., Deakin, par. [0010]: the system can include a user interface for receiving a natural language description of expected behaviors of machine-readable code functionality)
As to claim 12, Deakin discloses:
a computing system, comprising: a processing system; (e.g., Deakin, Fig. 4 and associated text) and
memory storing instructions that, when executed, cause the computing system to perform operations (e.g., Deakin, Fig. 4 and associated text) comprising:
receiving natural language content from a natural language source describing testing a software under test; (e.g., Deakin, par. [0040]: the AI could be instructed to: “Create some test scenarios, both positive and negative for this data for a Java function that takes an ICAO and/or IATA code and returns an airline name, and outputs a table of inputs and expected outputs for such a function”; par. [0059]: the AI may then be instructed to restate these scenarios by the instruction “Restate the earlier test scenarios as GIVEN…WHEN…THEN…statements”)
identifying, in the natural language content, natural language instructions that describe actions that can be performed against the software under test; (e.g., Deakin, par. [0059]: the AI may output the following in response: [0061] 1. GIVEN that the IATA/ICAO code is “BAB”, WHEN the function ‘getAirlineName’ is invoked with this code, THEN it should return “British Airways.” [0062] 2. GIVEN that the IATA/ICAO code is “LSE”, WHEN the function ‘getAirlineName’ is invoked with this code, THEN it should return “Jet2.com” [1 and 2 each being natural language instructions])
converting the identified natural language instructions into programming language-specific instructions that a software testing tool can perform against the software under test in a sequence, wherein the programming language-specific instructions are formatted in a programming language of the software testing tool; (e.g., Deakin, par. [0068]: the AI may be instructed to implement data-driven test cases that use the previously generated data to test the function previously described using the following input: “In Java code and using the Junit framework [software testing tool], write unit tests for the above scenarios that provide the inputs to the function and verify the return string is what is expected”. In response, the AI may return source code implementations [programming language-specific instructions] of the test cases it previously described [and see the code in paragraphs [0069-0082], the code is a sequence of actions]) and
storing the programming language-specific instructions as an action sequence (e.g., Deakin, par. [0068]: the AI may return source code implementations of the test case. An example is shown below [and see code in paragraphs [0069-0082], the code is a sequence of actions. Generated code is also necessarily stored]);
performing the sequence of actions on the software under test; (e.g., Deakin, par. [0127]: to generate test code for executing the scenarios [and the test code comprises a sequence of actions, see above]; par. [0010]: testing the implementation source code [software under test] against the generated scenarios) and
providing a result of performing the sequence of actions on the software under test; (e.g., Deakin, par. [0010]: a verification module for testing the implementation source code [software under test] against the generated scenarios [sequence of actions, see above]; claim 13: providing feedback to the Generative AI module based on the results output by the verification module).
Deakin does not explicitly disclose wherein the programming language-specific instructions include key-value pairs comprising operative information for performing user interface element interactions for the software under test.
However, in an analogous art, Osenkov discloses:
wherein the programming language-specific instructions include key-value pairs comprising operative information for performing user interface element interactions for the software under test (e.g., Osenkov, Fig. 2a and associated text, par. [0028]: the test case TestScenario is written in the declarative, domain specific, non-traditional programming language XML, though it will be appreciated that test case 200 can be written in any declarative, domain specific language, non-traditional programming language such as Json, Python. ButtonName is a parameter and refers to an attribute (name) of a button [user interface] to click [interact with. And see code, it includes key-value pairs at least because it includes pairs such as “ButtonName=’2’” and “ButtonName=’+’” (ButtonName being a key, 2 or + being a value)]; par. [0029]: a test engine such as an interpreter can load the test case; par. [0026]: a script executing engine such as interpreter 106 can receive a test and execute the actions of the test; par. [0022]: tests described herein can be used to test software at the user interface level).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the programming language-specific instructions to include key-value pairs comprising operative information for performing user interface element interactions for the software under test, as taught by Osenkov, as Osenkov would provide the advantages of a means of performing user interface testing and a means of providing test instructions that are easy to understand and maintain (see Osenkov, pars. [0022] and [0017]).
As to claim 13, Deakin/Osenkov discloses the computing system of claim 12 (see rejection of claim 12 above), wherein the natural language source includes at least one of: a message; (e.g., Deakin, par. [0040]: the AI could be instructed to: “Create some test scenarios, both positive and negative for this data for a Java function that takes an ICAO and/or IATA code and returns an airline name, and outputs a table of inputs and expected outputs for such a function”; par. [0059]: the AI may then be instructed to restate these scenarios by the instruction “Restate the earlier test scenarios as GIVEN…WHEN…THEN…statements” [these instructions being a message or messages])
a code update submission corresponding to a change made to source code of the software under test;
a comment included in the source code;
manually-authored instructions for testing the software under test;
a project management specification for the software under test; or
a design document.
As to claim 14, Deakin/Osenkov discloses the computing system of claim 12 (see rejection of claim 12 above), Deakin further discloses wherein identifying the natural language instructions that describe actions that can be performed against the software under test comprises generating an input for an artificial intelligence model, wherein the input includes the natural language content and a request for the artificial intelligence model to identify the natural language instructions describing the actions that can be performed against the software under test (e.g., Deakin, par. [0040]: the AI [artificial intelligence model] could be instructed to: “Create some test scenarios, both positive and negative for this data for a Java function that takes an ICAO and/or IATA code and returns an airline name, and outputs a table of inputs and expected outputs for such a function”; par. [0059]: the AI [artificial intelligence model] may then be instructed to restate these scenarios by the instruction “Restate the earlier test scenarios as GIVEN…WHEN…THEN…statements” [The instructions are input and an instruction to restate the scenarios is a request to identify actions that can be performed against the software under test, note above that these statements include invoking a function with certain inputs multiple times]).
As to claim 15, Deakin/Osenkov discloses the computing system of claim 14 (see rejection of claim 14 above), Deakin further discloses wherein the input further comprises:
examples of the actions that can be performed against the software under test; (e.g., Deakin, par. [0040]: the AI could be instructed to: “Create some test scenarios, both positive and negative for this data for a Java function that takes an ICAO and/or IATA code and returns an airline name, and outputs a table of inputs and expected outputs for such a function” [taking an ICAO code being an action, taking an IATA code being an action]) and
natural language metadata that describe the actions (see above, the input instruction is in natural language and describes the actions).
As to claim 17, Deakin/Osenkov discloses the computing system of claim 14 (see rejection of claim 14 above), Deakin further discloses the operations further comprising:
providing the input to the artificial intelligence model; (e.g., Deakin, par. [0040]: the AI [artificial intelligence model] could be instructed to: “Create some test scenarios, both positive and negative for this data for a Java function that takes an ICAO and/or IATA code and returns an airline name, and outputs a table of inputs and expected outputs for such a function”; par. [0059]: the AI [artificial intelligence model] may then be instructed to restate these scenarios by the instruction “Restate the earlier test scenarios as GIVEN…WHEN…THEN…statements”)
in response to the input, receiving an output from the artificial intelligence model, wherein the output includes the identified natural language instructions; (e.g., Deakin, par. [0059]: the AI may output the following in response: [0061] 1. GIVEN that the IATA/ICAO code is “BAB”, WHEN the function ‘getAirlineName’ is invoked with this code, THEN it should return “British Airways.” [0062] 2. GIVEN that the IATA/ICAO code is “LSE”, WHEN the function ‘getAirlineName’ is invoked with this code, THEN it should return “Jet2.com” [1 and 2 each being natural language instructions]) and
converting the identified natural language instructions into the programming language-specific instructions (e.g., Deakin, par. [0068]: the AI may be instructed to implement data-driven test cases that use the previously generated data to test the function previously described using the following input: “In Java code and using the Junit framework, write unit tests for the above scenarios that provide the inputs to the function and verify the return string is what is expected”. In response, the AI may return source code implementations [programming language-specific instructions] of the test cases it previously described).
As to claim 19, it is a computing system claim whose limitations are a subset of those of claim 12. Those limitations are taught by or obvious over the prior art for substantially the same reasons.
Claims 5 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Deakin (US 2024/0411673) in view of Osenkov (US 2011/0258600) and further in view of Trummer (US 2024/028122) (art of record – hereinafter Trummer).
As to claim 5, Deakin/Osenkov discloses the method of claim 2 (see rejection of claim 2 above), but Deakin does not explicitly disclose wherein the prompt further includes examples of the programming language-specific instructions that correspond to the natural language instructions.
However, in an analogous art, Trummer discloses:
wherein the prompt further includes examples of the programming language-specific instructions that correspond to the natural language instructions (e.g., Trummer, par. [0056]: receiving natural language input in association with a database query; par. [0057]: to decompose the at least one database query into a sequence of steps formulated using natural language; par. [0058]: system 102 generates one or more prompts for application to the AI system 110 by interleaving the processing steps with user-provided instructions of the natural language input; par. [0059]: applying the prompts to the AI system 110 for generation of the database code therefrom; par. [0116]: at run time, a specified number of samples is randomly selected and included in the prompt; par. [0170]: those samples consist of the prompt previously provided as well as the code generated in response to that prompt; par. [0073]: embodiments are configured to generate database code in various types of programming languages).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the prompt of Deakin to include examples of the programming language-specific instructions that correspond to the natural language instructions, as taught by Trummer, as Trummer would provide the advantage of a means of increasing the probability of successfully generating code. (See Trummer, pars. [0115-0116]).
As to claim 16, Deakin/Osenkov discloses the computing system of claim 15 (see rejection of claim 15 above), and Deakin further discloses wherein the input further comprises:
a request to convert the identified natural language instructions into the programming language-specific instructions; (e.g., Deakin, par. [0068]: the AI may be instructed to implement data-driven test cases that use the previously generated data to test the function previously described using the following input: “In Java code and using the Junit framework, write unit tests for the above scenarios that provide the inputs to the function and verify the return string is what is expected”. In response, the AI may return source code implementations [programming language-specific instructions] of the test cases it previously described).
Deakin does not explicitly disclose wherein the input further comprises: example programming language-specific instructions that correspond to the natural language metadata.
However, in an analogous art, Trummer discloses wherein the input further comprises:
example programming language-specific instructions that correspond to the natural language metadata (e.g., Trummer, par. [0056]: receiving natural language input in association with a database query; par. [0057]: to decompose the at least one database query into a sequence of steps formulated using natural language; par. [0058]: system 102 generates one or more prompts for application to the AI system 110 by interleaving the processing steps with user-provided instructions of the natural language input; par. [0059]: applying the prompts to the AI system 110 for generation of the database code therefrom; par. [0116]: at run time, a specified number of samples is randomly selected and included in the prompt; par. [0170]: those samples consist of the prompt previously provided as well as the code generated in response to that prompt; par. [0073]: embodiments are configured to generate database code in various types of programming languages).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the prompt of Deakin to include example programming language-specific instructions that correspond to the natural language metadata as taught by Trummer, as Trummer would provide the advantage of a means of increasing the probability of successfully generating code. (See Trummer, pars. [0115-0116]).
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Deakin (US 2024/0411673) in view of Osenkov (US 2011/0258600) and further in view of Guttridge et al. (US 2025/0077682) (art of record – hereinafter Guttridge) and Hecking-Harbusch et al. (US 2025/0130928) (art of record – hereinafter Harbusch).
As to claim 6, Deakin/Osenkov discloses the method of claim 2 (see rejection of claim 2 above), and Deakin further discloses wherein:
generating the prompt comprises generating a first prompt; (e.g., Deakin, par. [0040]: the AI could be instructed to: “Create some test scenarios, both positive and negative for this data for a Java function that takes an ICAO and/or IATA code and returns an airline name, and outputs a table of inputs and expected outputs for such a function”; par. [0059]: the AI may then be instructed to restate these scenarios by the instruction “Restate the earlier test scenarios as GIVEN…WHEN…THEN…statements”) and
converting the identified natural language instructions into the programming language-specific instructions (see below) comprises:
providing the first prompt as a first input into a first artificial intelligence model (see above, the instructions are provided to the AI [artificial intelligence model])
in response to the first input, receiving a first output from the first artificial intelligence model, wherein the first output includes identified natural language instructions; (e.g., Deakin, par. [0059]: the AI may output the following in response: [0061] 1. GIVEN that the IATA/ICAO code is “BAB”, WHEN the function ‘getAirlineName’ is invoked with this code, THEN it should return “British Airways.” [0062] 2. GIVEN that the IATA/ICAO code is “LSE”, WHEN the function ‘getAirlineName’ is invoked with this code, THEN it should return “Jet2.com” [1 and 2 each being natural language instructions])
generating a second prompt; (e.g., Deakin, par. [0068]: the AI may be instructed to implement data-driven test cases that use the previously generated data to test the function previously described using the following input: “In Java code and using the Junit framework, write unit tests for the above scenarios that provide the inputs to the function and verify the return string is what is expected”. In response, the AI may return source code implementations)
including, in the second prompt, the identified natural language instructions and a request to convert the identified natural language instructions into the programming language-specific instructions; (see above, the instruction [prompt] to generate unit tests in Java code refers to the scenarios and instructs [requests] the AI to “write unit tests” for them).
Deakin does not explicitly disclose a first artificial intelligence model that is external to the software testing tool, providing the second prompt as a second input for a second artificial intelligence model; and in response to the second input, receiving a second output from the second artificial intelligence model, wherein the second output includes the programming language-specific instructions.
However, in an analogous art, Guttridge discloses:
a first artificial intelligence model that is external to the software testing tool; (e.g., Guttridge, Fig. 2 and associated text AI engine 222 may retrieve the model from the repository 223 and deploy the model within a live runtime environment; Fig. 5A and associated text [see figure, GenAI model 524 is external to testing software 522]; Fig. 1A and associated text [see figure, GenAI model 120 is external to test execution service 130 and frameworks 141-144]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the artificial intelligence model and software testing tool of Deakin such that the artificial intelligence model is external to the software testing tool, as taught by Guttridge, as Guttridge would provide the advantage of a means of utilizing different models or test frameworks. (See Guttridge, par. [0055]).
Further, in an analogous art, Harbusch discloses:
providing the second prompt as a second input for a second artificial intelligence model; (e.g., Harbusch, par. [0097]: machine learning model 30 and/or the further machine learning model 31 [second artificial intelligence model]; par. [0060]: the other prompt can be a natural language text. The further prompt 21[second prompt] can comprise or be a natural language instruction to the machine learning model 31)
in response to the second input, receiving a second output from the second artificial intelligence model, wherein the second output includes the programming language-specific instructions (e.g., Harbusch, par [0060]: the further prompt can comprise a linguistic instruction to the large language model (LLM) directed to generate one or more test codes for testing the code; par. [0050]: the test code can be written in the same programming language as the code).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the first artificial intelligence model receiving the second prompt and generating programming language-specific instructions of Deakin with a second artificial intelligence model receiving the second prompt to generate such instructions, as taught by Harbusch. Harbusch suggests the combination because Harbusch discloses that using a second machine learning model to generate the instructions is an alternative to using the first model. (See Harbusch, par. [0059]). Harbusch also shows that a second model can be used in place of a first and still successfully generate the instructions, i.e., the result would have been predictable. See M.P.E.P. § 2143(I)(B).
Claims 7, 10-11, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Deakin (US 2024/0411673) in view of Osenkov (US 2011/0258600) and further in view of Guttridge (US 2025/0077682).
As to claim 7, Deakin/Osenkov discloses the method of claim 1 (see rejection of claim 1 above) but Deakin does not explicitly disclose wherein an artificial intelligence model is trained on training data including recorded actions performed against the software under test, the training data comprising: respective programming language-specific instructions for each recorded action; and natural language metadata describing the recorded action.
However, in an analogous art, Guttridge discloses:
wherein an artificial intelligence model is trained on training data including recorded actions performed against the software under test, the training data comprising: respective programming language-specific instructions for each recorded action; and natural language metadata describing the recorded action (e.g., Guttridge, par. [0032]: creating tests from natural language descriptions; par. [0070]: a user may input a software test description, such as the requirements; par. [0082]: the requirements may include a description of activities to be performed to carry out the test [natural language metadata]; par. [0083]: the GenAI model 524 may generate a sequence of steps [recorded actions]; par. [0092]: the steps may be used to build an automation script; par. [0091]: a generative artificial intelligence (GenAI) model 624 capable of generating an automation script [programming language-specific instructions for each recorded action] for execution of the software test based on inputs; par. [0063]: the training process may use results that have already been generated/output by the GenAI model [i.e., the automation script and steps] in a live environment to retrain the model; par. [0058]: the log may include an identifier of the input [the natural language metadata above]. This information may be used to subsequently retrain the model).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the artificial intelligence model of Deakin such that it is trained on training data including recorded actions performed against the software under test, the training data comprising: respective programming language-specific instructions for each recorded action; and natural language metadata describing the recorded action, as taught by Guttridge, as Guttridge would provide the advantages of a means for the artificial intelligence model to learn correlations between text, test components and script components (see Guttridge, par. [0038]) and a means for the model to learn from test results. (See Guttridge, par. [0058]).
As to claim 10, Deakin/Osenkov discloses the method of claim 1 (see rejection of claim 1 above), but Deakin does not explicitly disclose further comprising in response to performing the sequence of actions on the software under test, receiving action telemetry data recorded about each action in the sequence of actions performed against the software under test, the action telemetry data including: respective programming language-specific instructions for each recorded action; and natural language metadata describing each recorded action.
However, in an analogous art, Guttridge discloses further comprising:
performing the sequence of actions against the software under test; (e.g., Guttridge, par. [0082]: activities to be performed to carry out the test; par. [0088]: automation script 564, which automates execution of the software test on the software program; par. [0094]: a step 642, a step 644, which are to be performed by the script [and those steps include actions, see figure]) and
in response to performing the sequence of actions on the software under test, receiving action telemetry data recorded about each action in the sequence of actions performed against the software under test, (see below) the action telemetry data including:
respective programming language-specific instructions for each recorded action; (e.g., Guttridge, par. [0048]: the GenAI model may generate automation scripts; par. [0129]: generating the automation script in the predefined programming language; par. [0110]: the testing software 822 may log the testing results in a log; par. [0058]: the output provided by the model. The log may include an identifier of the output. This information may be used to subsequently retrain the model; par. [0066]: the script 326 may input the additional training data sets into the GenAI model to continue to train the model [so the model receives the output as training data, and the output includes programming language-specific instructions that perform actions. This is in response to performing the actions, because the training data includes results of test execution (i.e., performing actions, see immediately above)]) and
natural language metadata describing each recorded action (e.g., Guttridge, par. [0070]: a user may input a software test description, such as the requirements; par. [0074]: For example, if the query is “Describe the Requirements of the Test” and the response is “The test should check to make sure that the 16-digit input field on the GUOI can only accept 16 digits” [i.e., the requirements are natural language and describe the actions of the test]; par. [0058]: the log may include an identifier of the input. This information may be used to subsequently retrain the model; par. [0066]: the script 326 may input the additional training data sets into the GenAI model to continue to train the model [so the model receives the input as training data, and the input includes natural language metadata describing the recorded action]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the artificial intelligence model of Deakin to include performing the sequence of actions against the software under test and in response to that performing, receiving action telemetry data recorded about each action in the sequence of actions performed against the software under test, the action telemetry data including respective programming language-specific instructions for each recorded action and natural language metadata describing each recorded action, as taught by Guttridge, as Guttridge would provide the advantage of a means of acquiring additional data for training the artificial intelligence model. (See Guttridge, par. [0058]).
As to claim 11, Deakin/Osenkov/Guttridge discloses the method of claim 10 (see rejection of claim 10 above), but Deakin does not explicitly disclose further comprising using the action telemetry data as training data for an artificial intelligence model.
However, in an analogous art, Guttridge discloses:
further comprising using the action telemetry data as training data for an artificial intelligence model (e.g., Guttridge, par. [0063]: the training process may use results that have already been generated by the GenAI model [i.e., the automation script noted above with respect to claim 10] in a live environment to retrain the model; par. [0058]: the log may include an identifier of the input [the natural language metadata noted above with respect to claim 10]. This information may be used to subsequently retrain the model).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the artificial intelligence model of Deakin to include using action telemetry data as training data for the artificial intelligence model, as taught by Guttridge, as Guttridge would provide the advantages of a means for the artificial intelligence model to learn correlations between text, test components and script components (see Guttridge, par. [0038]) and a means for the artificial intelligence model to learn from test results. (See Guttridge, par. [0069]).
As to claim 18, Deakin/Osenkov discloses the computing system of claim 12 (see rejection of claim 12 above), but does not explicitly disclose further comprising: receiving, in response to executing the programming language-specific instructions against the software under test, action telemetry data recorded for each action in the action sequence, the action telemetry data including: respective programming language-specific instructions for each recorded action; and natural language metadata describing each recorded action; and training an artificial intelligence model to identify the natural language instructions that describe the actions that can be performed against the software under test using training data including the action telemetry data.
However, in an analogous art, Guttridge discloses further comprising:
receiving, in response to executing the programming language-specific instructions against the software under test, action telemetry data recorded for each action in the action sequence, (e.g., Guttridge, par. [0082]: activities to be performed to carry out the test; par. [0137]: executing tests on a software application; par. [0058]: information may be added to the results of execution and stored within a log 22. The log may include an identifier of the input, an identifier of the output. This information may be used to subsequently retrain the model; par. [0063]: the training process may use results that have already been generated/output by the GenAI model in a live environment to retrain the model; par. [0066]: the script may input the training data sets into the GenAI model 322).
the action telemetry data including:
respective programming language-specific instructions for each recorded action; (e.g., Guttridge, par. [0092]: the steps may be used to build an automation script [programming language-specific instructions for each recorded action]; par. [0091]: a generative artificial intelligence (GenAI) model 624 capable of generating an automation script [i.e., the script is output of the model, which is used to retrain it as noted above]) and
natural language metadata describing each recorded action; (e.g., Guttridge, par. [0070]: a user may input a software test description, such as the requirements; par. [0082]: the requirements may include a description of activities to be performed to carry out the test [natural language metadata]; par. [0058]: the log may include an identifier of the input. This information may be used to subsequently retrain the model) and
training an artificial intelligence model to identify the natural language instructions that describe the actions that can be performed against the software under test using training data including the action telemetry data (e.g., Guttridge, par. [0066]: the script may input the training data sets [which includes the telemetry data, see above] into the GenAI model 322; Fig. 5B and associated text, par. [0083]: the GenAI model 524 may generate a sequence of steps, including steps 542, 544, 546 and 548 [see figure, the steps are actions performed against the software under test]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the artificial intelligence model of Deakin to include receiving, in response to executing the programming language-specific instructions against the software under test, action telemetry data recorded for each action in the action sequence, the action telemetry data including: respective programming language-specific instructions for each recorded action; and natural language metadata describing each recorded action; and training the artificial intelligence model to identify the natural language instructions that describe the actions that can be performed against the software under test using training data including the action telemetry data, as taught by Guttridge, as Guttridge would provide the advantages of a means for the artificial intelligence model to learn correlations between text, test components and script components (see Guttridge, par. [0038]) and a means for the artificial intelligence model to learn from test results. (See Guttridge, par. [0058]).
As to claim 20, Deakin/Osenkov discloses the software testing tool of claim 19 (see rejection of claim 19 above), but Deakin does not explicitly disclose wherein identifying the natural language instructions and converting the natural language instructions into the programming language-specific instructions is based on training data including recorded actions performed against the software under test, the training data comprising: respective programming language-specific instructions for each recorded action and natural language metadata describing each recorded action.
However, in an analogous art, Guttridge discloses:
wherein identifying the natural language instructions (e.g., Guttridge, Fig. 5B and associated text, par. [0083]: the GenAI model 524 may generate a sequence of steps, including steps 542, 544, 546 and 548 [see figure, the steps are natural language instructions]) and converting the natural language instructions into the programming language-specific instructions (e.g., Guttridge, par. [0092]: the steps may be used to build an automation script [converting the natural language instructions into the programming language-specific instructions]; par. [0091]: a generative artificial intelligence (GenAI) model 624 capable of generating an automation script) is based on training data including recorded actions performed against the software under test, the training data comprising: respective programming language-specific instructions for each recorded action (e.g., Guttridge, Fig. 5B and associated text, par. [0084]: the system inputs the steps into a new test 552. The host platform 520 may generate a document describing the steps [and see figure, the steps include actions. They are recorded because they are included in a test and described in the document]; par. [0063]: the training process may use results that have already been generated/output by the GenAI model in a live environment to retrain the model [and note immediately above that the GenAI model generates/outputs steps and the automation script, so the training data includes those elements]) and
natural language metadata describing each recorded action (e.g., Guttridge, par. [0070]: a user may input a software test description, such as the requirements; par. [0082]: the requirements may include a description of activities to be performed to carry out the test [natural language metadata describing each recorded action]; par. [0058]: the log may include an identifier of the input. This information may be used to subsequently retrain the model).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the identifying and converting natural language instructions of Deakin such that those operations are based on training data including recorded actions performed against the software under test, the training data comprising: respective programming language-specific instructions for each recorded action; and natural language metadata describing each recorded action, as taught by Guttridge, as Guttridge would provide the advantages of a means for the artificial intelligence model to learn correlations between text, test components and script components (see Guttridge, par. [0038]) and a means for the artificial intelligence model to learn from test results. (See Guttridge, par. [0058]).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TODD AGUILERA whose telephone number is (571)270-5186. The examiner can normally be reached M-F 11AM - 7:30PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hyung S Sough can be reached at (571)272-6799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TODD AGUILERA/Primary Examiner, Art Unit 2192