DETAILED ACTION
This Action is a response to the filing received 3 April 2024. Claims 1-10 are presented for examination.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 3 April 2024 is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. § 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 8 is rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because it is directed to software per se. Claim 8 recites “A machine learning model trained to …” A machine learning model is a software and/or data representation configured to process particular inputs in order to generate particular outputs. Accordingly, it is software and/or data per se, and is not a process, machine, manufacture, or composition of matter. Examiner recommends amending claim 8 to recite, for example, “A non-transitory computer-readable medium upon which a machine-learning model is stored, the machine-learning model configured to …” in order to direct the scope of the claim to statutory subject matter.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. § 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-5 and 7-10 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by Coppa et al., U.S. 2021/0011837 A1 (“Coppa”).
Regarding claim 1, Coppa teaches: A method for generating at least one new test case for a fuzzing software test (Coppa, e.g., ¶16, “provide a first test signal … determine … a detected response … provide a second test signal based on the detected response and the expected response …”), the method comprising the following steps:
providing at least one existing test case for the fuzzing software test, wherein the fuzzing software test is provided for testing at least one of a plurality of different forms of a test target (Coppa, e.g., ¶74, “providing a test signal to a UUT. The test signal can be provided by a fuzzer … can be configured to cause the UUT to perform abnormal behavior …”);
generating representation information based on the at least one existing test case and based on an effect of training test cases on a plurality of the different forms of the test target (Coppa, e.g., ¶77, “UUT can be monitored for feedback for a predetermined duration of time subsequent to the test signal being provided … feedback that can include direct observables … or indirect observables …” See also, e.g., ¶78, “determining a response from the feedback …” See also, e.g., ¶83, “applying the test signal to a response model to determine an expected response of the UUT to the test signal … to include at least one parameter of the UUT that the fuzzer monitors using the feedback received from the UUT …” See also, e.g., ¶84, “response model includes one or more machine learning models … trained to generate expected (or predicted) responses of UUTs to test signals … using training data that includes a plurality of test signals each associated with one or more respective parameters … parameters regarding UUTs …” Examiner’s note: the representation information includes both the parameters or other feedback obtained in response to applying the test signal and the expected response obtained for that test signal, the expected response being produced by a response model trained with training test cases on a plurality of UUTs); and
generating the at least one new test case for the fuzzing software test based on the representation information (Coppa, e.g., ¶85, “response model is updated by providing training data to the response model that includes the test signal and the corresponding feedback … providing input that includes one or more test signals and corresponding known responses … modifying the response model to reduce the difference between the model output and the known responses …” See also, e.g., ¶87, “test signal can be updated (e.g., a new test signal is generated) based on a difference between the detected response and the determined response …” See also, e.g., ¶88, “In some embodiments, the test signal is generated to identify corner cases, such as by increasing an amount of randomly generated data of the test signal …”).
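Examiner’s note: purely as an illustrative aid, and not as a characterization of Coppa’s actual implementation, the feedback-driven loop Coppa describes (apply a test signal, observe feedback, compare against the response model’s expected response, and generate a new test signal based on the difference, ¶¶83-88) may be sketched as follows; all function and variable names are Examiner’s hypothetical constructs:

```python
import random

def expected_response(model, signal):
    """Hypothetical response model: predicts feedback for a test signal."""
    return sum(model.get(b, 0) for b in signal)

def run_uut(signal):
    """Stand-in for the unit under test; returns observed feedback."""
    return sum(b % 7 for b in signal)

def fuzz_step(model, signal):
    """One iteration of the feedback loop sketched from Coppa ¶¶83-88."""
    observed = run_uut(signal)
    predicted = expected_response(model, signal)
    # update the response model toward the observed feedback (cf. ¶85)
    for b in signal:
        model[b] = model.get(b, 0) + 0.1 * (observed - predicted) / max(len(signal), 1)
    # generate a new test signal based on the difference (cf. ¶¶87-88):
    # a larger surprise triggers more random mutation to probe corner cases
    surprise = abs(observed - predicted)
    new_signal = [
        random.randrange(256) if random.random() < min(0.5, surprise / 100.0) else b
        for b in signal
    ]
    return new_signal, surprise

random.seed(0)
model, signal = {}, [1, 2, 3, 4]
signal, surprise = fuzz_step(model, signal)
```

In this sketch, the difference between observed and expected feedback (“surprise”) controls the amount of randomly generated data injected into the next test signal, analogous to the corner-case probing of ¶88.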
Claims 9 and 10 are rejected for the reasons given in the rejection of claim 1 above. Examiner notes that with respect to claim 9, Coppa further teaches: A device for data processing, the device for generating at least one new test case for a fuzzing software test (Coppa, e.g., ¶16, “provide a first test signal … determine … a detected response … provide a second test signal based on the detected response and the expected response …” See also, e.g., ¶20, “fuzzer 104 can include a processing circuit 108 … includes a processor 112 and memory 116 …”), the device being configured to perform the following steps: [[[the steps of the method of claim 1]]]; and with respect to claim 10, Coppa further teaches: A non-transitory machine-readable storage medium on which are stored commands for generating at least one new test case for a fuzzing software test, the commands, when executed by a computer (Coppa, e.g., ¶16, “provide a first test signal … determine … a detected response … provide a second test signal based on the detected response and the expected response …” See also, e.g., ¶20, “fuzzer 104 can include a processing circuit 108 … includes a processor 112 and memory 116 … memory 116 may be or include volatile memory or non-volatile memory and may include … code components …”), causing the computer to perform the following steps: [[[the steps of the method of claim 1]]].
Regarding claim 2, the rejection of claim 1 is incorporated, and Coppa further teaches: wherein the at least one new test case is generated based on the at least one existing test case and of the representation information, by a model being applied to generate the representation information, wherein the model is trained based on a prediction of the effect (Coppa, e.g., ¶85, “response model is updated by providing training data to the response model that includes the test signal and the corresponding feedback … providing input that includes one or more test signals and corresponding known responses … modifying the response model to reduce the difference between the model output and the known responses …” See also, e.g., ¶87, “test signal can be updated (e.g., a new test signal is generated) based on a difference between the detected response and the determined response …” See also, e.g., ¶88, “In some embodiments, the test signal is generated to identify corner cases, such as by increasing an amount of randomly generated data of the test signal …” Examiner’s note: the new test signal is generated based on the first test signal, and the representation information as described in the rejection of claim 1, the model being an evolutionary or genetic model (see ¶89), the model being trained based on predictions of the effect as produced by test signals used as training or live data).
Regarding claim 3, the rejection of claim 1 is incorporated, and Coppa further teaches: wherein the effect results from a fitness function and/or a performance metric, which quantifies a success of the training test cases, wherein the effect is a code coverage at the test target (Coppa, e.g., ¶93, “In some embodiments, a coverage metric, such as a metric of code coverage, can be determined using the feedback … mapping the test signals provided to the UUT to the response behaviors performed by the UUT measured during the feedback … evaluate how great a range of (potential) values of these parameters that can be induced in the operation of the UUT using the test signals.” See also, e.g., ¶32, “fuzzing system 200 can use the response model 204 and test signal generator 208 to perform such actions as … evaluating code coverage.” See also, e.g., ¶¶56-57, “map feedback 220 to a value or range of values of the coverage metric … determine how much of a latency space has been covered … generate the test signal 216 to increase a total coverage … compare the coverage metric to one or more threshold coverage metrics …”).
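Examiner’s note: as an illustrative aid only, a coverage-based fitness function of the kind described in Coppa ¶93 and ¶¶56-57 (mapping feedback to a coverage metric and quantifying how much of the behavior space a test case exercises) may be sketched as follows; the names are hypothetical and do not appear in Coppa:

```python
def coverage_metric(covered_edges, total_edges):
    """Fraction of code edges exercised by the feedback (cf. Coppa ¶93)."""
    return len(covered_edges) / total_edges if total_edges else 0.0

def fitness(test_case_feedback, total_edges):
    """Fitness function quantifying a test case's success as its coverage."""
    return coverage_metric(test_case_feedback, total_edges)

# two hypothetical test cases: the second exercises more distinct edges
baseline = fitness({1, 2}, 10)
improved = fitness({1, 2, 3, 4}, 10)
```

A fuzzer of the kind described would retain the test case with the higher fitness value, and could compare the metric against a threshold coverage metric as in ¶57.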
Regarding claim 4, the rejection of claim 1 is incorporated, and Coppa further teaches: wherein the existing test case is implemented as a seed, and the at least one new test case is generated based on the representation information by mutations of the seed being ascertained using the representation information (Coppa, e.g., ¶87, “test signal can be updated (e.g., a new test signal is generated) based on a difference between the detected response and the determined response … one or more parameters of the detected response can be compared to one or more corresponding parameters of the expected response, and the test signal can be generated based on the difference … a new test signal can be generated with different characteristics.” See also, e.g., ¶¶88-89, “In some embodiments, the test signal is generated to decrease a difference between the expected response and the detected response, such as if the expected response is a desired response … [or] to identify corner cases, such as by increasing an amount of randomly generated data of the test signal … genetic algorithm can be applied to one or more characteristics of the test signal (e.g., an amount of randomness …”).
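Examiner’s note: as an illustrative aid only, seed mutation guided by representation information may be sketched as follows; the per-position weights standing in for the representation information are Examiner’s hypothetical construct, not Coppa’s implementation:

```python
import random

def mutate_seed(seed, representation_info, rng):
    """Mutate a seed test case. representation_info (hypothetical) gives a
    per-position weight indicating which bytes are worth mutating."""
    out = bytearray(seed)
    for i, weight in enumerate(representation_info):
        if rng.random() < weight:        # bias mutation toward informative bytes
            out[i] = rng.randrange(256)  # random byte replacement
    return bytes(out)

rng = random.Random(42)
seed = b"\x00\x01\x02\x03"
weights = [0.0, 0.0, 1.0, 1.0]           # only positions 2 and 3 may mutate
mutant = mutate_seed(seed, weights, rng)
```

The weighting plays the role of the comparison between detected and expected responses in ¶87: positions where the responses diverge receive more mutation, while the rest of the seed is preserved.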
Regarding claim 5, the rejection of claim 1 is incorporated, and Coppa further teaches: wherein the different forms of the test target include different target programs and/or different versions of a target program, which have an identical input format for an input resulting from the test cases (Coppa, e.g., ¶42, “CNN 304 can be trained using training data that includes at least one of … feedback 220 received from the electronic device 128, to generate a message format … can include known or expected message format features mapped to respective messages, enabling the CNN 304 to identify relationships between the identifiable features of the messages (e.g., input to the CNN 304) and the message format (e.g., output). The CNN 304 can provide the message format to the fuzzer 104, such as to enable the test signal generator 208 of the fuzzer 104 to generate the test signal 216 in accordance with the message format …” See also, e.g., ¶34, “machine learning models can be trained using training data that includes a plurality of test signals 216 each associated with one or more respective parameters … various parameters regarding electronic devices 128 … determined based on empirical measurement of parameters corresponding to specific electronic devices 128 responsive to receiving test signals 216.”).
Regarding claim 7, Coppa teaches: A training method for training a machine-learning model for generating at least one new test case for a fuzzing software test (Coppa, e.g., ¶16, “provide a first test signal … determine … a detected response … provide a second test signal based on the detected response and the expected response …” See also, e.g., ¶42, “CNN 304 can be trained using training data that includes at least one of … feedback 220 received from the electronic device 128, to generate a message format …”), comprising the following steps:
providing training test cases; providing different forms of a test target; training the machine-learning model for outputting representation information and for predicting an effect of the training test cases on the different forms of the test target (Coppa, e.g., ¶42, “CNN 304 can be trained using training data that includes at least one of … feedback 220 received from the electronic device 128, to generate a message format … can include known or expected message format features mapped to respective messages, enabling the CNN 304 to identify relationships between the identifiable features of the messages (e.g., input to the CNN 304) and the message format (e.g., output). The CNN 304 can provide the message format to the fuzzer 104, such as to enable the test signal generator 208 of the fuzzer 104 to generate the test signal 216 in accordance with the message format …” See also, e.g., ¶34, “machine learning models can be trained using training data that includes a plurality of test signals 216 each associated with one or more respective parameters … various parameters regarding electronic devices 128 … determined based on empirical measurement of parameters corresponding to specific electronic devices 128 responsive to receiving test signals 216.”),
wherein the prediction is performed on the basis of the output representation information (Coppa, e.g., ¶77, “UUT can be monitored for feedback for a predetermined duration of time subsequent to the test signal being provided … feedback that can include direct observables … or indirect observables …” See also, e.g., ¶78, “determining a response from the feedback …” See also, e.g., ¶83, “applying the test signal to a response model to determine an expected response of the UUT to the test signal … to include at least one parameter of the UUT that the fuzzer monitors using the feedback received from the UUT …” See also, e.g., ¶84, “response model includes one or more machine learning models … trained to generate expected (or predicted) responses of UUTs to test signals … using training data that includes a plurality of test signals each associated with one or more respective parameters … parameters regarding UUTs …” Examiner’s note: the representation information includes both the parameters or other feedback obtained in response to applying the test signal and the expected response obtained for that test signal, the expected response being produced by a response model trained with training test cases on a plurality of UUTs); and
providing the trained machine-learning model for use in generating the at least one new test case (Coppa, e.g., ¶85, “response model is updated by providing training data to the response model that includes the test signal and the corresponding feedback … providing input that includes one or more test signals and corresponding known responses … modifying the response model to reduce the difference between the model output and the known responses …” See also, e.g., ¶87, “test signal can be updated (e.g., a new test signal is generated) based on a difference between the detected response and the determined response …” See also, e.g., ¶88, “In some embodiments, the test signal is generated to identify corner cases, such as by increasing an amount of randomly generated data of the test signal …”).
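Examiner’s note: as an illustrative aid only, the model update Coppa describes in ¶85 (“modifying the response model to reduce the difference between the model output and the known responses”) may be sketched as a simple gradient update on a one-parameter linear response model; this is Examiner’s hypothetical sketch, not Coppa’s implementation:

```python
def train_response_model(training_pairs, lr=0.01, epochs=200):
    """Fit w so that the predicted response w * signal approximates the known
    responses, reducing the model-output error as described in Coppa ¶85."""
    w = 0.0
    for _ in range(epochs):
        for signal, known in training_pairs:
            predicted = w * signal
            w += lr * (known - predicted) * signal  # step toward known response
    return w

# hypothetical training data: the response is twice the signal magnitude
pairs = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train_response_model(pairs)
```

Here the training pairs play the role of test signals with known responses; after training, the model can be provided to the test signal generator to produce expected responses for new test cases.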
Claim 8 is rejected for the reasons given in the rejection of claim 7 above. Examiner notes that with respect to claim 8, Coppa further teaches: A machine-learning model configured to generate at least one new test case of a fuzzing software test (Coppa, e.g., ¶85, “response model is updated by providing training data to the response model that includes the test signal and the corresponding feedback … providing input that includes one or more test signals and corresponding known responses … modifying the response model to reduce the difference between the model output and the known responses …” See also, e.g., ¶87, “test signal can be updated (e.g., a new test signal is generated) based on a difference between the detected response and the determined response …”), the machine-learning model being trained by: [[[the steps of the method of claim 7]]].
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. § 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 6 is rejected under 35 U.S.C. § 103 as being unpatentable over Coppa in view of Weyrich et al., U.S. 2021/0182707 A1 (“Weyrich”).
Regarding claim 6, the rejection of claim 1 is incorporated, but Coppa does not more particularly teach that the new test case generated is executed by the fuzzing software test for testing the at least one form of the test target which includes a program or embedded system for controlling an at least partially autonomous robot. However, Weyrich does teach: wherein the new test case generated is executed by the fuzzing software test for testing the at least one form of the test target, wherein the at least one form of the test target includes a program and/or an embedded system for controlling an at least partially autonomous robot (Weyrich, e.g., ¶7, “apparatus may be … robot, in particular a mobile robot, e.g., an at least partially autonomous mobile or stationary robot …” See also, e.g., ¶39, “Test cases for the training are for example determined depending on … a fuzzy set model …” and ¶70, “selecting arrangement 106 may be adapted to determine the plurality of test cases depending on input from the … fuzzy set model …”) for the purpose of generating diverse test cases to test controllable units or targets in a partially or fully autonomous robot environment (Weyrich, e.g., ¶¶7, 39, 70).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method for generating test cases using fuzzing methods as taught by Coppa so that the new test case generated is executed by the fuzzing software test for testing the at least one form of the test target, which includes a program and/or an embedded system for controlling an at least partially autonomous robot, because the disclosure of Weyrich shows that it was known to those of ordinary skill in the pertinent art to apply generated test cases to controllers for at least partially autonomous robots, for the purpose of generating diverse test cases to test controllable units or targets in a partially or fully autonomous robot environment (Weyrich, Id.).
Conclusion
Examiner has identified particular references contained in the prior art of record within the body of this action for the convenience of Applicant. Although the citations made are representative of the teachings in the art and are applied to the specific limitations within the enumerated claims, the teaching of the cited art as a whole is not limited to the cited passages. Other passages and figures may apply. Applicant, in preparing the response, should consider fully the entire reference as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art and/or disclosed by Examiner.
Examiner respectfully requests that, in response to this Office Action, support be shown for language added to any original claims on amendment and any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist Examiner in prosecuting the application.
When responding to this Office Action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 C.F.R. 1.111(c).
Examiner interviews are available via telephone and video conferencing using a USPTO-supplied web-based collaboration tool. Applicant is encouraged to submit an Automated Interview Request (AIR) which may be done via https://www.uspto.gov/patent/uspto-automated-interview-request-air-form, or may contact Examiner directly via the methods below.
Any inquiry concerning this communication or earlier communications from Examiner should be directed to Andrew M. Lyons, whose telephone number is (571) 270-3529 and whose fax number is (571) 270-4529. Examiner can normally be reached Monday to Friday from 10:00 AM to 6:00 PM ET. If attempts to reach Examiner by telephone are unsuccessful, Examiner’s supervisor, Wei Mui, can be reached at (571) 272-3708. Information regarding the status of an application may be obtained from the Patent Center system. For more information about the Patent Center system, see https://www.uspto.gov/patents/apply/patent-center. If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call (800) 786-9199 (in USA or Canada) or (571) 272-1000.
/Andrew M. Lyons/Examiner, Art Unit 2191