DETAILED ACTION
Claims 1-10 are currently pending in Application No. 18/442,933, filed 02/15/2024, naming Safouane Sfar as inventor and Robert Bosch GmbH as applicant.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4 and 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over US 2024/0338273 A1 to Philip et al. (herein Philip).
Claim 1
Philip shows a method for automated analysis of software tests of software (Philip: [0002], “software analysis”; [0004], comparing software crashes to reference crashes from a proxy crash generated by test cases), comprising the following steps:
ascertaining an error log about an incorrect execution of the software (Philip: [0004], “… receiving a crash signature …”), wherein the error log specifies an execution context of the incorrect execution (Philip: [0004], “… and a crash configuration …”);
ascertaining test logs (Philip: [0004], “… plurality of references … The reference includes a reference crash signature …”) that result from a performance of the software tests of the software that preceded the incorrect execution of the software (Philip: [0004], “… The proxy crash was generated prior to the software crash by executing a modified test case …”), wherein the software tests include a plurality of existing test cases (Philip: figure 3, element 302; [0053], “… The proxy crash device … analyzes one or more test cases … to determine references …”), through which various functions of the software are tested (Philip: [0053], “… The test cases … may be a part of a suite of quality assurance tests … The test cases … may be executed against a software application to determine bugs or errors within the software application …”), wherein the test logs specify a respective execution context of the existing test cases (Philip: [0004], “The reference includes … a reference configuration …”);
carrying out an evaluation of the test logs based on the error log (Philip: [0004], “… to determine a reference of a plurality of references that is closest to the crash signature and the crash configuration …”), wherein the evaluation takes place based on a similarity of an execution context of the incorrect execution to the respective execution context of the existing test cases (Philip: [0004], “… to determine a reference of a plurality of references that is closest to the crash signature and the crash configuration …”), wherein the evaluation takes place at least partially based on machine learning (Philip: [0004], “… to determine a reference of a plurality of references that is closest to the crash signature and the crash configuration …”);
wherein, based on the evaluation, a new test case that is suitable for reproducing the incorrect execution is generated, wherein the generation of the new test case takes place by a machine learning model trained for this purpose (Philip: [0049] shows straightforward generation of a test case; and [0005]-[0007] and [0058]-[0060] show generating a modified test case);
wherein the generated test case is based on at least one of the existing test cases (Philip: [0059], “… to adjust a test case …”), wherein the following steps are carried out for the generation:
identifying at least one of the existing test cases whose execution context has a greatest similarity to the execution context of the incorrect execution (Philip: [0007]; and [0048], ranking); and
adapting … at least one test case so that it is suitable for reproducing the incorrect execution (Philip: [0054]-[0055]; and [0059]-[0060]);
wherein the adaptation takes place by changing a parameterization of the identified at least one test case (Philip: [0054]-[0055]; and [0059]-[0060]).
While Philip does not explicitly state adapting the identified at least one test case, Philip demonstrates that it was known before the effective filing date of the claimed invention to adapt test cases from the previously provided test cases (Philip: [0054]-[0055]; and [0059]-[0060]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the test case adaptation of Philip by adapting the test case with the greatest similarity, as suggested by the teachings of Philip. This implementation would have been obvious because one of ordinary skill in the art would have found: Philip both identifies the test case with the greatest similarity and adapts previously established test cases, and adapting the most similar test case would be the simplest and easiest starting point for adaptation; and, as Philip adapts from the pool of test cases that includes the one with the greatest similarity, Philip is already capable of adapting the test case with the greatest similarity.
Claim 4
Philip shows the method according to claim 2, further comprising the following steps:
performing the generated test case (Philip: [0055], “… to see if the software application 312 crashes with the adjustments …”);
checking that the incorrect execution of the software is reproduced by execution of the generated test case (Philip: [0055], “… to see if the software application 312 crashes with the adjustments …”; and [0056]); and
adapting the software such that an error underlying the incorrect execution of the software is corrected in a program code of the software, and/or integrating the generated test case into a testing process (Philip: [0060], shows integrating the test case for future use, including correcting code; see also [0050] and [0056]).
Claims 9 and 10
The limitations of claims 9 and 10 correspond to the limitations of claim 1; as such, claims 9 and 10 are rejected in a manner corresponding to the rejection of claim 1.
Claims 5, 6, and 8 are rejected under 35 U.S.C. 103 as being unpatentable over US 2024/0338273 A1 to Philip et al. (herein Philip) in view of US 2021/0279577 A1 to West et al. (herein West).
Claim 5
Philip shows the method according to claim 1, wherein a generative machine learning model is provided in order to carry out the evaluation, and to generate, based on the ascertained error log and test logs, a test case that is suitable for reproducing the incorrect execution (Philip: see above citations for claims 1-4).
However, while Philip does discuss some implementation details for the machine learning models (Philip: [0041] and [0057], such as “convolutional networks”), Philip does not specify wherein the machine learning model has at least one of the following network architectures: a variational autoencoder, a generative adversarial network, an autoregressive model. West demonstrates that it was known before the effective filing date of the claimed invention to implement software testing using machine learning models (West: figures 1-2; [0036], “… The test pipeline may include a machine learning model …”) and for those machine learning models to have an architecture such as a variational autoencoder (West: [0040], among other options such as a “convolutional neural net”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the machine learning model or models of Philip with an architecture such as a variational autoencoder, as suggested by the teachings of West. This implementation would have been obvious because one of ordinary skill in the art would have found: both Philip and West are directed to software test generation using machine learning models; the implementation is a substitution and application of one known element and technique for another, yielding a predictable result using an acceptable piece of prior art; and both Philip and West indicate the machine learning model can be implemented through several different known types, including at least one in common, convolutional networks.
Claim 6
Philip and West show the method according to claim 1, wherein the generative machine learning model is a neural network (West: figures 1 and 2; and [0040]).
Claim 8
Philip and West show the method according to claim 5, wherein the evaluation is carried out repeatedly, so that continuous learning of the machine learning model is provided based on the error and test logs thereby ascertained: (i) through re-training, in which the ascertained error and test logs are included in the training data (West: [0033], “… The client 210 may inform the test results, and the informed test results may be fed back into the test pipeline 220 to further train the machine learning model 250 …”), and/or (ii) through incremental learning.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over US 2024/0338273 A1 to Philip et al. (herein Philip) in view of US 2021/0279577 A1 to West et al. (herein West), and further in view of US 2020/0201915 A1 to Pathak et al. (herein Pathak).
Claim 7
Philip and West show the method according to claim 5, wherein the machine learning model is trained by the following steps: ascertaining training data, wherein the training data include example error logs about various incorrect executions of the software and example test logs about software tests of the software, wherein test cases of the software tests lack suitability for reproducing the incorrect executions, and wherein the training data include annotation data specifying test cases that are suitable for reproducing the incorrect executions (Philip: as discussed above for claims 1-4, shows the particular environment, including error logs, incorrect executions, test logs, and test cases; West shows training using data, as seen above; training using the data of Philip is obvious in view of West for the same reasons discussed above for claims 5, 6, and 8).
However, neither Philip nor West states initializing weightings of the machine learning model, or carrying out a training process to optimize the weightings of the machine learning model based on the training data, wherein the training process uses a loss function that minimizes a difference between data generated by the machine learning model and the annotation data. Pathak demonstrates that it was known before the effective filing date of the claimed invention to optimize weights when training a machine learning model (Pathak: [0008]) and to do so using a loss function (Pathak: [0008]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the data and machine learning model or models of Philip and West with weight optimization during training, using a loss function that minimizes differences, as suggested by the teachings of Pathak. This implementation would have been obvious because one of ordinary skill in the art would have found: Philip, West, and Pathak are all directed to using machine learning models; and the implementation is an application of a known element and technique of machine learning models, yielding a predictable result using an acceptable piece of prior art.
Response to Arguments
Applicant's arguments filed 02/17/2026 (herein Remarks) have been fully considered but they are not persuasive. Applicant argues: (1) Philip does not show “wherein, based on the evaluation, a new test case that is suitable for reproducing the incorrect execution is generated, wherein the generation of the new test case takes place by a machine learning model trained for this purpose” (Remarks: page 7); (2) Philip and West do not show “identifying at least one of the existing test cases whose execution context has a greatest similarity to the execution context of the incorrect execution; and adapting the identified at least one test case so that it is suitable for reproducing the incorrect execution” (Remarks: page 8); and (3) Philip and West are not combinable (Remarks: pages 8-9).
First, as the previous rejections indicate, Philip shows generating an adjusted/modified test case in paragraphs [0058]-[0060], which reads upon the claimed generating of a new test case (this is consistent with the claimed adapting of test cases). Further, Philip shows “… to determine a proxy crash that approximates a crash …” (Philip: [0060]). These teachings do not appear to have been considered by the Remarks. For these reasons, this argument is not persuasive.
Second, Philip explains “… then select a reference 122 as representing a proxy crash that approximates the software application crash …” (Philip: [0048]), which reads upon the claimed identifying of the greatest similarity. Further, it is noted that Philip uses the crash signature and the crash configuration, which is not simply comparing “errors to errors”. Philip does show the necessary data insofar as what is claimed. For these reasons, this argument is not persuasive.
Third, it should be noted that claim 3 was not rejected based on a combination of Philip and West (for the material now incorporated into claim 1). Additionally, the Remarks do not argue the actual rejections that rely on the Philip and West combination (specifically, claim 5). For these reasons, this argument is not persuasive. It should also be noted, however, that it is obvious to combine Philip and West as described above: West merely teaches additional machine learning models in a testing environment, where Philip has already shown using machine learning models in a testing environment.
For all these reasons the arguments presented in the Remarks are not persuasive.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Correspondence Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM H WOOD whose telephone number is (571)272-3736. The examiner can normally be reached Monday-Friday 7am-3pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexander Kosowski can be reached at (571)272-3744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/William H. Wood/
Primary Examiner, Art Unit 3992