DETAILED ACTION
Remarks
Applicant presents a request for continued examination dated 3 February 2026 responsive to the 5 November 2025 final Office action (the “Previous Action”).
Claims 1-4 are amended. New claims 6 and 7 are added.
Claims 1-7 are pending. Claims 1 and 4 are the independent claims.
Any unpersuasive arguments are addressed in the “Response to Arguments” section below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 3 February 2026 has been entered.
Examiner Notes
Examiner cites particular columns, paragraphs, figures and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Response to Arguments
Applicant asserts, with respect to the objection to the title, that a new title will be submitted upon indication of allowable subject matter. (Remarks, p. 6).
This objection is accordingly maintained.
Applicant’s remaining arguments are moot in view of the new ground(s) of rejection, necessitated by Applicant’s amendments.
Specification
The title of the invention is objected to because it is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: “Creating Input and Output for Use in Regression Testing.” See pars. [0007] and [0004] of the specification.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-7 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
As to claim 1, the claim refers to:
…converting respective representations of the first output of the test target system and the second output of the reference model into a common form, and
matching, based on at least the converting, the first output of the test target system and the second output of the reference model respectively in the common form, thereby enabling matching the first output of the test target system and the second output of the reference model being not strictly identical prior to performing the converting operation…
There does not appear to be sufficient support in the specification for these features. Applicant points to paragraphs [0036-0038], [0040], [0046] and [0047] as support. (Remarks, p. 7 par. 1).
Examiner respectfully disagrees and submits that none of these paragraphs disclose what is claimed. Paragraphs [0036-0037] come the closest, but those paragraphs only describe matching outputs that are not exactly the same. They do not describe performing any conversion.
As to claims 2-3 and 6, they are dependent on claim 1 but do not cure the deficiencies of that claim. Accordingly, they are rejected for the same reasons.
Further as to claim 6, the “converting…” of this claim does not have original support for the same reasons as claim 1. No converting appears to be originally described.
As to claim 4, the claim includes the same new matter as claim 1 and is rejected for the same reasons.
As to claim 7, it is dependent on claim 4 but does not cure the deficiencies of that claim. Accordingly, it is rejected for the same reasons.
Further as to claim 7, this claim includes the same new matter as claim 6 and is rejected for the same reasons.
As to claim 5, the claim includes the same new matter as claim 1 and is rejected for the same reasons.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Prologo et al. (US 6,823,478) (art of record – hereinafter Prologo) in view of Zhu et al. (US 2016/0098442) (art made of record – hereinafter Zhu) in view of Kolb et al. (US 7,454,660) (art of record – hereinafter Kolb).
As to claim 1, Prologo discloses a test generation apparatus comprising:
a processor and a memory storing instructions that, when executed by the processor, cause the processor (e.g., Prologo, col. 2 l. 67 – col. 3 l. 1-2: the invention will be described in the context of computer-executable instructions being executed by a personal computer [i.e., the processor of that computer]) to:
compare first output of a test target system executed based on an input, with a second output of a reference model executed based on the input; (e.g., Prologo, col. 7 ll. 47-53: FIGS. 5A-5B are flow diagrams which describe a testing process for comparing output generated from a prior processing environment [reference model] with output generated by a changed processing environment [target test system]; col. 7 ll. 7-9: the test representation 210 includes inputs 211 and outputs 213 calculated by the application based on the inputs 211; col. 7 ll. 45-46: the profile points to the test representation; col. 8 ll. 22-28: the first time the profile is processed, key file does not exist so the process proceeds to block 524, where the generated output is saved as the key file. This key file “(e.g., created with the prior processing environment)” will be used for comparing subsequent generated output files “(e.g., created with the changed processing environment)”; col. 8 ll. 13-15: using the inputs specified in the test representation, the updated version of the application 207’ generates an output file at block 510; col. 8 ll. 36-37: at block 530, the output file [output of a test target system] is compared to the key file [output of a reference model]) and
store, based on a result of determining whether to store the input and the first output of the test target system according to the matched first output of the test target system and the second output of the reference model, the input and the first output of the test target system in a storage unit; (e.g., Prologo, Fig. 5B and associated text, col. 8 ll. 41-45: if there are not any differences, the process proceeds to block 534 where the test representation specified in the associated profile is saved; col. 7 ll. 7-9: the test representation 210 includes inputs 211 and outputs 213 calculated by the application based on the inputs 211; col. 8 ll. 51-60: at block 536 [in response to no differences found (i.e., a match) by the comparison at 530, see figure] the output file becomes the key file, such as by copying [storing] the output file as the key file. By copying the output file as the key file, any new calculations specified in the test representation that resulted in additional outputs will be saved in the key file for later comparisons).
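For the convenience of the applicant, the key-file workflow of Prologo mapped above (generate output at block 510, create the key file on the first run at block 524, compare at block 530, and promote the matching output file to the key file at block 536) can be illustrated with the following sketch. The function and file names are hypothetical and are not taken from the reference:

```python
import filecmp
import shutil
from pathlib import Path


def run_key_file_test(generate_output, output_path: Path, key_path: Path) -> bool:
    """Sketch of Prologo's blocks 510-536: generate an output file from the
    test inputs, then compare it against the saved key file."""
    generate_output(output_path)  # block 510: run the application on the test inputs
    if not key_path.exists():
        # block 524: first run, so the generated output is saved as the key file
        shutil.copy(output_path, key_path)
        return True
    # block 530: compare the output file (test target system) with the key file
    matched = filecmp.cmp(output_path, key_path, shallow=False)
    if matched:
        # block 536: the output file becomes the key file for later comparisons
        shutil.copy(output_path, key_path)
    return matched
```

In this sketch, as in Prologo, a match causes the generated output (and implicitly its inputs, via the test representation) to be retained for later comparisons.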
Prologo does not explicitly disclose wherein the comparing further comprises: converting respective representations of the first output of the test target system and the second output of the reference model into a common form, and matching, based on the converting, the first output of the test target system and the second output of the reference model respectively in the common form, thereby enabling matching the first output of the test target system and the second output of the reference model being not strictly identical prior to performing the converting operation; to generate, from the stored input and the stored first output, a machine-executable test script, wherein the machine-executable test script causes the test target system to perform using the stored input and compares a resulting third output of the test target system and the stored first output; and automatically execute the machine-executable test script to cause executing the test target system using the stored input and to compare the resulting third output from the test target system with the stored first output.
However, in an analogous art, Zhu discloses:
wherein the comparing further comprises:
converting respective representations of the first output of the test target system and the second output of the reference model into a common form, (e.g., Zhu, Fig. 2 and associated text, par. [0028]: the validation engine 204 includes a comparator 204 that compares the output from the analytics system, [first output of the test target system] with the output of the analytics simulator [output of the reference model]. In some cases, the output subset from the analytics system and the output of the analytics simulator may be in different data formats, e.g., one may be in JSON format and the other may be in CSV format, or one may be in JSON and the other in an XML format. The comparator can convert one or more of the subsets into a common format) and
matching, based on the converting, the first output of the test target system and the second output of the reference model respectively in the common form, (e.g., Zhu, par. [0028]: the comparator can examine individual data points within the subsets and determine whether they are equal [match] or different; par. [0029]: the validation engine determines whether the comparison of the output subset [based on the converting, see above] is sufficient to indicate a positive validation result. For example, if the output subset differs from the output of the analytics simulator in only a number of data elements within a threshold, a positive result can be output) thereby enabling matching the first output of the test target system and the second output of the reference model being not strictly identical prior to performing the converting operation; (see above and also M.P.E.P. § 2111.04, these limitations have no limiting effect because they only express the intended result of a process step positively recited).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the comparing of first output of a test target system and second output of a reference model taught by Prologo to include converting respective representations of the first output of the test target system and the second output of the reference model into a common form, matching, based on the converting, the first output of the test target system and the second output of the reference model respectively in the common form, thereby enabling matching the first output of the test target system and the second output of the reference model being not strictly identical prior to performing the converting operation, as taught by Zhu, as Zhu would provide the advantage of a means of comparing data in different formats. (See Zhu, par. [0028]).
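For the convenience of the applicant, the conversion-then-comparison taught by Zhu (par. [0028], e.g., JSON versus CSV representations converted into a common format and compared data point by data point) can be illustrated with the following sketch. The choice of a list of row dictionaries as the common form, and all names, are illustrative assumptions and are not taken from the reference:

```python
import csv
import io
import json


def to_common_form(payload: str, fmt: str) -> list:
    """Convert a JSON or CSV representation into a common form
    (a list of row dictionaries) so outputs can be compared field by field."""
    if fmt == "json":
        return json.loads(payload)
    if fmt == "csv":
        return list(csv.DictReader(io.StringIO(payload)))
    raise ValueError(f"unsupported format: {fmt}")


def outputs_match(first: str, first_fmt: str, second: str, second_fmt: str) -> bool:
    """Match two outputs that need not be strictly identical as text,
    by first converting both into the common form."""
    return to_common_form(first, first_fmt) == to_common_form(second, second_fmt)
```

In this sketch, a JSON output and a CSV output with the same data points match after conversion even though their textual representations differ, which is the effect the "not strictly identical prior to performing the converting operation" language describes.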
Further, in an analogous art, Kolb discloses to:
generate, from the stored input and the stored first output, a machine-executable test script, (e.g., Kolb, col. 3 ll. 61-64: during the recording phase, known inputs are provided to the business layer 206 to establish what the predicted proper outputs should be; col. 4 ll. 8-9: the test control program 200 records the inputs and outputs provided by the test plugin 205 to a test script 201 [note that all data existing in a computer is stored in some memory, even if it has not yet been recorded to the script]) wherein the machine-executable test script causes the test target system to perform using the stored input and compares a resulting third output of the test target system and the stored first output; (e.g., Kolb, Fig. 9 and associated text, col. 3 ll. 31-36: the test control program may utilize the test script 201 to test another instance of the business layer 206 by applying the same inputs to the business layer 206 and comparing the resultant outputs to the expected outputs) and
automatically execute the machine-executable test script to cause executing the test target system using the stored input and to compare the resulting third output from the test target system with the stored first output (see immediately above).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the recording of input and stored first output taught by Prologo to include generating, from the stored input and the stored first output, a machine-executable test script that causes the test target system to perform using the stored input and compares a resulting third output of the test target system and the stored first output; and automatically executing the machine-executable test script to cause executing the test target system using the stored input and to compare the resulting third output from the test target system with the stored first output, as taught by Kolb, as Kolb would provide the advantage of a means of replaying operations which process the input data during the test as well as a means for a developer to customize checks on the target output. (See Kolb, col. 6 ll. 31-49). Kolb also shows that a script can be used to apply the inputs to the test target system and perform the comparison with the output instead of the mechanisms of Prologo and still achieve the same result (testing the test target system), meaning that result would have been predictable. (See M.P.E.P. § 2143(I)(B)).
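For the convenience of the applicant, the record-then-replay technique taught by Kolb (record inputs and outputs to a test script, then apply the same inputs to another instance and compare the resultant outputs) can be illustrated with the following sketch. The shape of the generated script and all names are hypothetical and are not taken from the reference:

```python
def generate_test_script(stored_input, stored_first_output) -> str:
    """Emit a small machine-executable test script that replays the stored
    input against a test target and compares the resulting (third) output
    against the stored first output."""
    return (
        f"def run(target):\n"
        f"    third_output = target({stored_input!r})\n"
        f"    assert third_output == {stored_first_output!r}, 'regression detected'\n"
        f"    return third_output\n"
    )


def execute_test_script(script: str, target):
    """Automatically execute the generated script against a test target system."""
    namespace = {}
    exec(script, namespace)  # load the generated script's run() function
    return namespace["run"](target)
```

In this sketch, executing the generated script against an unchanged target succeeds, while a target producing a different third output raises an assertion failure, mirroring Kolb's comparison of resultant outputs to expected outputs.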
As to claim 2, Prologo/Zhu/Kolb discloses the test generation apparatus according to claim 1 (see rejection of claim 1 above), Prologo further discloses:
wherein, when the first output of the test target system and the second output of the reference model match, the store operation further comprises storing the input and the first output of the test target system in the storage unit. (e.g., Prologo, Fig. 5B and associated text, col. 8 ll. 36-37: at block 530, the output file [output of the target system, see above] is compared with the key file [output of the reference model, see above]; col. 8 ll. 41-45: if there are not any differences [i.e., they match], the process proceeds to block 534 where the test representation specified in the associated profile is saved; col. 8 ll. 51-60: at block 536 [in response to no differences at 530, see figure] the output file becomes the key file, such as by copying [storing] the output file as the key file. By copying the output file as the key file, any new calculations specified in the test representation that resulted in additional outputs will be saved in the key file for later comparisons).
As to claim 4, it is a method claim having limitations substantially the same as claim 1. Accordingly, it is rejected for substantially the same reasons.
As to claim 5, it is a medium claim having limitations substantially the same as claim 1. Accordingly, it is rejected for substantially the same reasons. Further limitations, disclosed by Prologo, include:
a non-transitory computer-readable recording medium storing a program that, when executed on a computer, causes the computer to perform according to (e.g., Prologo, col. 2 l. 67 – col. 3 l. 2: the invention will be described in the context of computer executable instructions, such as program modules, executed by a personal computer [which would require storing them in a non-transitory computer-readable recording medium]. See also Fig. 1 and associated text) the test generation apparatus of claim 1 (see rejection of claim 1 above).
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Prologo (US 6,823,478) in view of Zhu (US 2016/0098442) in view of Kolb (US 7,454,660) in further view of Chakraborty et al. (US 2017/0097882) (art of record – hereinafter Chakraborty).
As to claim 3, Prologo/Zhu/Kolb discloses the test generation apparatus according to claim 1 (see rejection of claim 1 above), Prologo further discloses:
wherein, when the first output of the test target system and the second output of the reference model do not match, the store operation further comprises storing in the storage unit (e.g., Prologo, Fig. 5B and associated text, col. 8 ll. 37-42: the output file [output of the target system] is compared with the key file [output of the reference model]. If there are any differences between the two files, the process proceeds to block 538. At block 538, the process updates any tables to identify the differences, such as the sample result table in Fig. 6B; col. 10 ll. 34-35: FIG. 6B illustrates a second table 620 for storing information about each representation that failed).
Prologo does not explicitly disclose that the test generation unit stores the input and the first output of the test target system in the storage unit.
However, in an analogous art, Chakraborty discloses:
storing the input and the first output of the test target system in the storage unit (e.g., Chakraborty, par. [0037], control computer 20 generates verification tests. The verification tests are configured to invoke certain methods, and therefore, include the input parameters that were recorded for those methods. To verify, one embodiment captures the data that is output by the methods during the verification tests [test target system], and compares the captured data against previous recorded output parameters for those methods [reference model]. If the parameters do not match, the verification fails; par. [0060]: in some cases, such as that seen in FIG. 5 [sic, should be FIG. 4], the test results [inherently stored] may indicate the values of the input/output parameters used in the test [including failed verifications, see figure]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the storing of information about each failed test, when the output of the target system and the reference model do not match, taught by Prologo, such that the test generation unit stores the input and the first output of the test target system in the storage unit, as taught by Chakraborty, as Chakraborty would provide the advantages of a means of providing more detail about the failed test (see Chakraborty at Fig. 4) and a means for a user to analyze the failure and determine how to correct it. (See Chakraborty, par. [0058]).
Claims 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Prologo (US 6,823,478) in view of Zhu (US 2016/0098442) in view of Kolb (US 7,454,660) in further view of Lavie et al. (US 2017/0169625) (art made of record – hereinafter Lavie).
As to claim 6, Prologo/Zhu/Kolb discloses the test generation apparatus according to claim 1 (see rejection of claim 1 above) but Prologo does not explicitly disclose wherein the converting the second output of the reference model further comprises converting a value of the second output of the reference model from a unit expression to another unit expression to cause exact-match with the second output of the reference model.
However, in an analogous art, Zhu discloses:
wherein the converting the second output of the reference model further comprises converting a value of the second output of the reference model to cause exact-match with the second output of the reference model (e.g., Zhu, Fig. 2 and associated text, par. [0028]: the validation engine 204 includes a comparator 204 that compares the output from the analytics system, [first output of the test target system] with the output of the analytics simulator [output of the reference model]. For example, the comparator can examine individual data points [values] within the subset and determine whether they are equal [an exact match]. In some cases, the output subset from the analytics system and the output of the analytics simulator may be in different data formats, e.g., one may be in JSON format and the other may be in CSV format, or one may be in JSON and the other in an XML format. The comparator can convert one or more of the subsets into a common format).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the comparing of first output of a test target system and second output of a reference model taught by Prologo to include converting a value of the second output of the reference model to cause exact-match with the second output of the reference model, as taught by Zhu, as Zhu would provide the advantage of a means of comparing data in different formats. (See Zhu, par. [0028]).
Further, in an analogous art, Lavie discloses:
wherein the converting comprises converting a value from a unit expression to another unit expression (e.g., Lavie, par. [0146]: for information from disparate sources to be compared, the information must be normalized, i.e., converted to the same units of measure. In addition, the quality and precision must be represented in a normalized fashion. In other words, if for example, one speed is known to be accurate to within +/- 10 mph, then all speeds should have an error estimate in mph “(as opposed to kph for example)”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the converting a value of second output of a reference model to cause exact-match with the second output of the reference model taught by Prologo/Zhu to include converting that value from a unit expression to another unit expression, as taught by Lavie, as Lavie would provide the advantage of a means of enabling comparison of a value comprising a numerical measurement with a value having a different unit of measure. (See Lavie, par. [0146]).
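For the convenience of the applicant, the unit normalization taught by Lavie (par. [0146], e.g., expressing all speeds in mph rather than kph before comparison) can be illustrated with the following sketch. The conversion table, the choice of mph as the canonical unit, and the tolerance are illustrative assumptions and are not taken from the reference:

```python
# Factors converting each supported unit expression into a canonical
# unit (mph here); the table and canonical choice are illustrative.
_TO_MPH = {"mph": 1.0, "kph": 1.0 / 1.609344}


def normalize_speed(value: float, unit: str) -> float:
    """Convert a speed value from its unit expression into the canonical unit."""
    return value * _TO_MPH[unit]


def speeds_exact_match(a: float, a_unit: str, b: float, b_unit: str,
                       tol: float = 1e-6) -> bool:
    """Two values can exact-match once both are expressed in the same unit,
    even though their original unit expressions differ."""
    return abs(normalize_speed(a, a_unit) - normalize_speed(b, b_unit)) <= tol
```

In this sketch, 60 mph and 96.56064 kph match after normalization although their numerical values differ before converting, which is the effect of converting a value from one unit expression to another to enable an exact match.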
As to claim 7, it is a method claim having limitations substantially the same as claim 6. Accordingly, it is rejected for substantially the same reasons.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TODD AGUILERA whose telephone number is (571)270-5186. The examiner can normally be reached M-F 11AM - 7:30PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hyung S Sough can be reached at (571)272-6799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TODD AGUILERA/Primary Examiner, Art Unit 2192