Prosecution Insights
Last updated: April 19, 2026
Application No. 18/531,910

DYNAMIC CTD MODEL TRACKING AND OPTIMIZATION

Final Rejection (§103, §112)
Filed: Dec 07, 2023
Examiner: BERMAN, STEPHEN DAVID
Art Unit: 2192
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 2 (Final)
Grant Probability: 79% (Favorable)
OA Rounds: 3-4
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% — above average (262 granted / 331 resolved; +24.2% vs TC avg)
Interview Lift: +56.6% — strong (resolved cases with interview vs. without)
Avg Prosecution: 2y 9m (typical timeline; 26 applications currently pending)
Total Applications: 357 (career history, across all art units)

Statute-Specific Performance

§101: 12.1% (-27.9% vs TC avg)
§103: 45.6% (+5.6% vs TC avg)
§102: 14.9% (-25.1% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)

Tech Center average is an estimate • Based on career data from 331 resolved cases

Office Action

§103 §112
DETAILED ACTION

Remarks

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action is filed in response to Applicant’s arguments and amendment dated November 13, 2025. Claims 1-4, 6-7, 9, 11-12, 14, 16, and 18-19 are currently amended; claims 1-20 remain pending in the application and have been fully considered by Examiner.

In view of Applicant’s Amendment and Remarks, the 35 USC 101 rejections are withdrawn. Applicant's arguments with respect to the prior art rejections have been considered, but are moot in view of the new grounds of rejection presented herein.

Examiner Notes

Examiner cites particular columns, paragraphs, figures and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 9-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

With respect to claim 9, lines 10-11 recite “the computing system”, but there is no previously recited “computing system” and it is unclear whether this refers to the previously recited “apparatus”, “processing device”, or something else. The scope of the claim is therefore indefinite. For purposes of compact prosecution only, Examiner has interpreted claim 9 as reciting “a computing system”.

With respect to claims 10-15, each inherits the 35 USC 112(b) deficiency of claim 9 (see above).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 8-13, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Segall et al. (US 20130085741, hereinafter Segall) in view of Anonymous, “User Guide for ACTS Version 2.92” (hereinafter Anonymous), Hess et al. (US 20190171545, hereinafter Hess), and Hicks et al. (US 20200242012, hereinafter Hicks).

With respect to claim 1, Segall discloses A method (e.g., Fig. 1A) comprising: dynamically generating a combinatorial test design (CTD) model to obtain a dynamically generated CTD model generated (e.g., Figs. 1-2B and associated text, e.g., [0015], The model [CTD model] defines variables (i.e., attributes), possible values for the variables … The set of valid value combinations defines the coverage model; [0018], the resulting test plan [dynamically generated CTD model] may include the valid value tuples of size m, wherein each tuple represents a test (e.g., a combination of variable values) in the test plan; [0024-25], dynamic synchronization during test planning, so that the changes to the test plan and the resulting holes are displayed in real time [dynamically generated]; see also [0006] and [0022].), wherein dynamically generating the CTD model comprises applying one or more limitations to an existing CTD model (e.g., Figs. 1-2B and associated text, e.g., [0006], a coverage model [CTD model] including: … value combinations … wherein zero or more of said value combinations are defined according to one or more restrictions [applying one or more limitations to an existing CTD model] for the purpose of generating a test plan [dynamically generated CTD model] to test a system for which the coverage model [existing CTD model] is constructed; [0015-16], restrictions indicating when values for one or more variables or value combinations for a plurality of variables are valid … narrowing the test space by way of defining additional … restrictions [applying one or more limitations]; [0025], In the interface shown in FIG. 2A, the user may also delete one or more tests in the left area, for example, and determine the effect of removing them from the test plan.), and wherein the one or more limitations include …; … one or more test cases, of the dynamically generated CTD model, on the computing system to test a functionality of one or more components of the computing system (e.g., Fig. 3B and associated text, e.g., [0017], Testing such interactions leads to detecting a majority of bugs in a system; Abstract, generating a test plan to test a system for which the coverage model is constructed; [0018], the resulting test plan [dynamically generated CTD model] may include the valid value tuples of size m, wherein each tuple represents a test (e.g., a combination of variable values) in the test plan [test cases]); identifying a difference in coverage between the existing CTD model and the dynamically generated CTD model (e.g., Figs. 1-2B and associated text, e.g., [0015], The model [existing CTD model] defines variables (i.e., attributes), possible values for the variables … The set of valid value combinations defines the coverage model [existing CTD model]; [0018], the resulting test plan [dynamically generated CTD model] may include the valid value tuples of size m, wherein each tuple represents a test (e.g., a combination of variable values) in the test plan; [0024-25], dynamic synchronization during test planning, so that the changes to the test plan and the resulting holes are displayed in real time [dynamically generated]; [0023], the holes in the current test plan are displayed, where value combinations that are not covered for the two attributes “availability” and “carrier” are listed [identifying a difference in coverage between the existing CTD model and the dynamically generated CTD model]; see also [0006] and [0022].); and maintaining a listing of uncovered value combinations (e.g., Figs. 1-2B and associated text, e.g., [0023-24], the holes in the current test plan are displayed, where value combinations that are not covered for the two attributes "availability" and "carrier" are listed … the list in the right area is updated [maintaining a listing of uncovered value combinations].).

Segall does not appear to disclose the following, which is taught in analogous art, Anonymous: test environment (e.g., p. 002 § 1.3, when we want to make sure a web application can run in different Internet browsers and on different Operating Systems [test environment], the configuration of IE on Mac OS is not a valid combination. A test that contains an invalid combination will … not be executed properly … ACTS allows the user to specify constraints [test environment limitations] that combinations must satisfy to be valid. The specified constraints will be taken into account during test generation; p. 009, 3rd para., This constraint specifies that if OS is Windows, then Browser has to be IE, Firefox, or Netscape [test environment limitations].) … to remove, from the existing CTD model, one or more test cases that cannot be executed based on the one or more test environment limitations (e.g., p. 002 § 1.3, when we want to make sure a web application can run in different Internet browsers and on different Operating Systems [test environment], the configuration of IE on Mac OS is not a valid combination. A test that contains an invalid combination will … not be executed properly … ACTS allows the user to specify constraints [test environment limitations] that combinations must satisfy to be valid. The specified constraints will be taken into account during test generation; p. 009, 7th full para., The right bottom of this frame shows all the added constraints; p. 010, 1st para., In order to edit an existing constraint, the user needs to remove the constraint first and then add the desired constraint as a new constraint; p. 001, ACTS supports two test generation modes, namely, scratch and extend. The former allows a test set to be built from scratch, whereas the latter allows a test set to be built by extending an existing test set. In the extend mode, an existing test set can be a test set that is generated by ACTS, but is incomplete because of some newly added parameters and values, or because of a test set that is supplied by the user and imported into ACTS.
Extending an existing test set can save earlier effort that has already been spent in the testing process; see also p. 013, 1st full para. and § 3.3 Modify System.) … test environment (Id.) … one or more software limitations of a computing system in a test environment (Id., particularly, p. 009, 3rd para., This constraint specifies that if OS is Windows, then Browser has to be IE, Firefox, or Netscape.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the combinatorial testing coverage invention of Segall with the combinatorial testing tool of Anonymous, such that test environment limitations are applied to avoid testing incompatible software configurations, because such tests would “not be executed properly, which may compromise test coverage,” as suggested by Anonymous (see p. 002, § 1.3, 1st para.).

Segall as modified does not appear to disclose the following, which is taught in analogous art, Hess: one or more hardware limitations and (e.g., Figs. 1-2 and associated text, e.g., [0004], a model for testing a computer may comprise a CPU attribute with possible values consisting of the available CPUs, an operating system (OS) attribute with possible values consisting of the available OSs, and restrictions that rule out impossible CPU and OS combinations; [0015], A combinatorial model, also referred to as Cartesian-product model of a system, is a set of attributes, values or value ranges for the attributes (also referred to as domains), and restrictions on value combinations that may not appear together.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the invention of Segall with the invention of Hess, such that impossible hardware combinations are eliminated, because this would improve testing efficiency by avoiding testing impossible combinations.
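For orientation only, the ACTS-style constraint mechanism quoted above (excluding value combinations that cannot execute in the test environment) can be sketched in a few lines of Python. The attribute names, values, and the single constraint below are hypothetical stand-ins, not taken from the claims or from the cited references:

```python
from itertools import product

# Hypothetical CTD-style model: attributes and their possible values
# (illustrative only; not the model of the application or references).
model = {
    "OS": ["Windows", "Mac OS", "Linux"],
    "Browser": ["IE", "Firefox", "Netscape"],
}

# A test-environment limitation in the spirit of the ACTS guide's
# example: IE is only a valid browser on Windows.
def is_valid(test):
    return test["Browser"] != "IE" or test["OS"] == "Windows"

# Build the full Cartesian product of value combinations, then remove
# the combinations that could not be executed in the environment.
all_tests = [dict(zip(model, combo)) for combo in product(*model.values())]
valid_tests = [t for t in all_tests if is_valid(t)]

print(len(all_tests), len(valid_tests))  # 9 combinations, 7 executable
```

Filtering after enumeration is the simplest presentation; a tool such as ACTS applies constraints during generation so invalid combinations are never produced in the first place.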
Segall does not appear to disclose the following, which is taught in analogous art, Hicks: executing (e.g., Figs. 3-4 and associated text, e.g., [0029], test cases 110, which are then executed by the test case execution module(s) 112 to yield an execution result (pass or fail) for each test case.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the invention of Segall with the invention of Hicks, such that the test cases are executed, because this would allow the user to determine whether a product is working as intended.

With respect to claim 9, Segall discloses An apparatus (e.g., Fig. 3A and associated text, e.g., [0037], may be implemented in the form of computer readable code executed over one or more computing systems.) comprising: a processing device; and memory operatively coupled to the processing device, wherein the memory stores computer program instructions that, when executed (e.g., Figs. 3A-B and associated text, e.g., [0039], processor 1101 loads executable code from storage media 1106 to local memory 1102.), cause the processing device to: identify a difference in coverage between a combinatorial test design (CTD) model and a dynamically generated CTD model (e.g., Figs. 1-2B and associated text, e.g., [0015], The model [CTD model] defines variables (i.e., attributes), possible values for the variables … The set of valid value combinations defines the coverage model [CTD model]; [0018], the resulting test plan [dynamically generated CTD model] may include the valid value tuples of size m, wherein each tuple represents a test (e.g., a combination of variable values) in the test plan; [0024-25], dynamic synchronization during test planning, so that the changes to the test plan and the resulting holes are displayed in real time [dynamically generated]; [0023], the holes in the current test plan are displayed, where value combinations that are not covered for the two attributes “availability” and “carrier” are listed [identify a difference in coverage between a CTD model and a dynamically generated CTD model]; see also [0006] and [0022].) generated by applying one or more limitations to the CTD model (e.g., Figs. 1-2B and associated text, e.g., [0006], a coverage model [CTD model] including: … value combinations … wherein zero or more of said value combinations are defined according to one or more restrictions [applying one or more limitations to the CTD model] for the purpose of generating a test plan [dynamically generated CTD model generated by] to test a system for which the coverage model [CTD model] is constructed; [0015-16], restrictions indicating when values for one or more variables or value combinations for a plurality of variables are valid … narrowing the test space by way of defining additional … restrictions [applying one or more limitations].), wherein the one or more limitations include …; … one or more test cases, of the dynamically generated CTD model, on the computing system to test a functionality of at least one of one or more hardware components or one or more software components of the test environment (e.g., Fig. 3B and associated text, e.g., [0017], Testing such interactions leads to detecting a majority of bugs in a system; Abstract, generating a test plan to test a system for which the coverage model is constructed; [0018], the resulting test plan [dynamically generated CTD model] may include the valid value tuples of size m, wherein each tuple represents a test (e.g., a combination of variable values) in the test plan [test cases].); and maintain a listing of uncovered value combinations (e.g., Figs. 1-2B and associated text, e.g., [0023-24], the holes in the current test plan are displayed, where value combinations that are not covered for the two attributes "availability" and "carrier" are listed … the list in the right area is updated [maintain a listing of uncovered value combinations].).

Although Segall discloses the dynamically generated CTD model generated by applying one or more limitations to the CTD model (see above), it does not appear to disclose the following, which is taught in analogous art, Anonymous: test environment (e.g., p. 002 § 1.3, when we want to make sure a web application can run in different Internet browsers and on different Operating Systems [test environment], the configuration of IE on Mac OS is not a valid combination. A test that contains an invalid combination will … not be executed properly … ACTS allows the user to specify constraints [test environment limitations] that combinations must satisfy to be valid. The specified constraints will be taken into account during test generation; p. 009, 3rd para., This constraint specifies that if OS is Windows, then Browser has to be IE, Firefox, or Netscape [test environment limitations].) … test environment (Id.) … one or more software limitations of a test environment (Id., particularly, p. 009, 3rd para., This constraint specifies that if OS is Windows, then Browser has to be IE, Firefox, or Netscape.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the combinatorial testing coverage invention of Segall with the combinatorial testing tool of Anonymous, such that test environment limitations are applied to avoid testing incompatible software configurations, because such tests would “not be executed properly, which may compromise test coverage,” as suggested by Anonymous (see p. 002, § 1.3, 1st para.).

Segall as modified does not appear to disclose the following, which is taught in analogous art, Hess: one or more hardware limitations and (e.g., Figs. 1-2 and associated text, e.g., [0004], a model for testing a computer may comprise a CPU attribute with possible values consisting of the available CPUs, an operating system (OS) attribute with possible values consisting of the available OSs, and restrictions that rule out impossible CPU and OS combinations; [0015], A combinatorial model, also referred to as Cartesian-product model of a system, is a set of attributes, values or value ranges for the attributes (also referred to as domains), and restrictions on value combinations that may not appear together.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the invention of Segall with the invention of Hess, such that impossible hardware combinations are eliminated, because this would improve testing efficiency by avoiding testing impossible combinations.

Segall does not appear to disclose the following, which is taught in analogous art, Hicks: execute (e.g., Figs. 3-4 and associated text, e.g., [0029], test cases 110, which are then executed by the test case execution module(s) 112 to yield an execution result (pass or fail) for each test case.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the invention of Segall with the invention of Hicks, such that the test cases are executed, because this would allow the user to determine whether a product is working as intended.

With respect to claim 16, Segall discloses A computer program product comprising a computer readable storage medium, wherein the computer readable storage medium comprises computer program instructions that, when executed (e.g., Figs. 3A-B and associated text, e.g., [0044], the methods and processes disclosed here may be implemented as ... application software 1122 … application software 1122 may be implemented as program code embedded in a computer program product in form of a computer-usable or computer readable storage medium.): identify a difference in coverage between a combinatorial test design (CTD) model and a dynamically generated CTD model (e.g., Figs. 1-2B and associated text, e.g., [0015], The model [CTD model] defines variables (i.e., attributes), possible values for the variables … The set of valid value combinations defines the coverage model [CTD model]; [0018], the resulting test plan [dynamically generated CTD model] may include the valid value tuples of size m, wherein each tuple represents a test (e.g., a combination of variable values) in the test plan; [0024-25], dynamic synchronization during test planning, so that the changes to the test plan and the resulting holes are displayed in real time [dynamically generated]; [0023], the holes in the current test plan are displayed, where value combinations that are not covered for the two attributes “availability” and “carrier” are listed [identify a difference in coverage between a CTD model and a dynamically generated CTD model]; see also [0006] and [0022].) generated by applying one or more limitations to the CTD model (e.g., Figs. 1-2B and associated text, e.g., [0006], a coverage model [CTD model] including: … value combinations … wherein zero or more of said value combinations are defined according to one or more restrictions [applying one or more limitations to the CTD model] for the purpose of generating a test plan [dynamically generated CTD model generated by] to test a system for which the coverage model [CTD model] is constructed; [0015-16], restrictions indicating when values for one or more variables or value combinations for a plurality of variables are valid … narrowing the test space by way of defining additional … restrictions [applying one or more limitations].); wherein the one or more limitations include …; … one or more test cases, of the dynamically generated CTD model, on the computing system to test a functionality of at least one of one or more hardware components or one or more software components of the test environment (e.g., Fig. 3B and associated text, e.g., [0017], Testing such interactions leads to detecting a majority of bugs in a system; Abstract, generating a test plan to test a system for which the coverage model is constructed; [0018], the resulting test plan [dynamically generated CTD model] may include the valid value tuples of size m, wherein each tuple represents a test (e.g., a combination of variable values) in the test plan [test cases].); and maintain a listing of uncovered value combinations (e.g., Figs. 1-2B and associated text, e.g., [0023-24], the holes in the current test plan are displayed, where value combinations that are not covered for the two attributes "availability" and "carrier" are listed … the list in the right area is updated [maintain a listing of uncovered value combinations].).

Although Segall discloses the dynamically generated CTD model generated by applying one or more limitations to the CTD model (see above), it does not appear to disclose the following, which is taught in analogous art, Anonymous: test environment (e.g., p. 002 § 1.3, when we want to make sure a web application can run in different Internet browsers and on different Operating Systems [test environment], the configuration of IE on Mac OS is not a valid combination. A test that contains an invalid combination will … not be executed properly … ACTS allows the user to specify constraints [test environment limitations] that combinations must satisfy to be valid. The specified constraints will be taken into account during test generation; p. 009, 3rd para., This constraint specifies that if OS is Windows, then Browser has to be IE, Firefox, or Netscape [test environment limitations].) … test environment (Id.) … one or more software limitations of a test environment that includes a computing system (Id., particularly, p. 009, 3rd para., This constraint specifies that if OS is Windows, then Browser has to be IE, Firefox, or Netscape.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the combinatorial testing coverage invention of Segall with the combinatorial testing tool of Anonymous, such that test environment limitations are applied to avoid testing incompatible software configurations, because such tests would “not be executed properly, which may compromise test coverage,” as suggested by Anonymous (see p. 002, § 1.3, 1st para.).
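The coverage “holes” that these rejections repeatedly map to Segall’s [0023]-[0024] (value combinations required by the model but absent from the current test plan) amount to a set difference. A minimal sketch, with hypothetical values echoing Segall’s “availability” and “carrier” example:

```python
from itertools import combinations, product

# Hypothetical model and partial test plan (illustrative values only).
model = {
    "availability": ["in_stock", "backorder"],
    "carrier": ["ups", "fedex", "usps"],
}
test_plan = [
    {"availability": "in_stock", "carrier": "ups"},
    {"availability": "backorder", "carrier": "fedex"},
]

# Every pairwise value combination the model requires...
required = {
    ((a, va), (b, vb))
    for a, b in combinations(model, 2)
    for va, vb in product(model[a], model[b])
}
# ...minus the pairs the plan actually exercises = the "holes".
covered = {
    ((a, t[a]), (b, t[b]))
    for t in test_plan
    for a, b in combinations(model, 2)
}
holes = sorted(required - covered)
print(len(holes))  # 4 uncovered pairs remain
```

Recomputing `holes` whenever a test is added or removed is one way to “maintain a listing of uncovered value combinations” as the plan evolves.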
Segall as modified does not appear to disclose the following, which is taught in analogous art, Hess: and one or more hardware limitations and (e.g., Figs. 1-2 and associated text, e.g., [0004], a model for testing a computer may comprise a CPU attribute with possible values consisting of the available CPUs, an operating system (OS) attribute with possible values consisting of the available OSs, and restrictions that rule out impossible CPU and OS combinations; [0015], A combinatorial model, also referred to as Cartesian-product model of a system, is a set of attributes, values or value ranges for the attributes (also referred to as domains), and restrictions on value combinations that may not appear together.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the invention of Segall with the invention of Hess, such that impossible hardware combinations are eliminated, because this would improve testing efficiency by avoiding testing impossible combinations.

Segall does not appear to disclose the following, which is taught in analogous art, Hicks: execute (e.g., Figs. 3-4 and associated text, e.g., [0029], test cases 110, which are then executed by the test case execution module(s) 112 to yield an execution result (pass or fail) for each test case.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the invention of Segall with the invention of Hicks, such that the test cases are executed, because this would allow the user to determine whether a product is working as intended.

With respect to claims 2, 10, and 17, Segall also discloses wherein the existing CTD model comprises an n-wise reduction of a cartesian product of test value combinations (e.g., Fig. 1 and associated text, e.g., [0016-18], given a Cartesian product based model with n variables, a combinatorial algorithm may be used to find a subset of valid combinations in the test space that covers possible combinations of every m variables, wherein m defines a certain interaction level … As an example, interaction level two (also referred to as a pair-wise interaction) means that, for every two variables, all valid value combinations appear in the selected subset of the test space ... applying the combinatorial algorithm to the coverage model with an interaction level m [the existing CTD model comprises an n-wise reduction of a cartesian product of test value combinations].).

With respect to claims 3, 11, and 18, Segall also discloses wherein identifying the difference in coverage comprises identifying one or more test cases in the existing CTD model not included in the dynamically generated CTD model (e.g., Figs. 1-2B and associated text, e.g., [0015], The set of valid value combinations [test cases] defines the coverage model [in the CTD model]; [0018], the resulting test plan [dynamically generated CTD model] may include the valid value tuples of size m, wherein each tuple represents a test (e.g., a combination of variable values) in the test plan [test cases in the dynamically generated CTD model]; [0023], the holes in the current test plan are displayed, where value combinations [test cases in the existing CTD model] that are not covered for the two attributes “availability” and “carrier” are listed [not included in the dynamically generated CTD model]; see also [0006].).

With respect to claims 4, 12, and 19, Anonymous further teaches updating the dynamically generated CTD model based on changes to the one or more test environment limitations, wherein the changes to the one or more test environment limitations comprise one or more … or one or more updated software limitations (e.g., p. 002 § 1.3, when we want to make sure a web application can run in different Internet browsers and on different Operating Systems [test environment], the configuration of IE on Mac OS is not a valid combination. A test that contains an invalid combination will … not be executed properly … ACTS allows the user to specify constraints [test environment limitations] that combinations must satisfy to be valid. The specified constraints will be taken into account during test generation; p. 009, 3rd para., This constraint specifies that if OS is Windows, then Browser has to be IE, Firefox, or Netscape [test environment limitations comprise one or more software limitations]; p. 001, ACTS supports two test generation modes, namely, scratch and extend. The former allows a test set to be built from scratch, whereas the latter allows a test set to be built by extending an existing test set. In the extend mode, an existing test set can be a test set that is generated by ACTS, but is incomplete because of some newly added parameters and values, or because of a test set that is supplied by the user and imported into ACTS. Extending an existing test set can save earlier effort that has already been spent in the testing process; see also p. 013, 1st full para.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the combinatorial testing coverage invention of Segall with the combinatorial testing tool of Anonymous for the same reason set forth above.

With respect to claims 5, 13, and 20, Segall also discloses receiving an update to the dynamically generated CTD model (e.g., Figs. 1-2B and associated text, e.g., [0019], The user interface mechanism may also display the currently selected tests and provide means to add, update or remove certain tests from the test plan (S120); see also [0031].).
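Claims 2, 10, and 17 recite an n-wise reduction of a Cartesian product of test value combinations. One common realization, assumed here purely for illustration (this is neither the application’s nor Segall’s actual algorithm), is a greedy covering-array construction at interaction level two:

```python
from itertools import combinations, product

# Hypothetical three-attribute model (illustrative values only).
model = {
    "cpu": ["x86", "arm"],
    "os": ["linux", "windows"],
    "db": ["pg", "mysql"],
}
attrs = list(model)

def pairs_of(test):
    # All attribute-value pairs exercised by a single test.
    return {((a, test[a]), (b, test[b])) for a, b in combinations(attrs, 2)}

# The full Cartesian product of value combinations.
candidates = [dict(zip(attrs, c)) for c in product(*model.values())]

# Every pairwise combination a 2-wise (pairwise) plan must cover.
uncovered = set().union(*(pairs_of(t) for t in candidates))

# Greedy reduction: keep whichever candidate covers the most
# still-uncovered pairs, until every pair is covered.
plan = []
while uncovered:
    best = max(candidates, key=lambda t: len(pairs_of(t) & uncovered))
    plan.append(best)
    uncovered -= pairs_of(best)

print(len(candidates), len(plan))  # the plan is smaller than the full product
```

Greedy selection does not guarantee a minimum-size plan, but for small models it lands at or near the optimum while still covering every pairwise combination.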
With respect to claim 8, Segall also discloses monitoring one or more coverage metrics for one or more value combinations (e.g., Figs. 2A-B and associated text, e.g., [0023], In total, 14 holes are reported, indicating that 14 out of 101 total valid pairs remain uncovered [monitoring one or more coverage metrics for one or more value combinations.]). Claims 6, 7, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Segall in view of Anonymous, Hess, and Hicks, as applied to claims 1 and 9 above, and further in view of Kuhn et al. “Practical Combinatorial Testing” (hereinafter Kuhn). With respect to claims 6 and 14, Segall also discloses an attribute of the existing CTD model (e.g., Figs. 1-2B and associated text, e.g., [0015], The model defines variables (i.e., attributes) [attribute of the existing CTD model].)and a value space for the attribute of the CTD model updating (e.g., Fig. 3 on p. 008 and Fig. 8 on pp. 014-015, along with associated text, e.g., p. 014, The values of a parameter [value space for the attribute] can be modified [updating] by selecting the parameter on the Saved Parameters table on the right hand side, and by clicking on the Modify button.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Segall with the testing tool of Anonymous, such that model attribute values are updated, because configurations invariably evolve over time and the associated model should be updated to accurately reflect the current configuration values in order to generate effective tests. Although Segall in view of Anonymous discloses updating a value space for the attribute of the CTD model (see above), it does not appear to disclose identifying one or more value boundaries for … based on the one or more value boundaries. However, this is taught in analogous art, Kuhn (e.g., Figs. 9-10 on pp. 18-19 and associated text, e.g., p. 
18 last para., The first step will be to develop a table of parameters [attribute] and possible values; p. 19, top para., consider what happens with a large number of possible values [value space] … we must select representative values; p. 20, 1st para. – 3rd para., One common strategy, boundary value analysis, is to select test values at each boundary [identifying one or more value boundaries for] and at the smallest possible unit on either side of the boundary, for three values per boundary [based on the one or more value boundaries] … It is generally also desirable to test the extremes of ranges [identify one or more value boundaries for]. One possible selection of values for the time parameter would then be: 0000, 0539, 0540, 0541, 1019, 1020, 1021, and 1440 [based on the one or more value boundaries].).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the inventions of Segall and Anonymous with the combinatorial testing technique of Kuhn, such that an attribute’s value space is updated based on boundary value analysis to reduce the number of values, because developers should use “a maximum of 8 to 10 values per parameter to keep testing tractable” and “errors are more likely at boundary conditions because errors in programming may be made at these points … [and] It is generally also desirable to test the extremes of ranges,” as suggested by Kuhn (see p. 20, top, and the 3rd para.).

With respect to claims 7 and 15, Anonymous further teaches regenerating one or more of the dynamically generated CTD model based on the updated value space for the attribute of the existing CTD model (e.g., Fig. 1 on p. 006 and associated text, e.g., p. 006, The Test Result shows a test set [dynamically generated CTD model] of the currently selected system, where each row represents a test, and each column represents a parameter; p.
011, § 3.2, To build a test set [dynamically generated CTD model] for a system that is currently open, select the system in the System View, and then select menu Operations -> Build. The latter selection brings up the Options window, as shown in Fig. 6, which allows the following options to be specified for the build operation; p. 013, top para., Mode: This option can be Scratch … a test set should be built from scratch [regenerate] … the current test set in the system may not be complete as the system configuration may have changed [updated] after the last build; p. 014, The values of a parameter can be modified [updated value space].).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Segall with the testing tool of Anonymous, such that the test plan is regenerated to account for attribute value updates, for the same reasons set forth above. Additionally, as configurations evolve over time, some test cases may no longer reflect the current configuration and should be regenerated so that obsolete tests are eliminated and new tests are generated that can effectively test the updated configuration.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Specifically, Kacker et al. “Combinatorial testing for software: An adaptation of design of experiments” discloses combinatorial test design constraints.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEPHEN DAVID BERMAN whose telephone number is (571)272-7206. The examiner can normally be reached on M-F, 9-6 Eastern. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hyung S. Sough, can be reached on 571-272-6799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/STEPHEN D BERMAN/Examiner, Art Unit 2192
/S. SOUGH/SPE, AU 2192

1 Applicant’s specification describes a “dynamically generated CTD model” as consisting of test cases defined by multiple attribute-value pairs (e.g., [0009], “Each test case scenario may be defined or expressed using multiple attributes, with each attribute being defined by a particular value”; [0030], “the test cases of the dynamically generated CTD model.”).
2 Throughout this office action, page number citations for the Anonymous reference will refer to the 3-digit Bates page numbers located at the bottom-center of each page.
3 Applicant’s specification describes a “dynamically generated CTD model” as consisting of test cases defined by multiple attribute-value pairs (e.g., [0009], “Each test case scenario may be defined or expressed using multiple attributes, with each attribute being defined by a particular value”; [0030], “the test cases of the dynamically generated CTD model.”).
4 Applicant’s specification at [0030] recites “identifying 202 a difference in coverage between the CTD model and a dynamically generated CTD model generated by applying one or more test environment limitations to the CTD model includes identifying 204 one or more value combinations not reflected in the dynamically generated CTD model.” (Emphasis added).
5 Throughout this office action, page number citations for the Anonymous reference will refer to the 3-digit Bates page numbers located at the bottom-center of each page.
6 Applicant’s specification describes a “dynamically generated CTD model” as consisting of test cases defined by multiple attribute-value pairs (e.g., [0009], “Each test case scenario may be defined or expressed using multiple attributes, with each attribute being defined by a particular value”; [0030], “the test cases of the dynamically generated CTD model.”).
7 Applicant’s specification at [0030] recites “identifying 202 a difference in coverage between the CTD model and a dynamically generated CTD model generated by applying one or more test environment limitations to the CTD model includes identifying 204 one or more value combinations not reflected in the dynamically generated CTD model.” (Emphasis added).
8 Throughout this office action, page number citations for the Anonymous reference will refer to the 3-digit Bates page numbers located at the bottom-center of each page.
9 Applicant’s specification recites at [0009], “Each test case scenario may be defined or expressed using multiple attributes, with each attribute being defined by a particular value”; see also [0010], a set of test cases (e.g., a set of test value combinations).
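The boundary value analysis cited from Kuhn in the rejection of claims 6 and 14 above can be sketched in a few lines of Python. The `boundary_values` function is illustrative, not code from Kuhn; it applies his stated rule (take each internal boundary plus the smallest unit on either side, and also test the extremes of the range) to the time-parameter example with boundaries at 0540 and 1020 over a 0000-1440 range.

```python
def boundary_values(lo, hi, boundaries, step=1):
    """Select test values per boundary value analysis: each internal
    boundary, the smallest unit on either side of it, and the extremes
    of the range. Illustrative sketch of the technique cited from Kuhn."""
    values = {lo, hi}  # "test the extremes of ranges"
    for b in boundaries:
        # three values per boundary: b-step, b, b+step
        values.update((b - step, b, b + step))
    return sorted(values)

# Kuhn's time-parameter example: boundaries at 0540 and 1020, range 0000-1440.
print(boundary_values(0, 1440, [540, 1020]))
# → [0, 539, 540, 541, 1019, 1020, 1021, 1440]
```

This reproduces the eight values quoted from Kuhn (0000, 0539, 0540, 0541, 1019, 1020, 1021, 1440), shrinking a 1441-value space to a tractable set, which is the rationale the rejection relies on for updating an attribute's value space.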

Prosecution Timeline

Dec 07, 2023
Application Filed
Aug 09, 2025
Non-Final Rejection — §103, §112
Nov 12, 2025
Interview Requested
Nov 13, 2025
Response Filed
Nov 13, 2025
Applicant Interview (Telephonic)
Nov 30, 2025
Examiner Interview Summary
Feb 21, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591505
ANALYSIS OF CODE COVERAGE DIFFERENCES ACROSS ENVIRONMENTS
2y 5m to grant · Granted Mar 31, 2026
Patent 12572372
SEMANTIC METADATA VALIDATION
2y 5m to grant · Granted Mar 10, 2026
Patent 12572361
METHOD, APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM FOR DETERMINING PAGE JUMP INFORMATION
2y 5m to grant · Granted Mar 10, 2026
Patent 12547379
GENERATING TARGET LANGUAGE CODE FROM SOURCE LANGUAGE CODE
2y 5m to grant · Granted Feb 10, 2026
Patent 12541344
RECORDING MEDIUM, PROGRAMMING ASSISTING DEVICE, AND PROGRAMMING ASSISTING METHOD
2y 5m to grant · Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
79%
Grant Probability
99%
With Interview (+56.6%)
2y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 331 resolved cases by this examiner. Grant probability derived from career allow rate.
