Prosecution Insights
Last updated: April 19, 2026
Application No. 18/572,214

A Concept for Generating Code of Test Cases

Non-Final OA: §102, §103

Filed: Dec 20, 2023
Examiner: PAULINO, LENIN
Art Unit: 2197
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intel Corporation
OA Round: 1 (Non-Final)

Grant Probability: 57% (Moderate)
Expected OA Rounds: 1-2
Median Time to Grant: 4y 2m
Grant Probability with Interview: 82%

Examiner Intelligence

Career Allow Rate: 57% (186 granted / 327 resolved; +1.9% vs TC avg)
Interview Lift: +25.3% for resolved cases with interview (strong)
Typical Timeline: 4y 2m average prosecution; 34 applications currently pending
Career History: 361 total applications across all art units
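The headline figures above can be reproduced from the raw career counts. A minimal Python sketch, assuming (as the page implies) that the interview lift is additive in percentage points on top of the career allow rate — the dashboard vendor's actual model is not disclosed:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def with_interview(base_rate: float, lift_points: float) -> float:
    """Projected grant probability after an examiner interview.

    Assumes the lift is additive in percentage points, capped at 100%.
    """
    return min(base_rate + lift_points, 100.0)

base = allow_rate(186, 327)                  # 186 granted of 327 resolved
print(round(base))                           # 57 (the "Career Allow Rate")
print(round(with_interview(base, 25.3)))     # 82 (the "With Interview" figure)
```

So 186/327 ≈ 56.9% rounds to the 57% shown, and adding the +25.3% lift gives the 82% with-interview projection.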

Statute-Specific Performance

§101: 21.1% (-18.9% vs TC avg)
§103: 57.5% (+17.5% vs TC avg)
§102: 8.4% (-31.6% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)

Deltas are measured against the Tech Center average estimate. Based on career data from 327 resolved cases.
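As a quick consistency check on the figures above (illustrative only): each statute-specific rate should equal the Tech Center average plus the quoted delta, so subtracting the delta from the rate recovers the baseline. In this data, every row maps back to the same 40.0% Tech Center average:

```python
# Statute -> (examiner's rate %, delta vs Tech Center average %)
rows = {
    "101": (21.1, -18.9),
    "103": (57.5, +17.5),
    "102": (8.4, -31.6),
    "112": (7.2, -32.8),
}

# rate = TC_avg + delta, therefore TC_avg = rate - delta
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in rows.items()}
print(baselines)  # every statute recovers a 40.0% TC baseline
```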

Office Action

Rejections: §102, §103
DETAILED ACTION

Claims 1-13 and 19-24 are pending. Claims 14-18 have been cancelled. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Examiner’s Notes

Examiner has cited particular columns and line numbers, paragraph numbers, or figures in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that the applicant, in preparing the responses, fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 2, 10, 13, 19, 21, 23 and 24 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Tillmann et al. (US-PGPUB-NO: 2007/0033443 A1), hereinafter Tillmann.

As per claim 1, Tillmann teaches an apparatus for generating code for test cases for testing a function under test, the apparatus comprising interface circuitry and processing circuitry to: obtain a diagram representation of the function under test (see Tillmann paragraph [0021], “In one example, a graph is created that identifies constraints that must be satisfied to travel a path through a graph of states of the IUT. A constraint solver automatically generates test cases (e.g., value assignments to the input parameters of the parameterized unit test) by determining the test inputs that satisfy the constraints of an execution path through the IUT”); select a plurality of symbols of interest from the diagram representation of the function under test, the symbols of interest being based on a pre-defined set of symbols of interest (see Tillmann paragraph [0027], “In another example, a PUT is instantiated with symbolic variables as PUT inputs. These symbolic inputs can be symbols replacing concrete values (e.g., integers, Boolean, string, etc.) or symbols replacing objects. In several examples, symbols replacing objects are supported by mock objects. In one example, these symbols are exported to the signature of the test. These symbols as inputs are said to symbolically instantiate the PUT, and the symbolically instantiated PUT is symbolically executed to obtain relationships on symbolic inputs. In one such example, a mock object is created and that mock object is placed in the call to the symbolically instantiated PUT (along with other input symbolic values)”); and generate, for the symbols of interest, code of a plurality of test cases based on a pre-defined set of checks related to the pre-defined symbols of interest (see Tillmann paragraph [0029], “Such chosen inputs that exercise the PUT can also be used to automatically create code for TUTs 106. Thus, a PUT instantiated with concrete values is similar to a TUT, except that the concrete values are delivered to the PUT via the input parameters”).

As per claim 2, Tillmann teaches wherein the pre-defined set of symbols of interest comprises one or more of a symbol of interest related to a branch condition, a symbol of interest related to an external function call, a symbol of interest related to an input/output parameter change, a symbol of interest related to a global variable change, a symbol of interest related to one or more pre-conditions imposed on a software or hardware environment, a symbol of interest related to a sequence of function calls and a symbol of interest related to a flow of a transition between different functions (see Tillmann paragraph [0031], “In one example, in order to generalize 104 a TUT, a method removes one or more concrete values from the body of the TUT and replaces those concrete values with symbolic values. If desirable, these symbolic values can also be placed in the test call as input parameters to create the call signature for the parameterized unit test (PUT). The PUT then serves as input to symbolic execution which traverses the object(s) or implementations under test to discover concrete values that provide a desired coverage (e.g., path coverage). The PUT (or TUT) is then called (or in case of a TUT populated) with the concrete values to verify behavior for the desired coverage”).

As per claim 10, Tillmann teaches wherein the circuitry is configured to recursively traverse the symbols of interest to generate the code for the plurality of test cases (see Tillmann paragraphs [0031-0032], “The PUT then serves as input to symbolic execution which traverses the object(s) or implementations under test to discover concrete values that provide a desired coverage (e.g., path coverage). The PUT (or TUT) is then called (or in case of a TUT populated) with the concrete values to verify behavior for the desired coverage. Customers have already written TUTs 102 for various programs. The described methods can leverage those prior existing TUTs to create 104 PUTs that can be symbolically executed 112 to provide better or more complete coverage of those existing IUTs. And, as those existing IUTs are upgraded or otherwise altered, the PUTs as automatically generated from the TUTs, provided automated test coverage of the upgraded or altered IUTs”).

As per claim 13, Tillmann teaches wherein the circuitry is configured to run the code for the plurality of test cases on the function under test (see Tillmann paragraph [0101], “The modification includes replacing plural concrete values in the traditional unit test with symbols, and exporting the symbols into a parametric signature of the parameterized unit test. A symbolic executor 508 identifies constraints while symbolically executing the created parameterized unit test of the implementation under test. A constraint solver 510 and or theorem prover 514 generates a set of test cases by solving for values that satisfy the series of constraints. The test program 518 executes the automatically generated test cases 516 and optimally directs other resources. Optionally, the parameterized unit test 506 includes program statements that require the symbolic execution of objects and associated fields”).

As per claims 19, 21 and 23, these are the method claims to apparatus claims 1, 10 and 13, respectively. Therefore, they are rejected for the same reasons as above.

As per claim 24, this is the machine-readable storage medium claim to method claim 19 (see Tillmann paragraph [0165], “The drives and their associated computer-readable media provide nonvolatile storage of data, data structures, computer-executable instructions, etc. for the computer 820. Although the description of computer-readable media above refers to a hard disk, a removable magnetic disk and a CD, it should be appreciated by those skilled in the art that other types of media which are readable by a computer, such as magnetic cassettes, flash memory cards, digital video disks, Bernoulli cartridges, and the like, may also be used in the exemplary operating environment”). Therefore, it is rejected for the same reasons as above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3-9, 12, 20 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Tillmann (US-PGPUB-NO: 2007/0033443 A1) in view of Krishnan et al. (US-PGPUB-NO: 2010/0198799 A1), hereinafter Krishnan.
As per claim 3, Tillmann does not explicitly teach wherein the pre-defined set of symbols of interest comprises a symbol of interest related to a global variable change, the pre-defined set of checks comprising one or more of a check for ascertaining that a value of one or more global variables is set before the function under test is called and a check for ascertaining a correctness of the global variable change. However, Krishnan teaches this limitation (see Krishnan paragraph [0175], “FIG. 16 illustrates a method to detect race condition and redundant synchronization according to a preferred embodiment wherein the number of steps described herein are two wherein the first step it is determined if a global variable has escaped which is the ability for the global variable to be read and/or written by multiple threads (1601). Next, if a variable has escaped, the existences of adequate locks which protect the variable access are checked and it is ensured that only one thread can access the variable at a time. If there are not adequate locks then a race condition bug is reported”).

Tillmann and Krishnan are analogous art because they are in the same field of endeavor of software development. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Tillmann’s teaching of automated generalization of test cases with Krishnan’s teaching of algorithmic and heuristic software simulation for automated defect detection and correction, in order to incorporate tracking of global variables and better detect errors when testing software for possible bugs.

As per claim 4, Tillmann modified with Krishnan teaches wherein the pre-defined set of symbols of interest comprises a symbol of interest related to an input/output parameter change, the pre-defined set of checks comprising a check for ascertaining that an input parameter is set before the function under test is called (see Krishnan paragraphs [0377-0378], “In particular, methods which have no side effects (i.e. only compute results based in input parameter and do not update any global state) are actually executed (interpreted) instead of being simulated (4302). If there are multiple sets of input values of the method, the method body is executed once for each set of inputs, to obtain a set of result values. [0378] iii. Values of input parameters and other unknown variable values are obtained by capturing the values of those variables during run-time testing using existing test suites (since all software projects have a suite of tests) (4303). These values are then provided to the simulation engine to get more accuracy and complement the values that are derived during simulation”).

As per claim 5, Tillmann modified with Krishnan teaches wherein the pre-defined set of symbols of interest comprises a symbol of interest related to one or more pre-conditions imposed on a software environment, the pre-defined set of checks comprising a check related to the one or more pre-conditions imposed on the software environment (see Krishnan paragraph [0333], “Pre-condition and invariant rules: these are rules that apply to the values of variables, and impose constraints on the set of values which are acceptable. According to a preferred embodiment assumes that if a method throws one of a specified kind of exception or error (default: all exceptions and errors), and the conditions governing the throwing this exception involve method parameters, then some pre-condition or invariant for the method has been violated. The rule generation framework determines pre-conditions in the following ways”).

As per claim 6, Tillmann modified with Krishnan teaches wherein the pre-defined set of symbols of interest comprises a symbol of interest related to an external function call, the pre-defined set of checks comprising one or more of a check related to a behavior of the external function call, a check related to an input parameter of the external function call, and a check related to an execution sequence dependency of the external function call (see Krishnan paragraph [0329], “FIG. 35 illustrates a method for flag rule design and applications according to a preferred embodiment, a framework for flag rules are provided. Rules are provided with APIs that allow them to get and set the values of flags on the ValueMaps for variable/expression values (3501). Each flag type is distinguished by a key String, so multiple different kinds of flags can be set by different rules (3502). During operations on values such as union and intersection, and during unary and binary operations the flag values are propagated from operand values to result values (3503). At union operations, flag values from the input values are accumulated by default (3504). So when a rule queries the flags of a variable/expression value, it gets all possible flag values that can flow to the rule's node. This default behavior is sufficient for security vulnerability rules”).

As per claim 7, Tillmann modified with Krishnan teaches wherein the pre-defined set of symbols of interest comprises a symbol of interest related to an external function call and/or a symbol of interest related to a global variable change, the pre-defined set of checks comprising a check related to an impact of the function under test on global memory or an external resource (see Krishnan paragraph [0113], “The values of a variable which represents a memory location to be either a local stack variable or a global heap variable for an object field or array element at a given point in the program are represented by a ValueMap (1002). The Value Map contains multiple entries representing different values of the variable on different calling contexts. Each entry consists of a ValueSet representing a set of values and a ContextGraph representing the calling contexts on which the variable has those values. Each value appears only once in any ValueSet in the ValueMap. The ValueMap needs only as many entries as are necessary to distinguish different values from different calling contexts. Thus even if a method has 10^15 calling contexts reaching it, but a particular parameter value to the method has only 2 values, then the ValueMap for that parameter variable needs a maximum of 2 entries”).

As per claim 8, Tillmann modified with Krishnan teaches wherein the pre-defined set of checks comprises one or more of a check related to a change related to an output variable in the function under test and a check related to a return value of the function under test (see Krishnan paragraph [0363], “FIG. 40 illustrates a method to use method summary according to a preferred embodiment. The method summary is created by including all expressions and computations which are on the path from every parameter use to every return statement where values are returned, and which use a parameter value directly or indirectly (4001). Each parameter value is marked with a "DependsOnInput" field, which contains a list of expressions which use the parameter value. Any operation whose operand has the DependsOnInput field adds itself to the DependsOnInput field and includes it in the output values (4002). Then at a return statement, the DependsOnInput field of the values being returned has all the expressions which have operated on the input parameters to create the result. Only those expressions are then retained, and the other nodes in the program model are deleted to produce the method summary (4003)”).

As per claim 9, Tillmann modified with Krishnan teaches wherein the circuitry is configured to generate the code for the plurality of test cases based on one or more symbols of interest related to branch conditions (see Krishnan paragraphs [0260-0261], “FIG. 26 illustrates a method for method body inlining according to the embodiment, wherein the steps involved include: [0261] i. Creation of a branch point at the method call site, and a successor BasicBlock is created for each of the callee methods (2602). This successor BasicBlock will be the special InlinedStart method after which the body of the callee method will appear. The join point corresponding to the branch point is also created and a predecessor BasicBlock is created for each callee method, which will be the special InlinedExit BasicBlock marking the end of the inlined callee method body (2603)”).
As per claim 11, Tillmann modified with Krishnan teaches wherein the circuitry is configured to recursively identify a plurality of different behaviors of the function under test, and to generate the code for the plurality of test cases based on the plurality of different behaviors of the function under test, each test case representing one behavior of the function under test (see Krishnan paragraph [0381], “A difficult problem is how to simulate unit testing or functional testing for different sets of concrete input values. For each set of inputs there is a different result which is a function of the inputs. All the input sets are simulated together. This requires co-relating each input-set-result pair. The simulation engine provides for co-relating inputs and results by associating a specific calling context with each input set, intermediate values and result, using context-sensitive simulation (4306). Then rules are applied separately for each input-result pair corresponding to the same calling context (4307)”).

As per claim 12, Tillmann modified with Krishnan teaches wherein the diagram representation of the function under test is based on the Unified Modeling Language (see Krishnan paragraph [0383], “viii. A unified rule architecture is created which allows rules for both simulation and dynamic analysis to be specified in a consistent way, and then those same rules can be checked during simulation as well as during dynamic run-time analysis (4309). In particular, the rule architecture described in this document allows rule script code to be injected into the run-time execution of the program”).

As per claims 20 and 22, these are the method claims to apparatus claims 9 and 11, respectively. Therefore, they are rejected for the same reasons as above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Fuchs (US-PGPUB-NO: 2020/0174760 A1) teaches automatic code generation. Hirt et al. (US-PGPUB-NO: 2018/0074944 A1) teaches test case generation built into a data-integration workflow editor.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LENIN PAULINO, whose telephone number is (571) 270-1734. The examiner can normally be reached Week 1: Mon-Thu 7:30am - 5:00pm; Week 2: Mon-Thu 7:30am - 5:00pm and Fri 7:30am - 4:00pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets, can be reached at (571) 272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LENIN PAULINO/
Examiner, Art Unit 2197

/BRADLEY A TEETS/
Supervisory Patent Examiner, Art Unit 2197

Prosecution Timeline

Dec 20, 2023: Application Filed
Feb 19, 2026: Non-Final Rejection, §102 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596635: BLACK-BOX FUZZING TESTING METHOD AND APPARATUS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12541449: AUTOMATIC GENERATION OF ASSERT STATEMENTS FOR UNIT TEST CASES (granted Feb 03, 2026; 2y 5m to grant)
Patent 12524217: SYSTEMS AND METHODS FOR AUTOMATED RETROFITTING OF CUSTOMIZED CODE OBJECTS (granted Jan 13, 2026; 2y 5m to grant)
Patent 12517811: METHOD, SYSTEM AND DEVICE FOR GENERATING TEST CASE FOR AUTOMOTIVE CYBERSECURITY DETECTION (granted Jan 06, 2026; 2y 5m to grant)
Patent 12505029: SYSTEMS, METHODS, AND GRAPHICAL USER INTERFACES FOR GENERATING A COMPUTER-EXECUTABLE USABILITY STUDY APPLICATION (granted Dec 23, 2025; 2y 5m to grant)
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 57% (82% with interview, +25.3%)
Median Time to Grant: 4y 2m
PTA Risk: Low

Based on 327 resolved cases by this examiner. Grant probability is derived from the career allow rate.
