Prosecution Insights
Last updated: April 19, 2026
Application No. 18/077,679

TEST COVERAGE DETERMINATION BASED ON LOG FILES

Non-Final OA: §101, §103, §112
Filed
Dec 08, 2022
Examiner
KANG, INSUN
Art Unit
2193
Tech Center
2100 — Computer Architecture & Software
Assignee
International Business Machines Corporation
OA Round
1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79%, above average (515 granted / 655 resolved; +23.6% vs TC avg)
Interview Lift: +40.2%, strong (resolved cases with an interview vs. without)
Avg Prosecution: 3y 5m (typical timeline); 23 applications currently pending
Total Applications: 678 across all art units (career history)
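As a quick sanity check on the headline figure (assuming, since the methodology is not stated, that the allow rate is the simple ratio of granted to resolved cases):

$\text{career allow rate} = 515 / 655 \approx 0.786 \approx 79\%$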

Statute-Specific Performance

§101: 17.7% (-22.3% vs TC avg)
§103: 35.2% (-4.8% vs TC avg)
§102: 19.8% (-20.2% vs TC avg)
§112: 19.6% (-20.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 655 resolved cases

Office Action

§101 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the application papers dated 12/8/2022. Claims 1-20 are pending in the application. The information disclosure statement filed on 12/8/2022 has been considered. Note that the specification recites that “A computer readable storage medium, as that term is used in the present disclosure, is not to be construed as storage in the form of transitory signals per se” at [0041]; therefore, the computer-readable storage medium recited in claim 19 is considered to be non-transitory.

Claim Objections

Claims 1, 19 and 20 are objected to because of the following informalities: In claim 1, a “:” is missing after “configured to”. In claims 19 and 20, it appears that “computer-readable” should read “computer readable” to be consistent with the expression used in the specification. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 6, 7, 15, 16 and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claims 6, 7, 15, 16 and 20 recite the limitations “the productive log,” “the test log,” and “the plurality.” There is insufficient antecedent basis for these limitations in the claims. Interpretation: the productive log file, the test log file, and the nodes, respectively.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Specifically, claims 1-20 are directed to an abstract idea.

Per claim 1, the claim is directed to an idea of itself: mental processes that can be performed in the human mind, or by a human using pen and paper. The steps of comparing a test log file, identifying one or more components, and generating a visualization, as drafted, can be pure mental processes. The visualization generation is the mere generation of a graph with nodes and edges, which can be performed by a human. The limitations encompass a human mind carrying out the function through observation, evaluation, judgment and/or opinion, or even with the aid of pen and paper. The additional limitations, the steps of storing log files and displaying the visualization via a user interface, are insignificant extra-solution activities for storing data and displaying the result of the mental steps.

The additional limitations of a data store, a processor and a user interface are described at a high level of generality for applying or performing the abstract idea and do not indicate any integration of the abstract idea into a practical application, as the mental steps are merely applied with generic computing components. See MPEP 2106.05(f) and 2106.05(h). It is noted that employing generic computer functions to execute an abstract idea, even when limiting the use of the idea to one particular environment, does not add significantly more, similar to how limiting the abstract idea in Flook to the petrochemical and oil-refining industries was insufficient. Therefore, the additional limitations do not integrate the abstract idea into a practical application. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components or insignificant extra-solution activities (e.g., processors, devices, program instructions), then it falls within the “Mental Processes” grouping of abstract ideas (2019 PEG Step 2A, Prong One: abstract idea grouping? Yes, mental processes). At most, the storing and displaying steps are not found to include anything more than what is well-understood, routine, conventional activity in the field. In this case, the claimed extra-solution activities of storing data and displaying the result are well-understood, routine, conventional (WURC) activities recognized by the courts, as listed in MPEP 2106.05(d)(II) (for example, data gathering and retrieving, storing data, and transmitting/displaying a result: Symantec, Versata Dev., Content Extraction, Electric Power Group). Insignificant extra-solution activities or mere instructions to apply an exception using generic computer components cannot provide an inventive concept. Viewing the limitations individually and as a combination, the additional elements merely perform data storing to gather data for the mental steps, display the result, and perform the mental steps using generic computing components as tools, without integrating the abstract idea into a practical application. For at least these reasons, claim 1 is not patent eligible.

Per claims 2-9, these claims are directed to the same abstract idea as claim 1, reciting details of the mental steps (building a log tree, which can be built manually; executing tests; adding a weight, which can be done by a human) without adding any other additional element that is significantly more, other than the insignificant extra-solution activity of storing. Therefore, these claims are rejected for the same reasons as claim 1.

Per claims 10-18, these claims are directed to the same abstract idea as claims 1-9, reciting only the same mental steps and additional elements recited in claims 1-9 without adding any other additional element that is significantly more. Therefore, these claims are rejected for the same reasons as claims 1-9.

Per claims 19 and 20, these claims are directed to the same abstract idea as claims 1 and 6, reciting only the same mental steps and additional elements recited in claims 1 and 6 without adding any other additional element that is significantly more. Therefore, these claims are rejected for the same reasons as claims 1 and 6.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-13, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bajoria (US11151020) in view of Porter et al. (US9559928, hereafter Porter).

1. An apparatus comprising: a data store configured to store log files (Bajoria, see at least fig. 1 and associated texts, The telemetry or log data may be stored in execution log data store 140 … log data generated during execution of a software application within production environment 110 and test environment 120; fig. 5 and associated texts, obtain telemetry data from an execution log data store (e.g., a remotely located execution log data store 140 …) in production and test environments in a software application development environment);

a processor configured to compare a test log file from the data store which is generated from tests performed on source code in a test environment to a productive log file from the data store that is generated by execution of the source code in a productive environment, and identify one or more components of the source code that are not covered by the tests based on the comparison (Bajoria, see at least fig. 1 and associated texts, the telemetry or log data may be generated such that a hierarchical graph may be generated from the telemetry or log data for any given function executed within a production environment 110 or test environment 120 … generates execution graphs from log data generated during execution of a software application within production environment 110 and test environment 120, analyzes the generated execution graphs for a given function to identify differences in how the given function is executed in production environment 110 and test environment 120; note that the differences are identified by comparing the test and production log data);

and generate a visualization which identifies the one or more components that are not covered by the tests and display the visualization via a user interface (Bajoria, see at least fig. 1 and associated texts, the telemetry or log data may be generated such that a hierarchical graph may be generated from the telemetry or log data for any given function executed within a production environment 110 or test environment 120 … To compare execution graphs generated for an invoked function in production environment 110 and test environment 120 … to determine whether a node in the test environment execution graph exists at the same location as the corresponding node in the production environment execution graph … to determine the extent of the changes made to a function between a production environment 110 and test environment 120 … execution graph analyzer 134 requests that a developer manually review and approve the changes to a function prior to deployment; fig. 3 and associated texts, generate an alert identifying the differences between the identified function in the production and test environments … the alert may indicate function calls, database calls, remote service invocation, and other operations that are not present in one of the execution graphs for the production or test environments and request that a user (e.g., a developer of the software application) provide feedback with respect to the differences between the function in the test and production environments … If, in response to the notification, a developer indicates to execution graph analyzer 134 that the changes reflected in the test environment execution graph are intentional … may prompt the creation of subsequent execution graphs that may be analyzed by execution graph analyzer 134; note that an alert is generated for a user to review the differences and provide feedback, which requires visualization of data and a user interface for the review and feedback).

Bajoria does not explicitly state that the visualization includes identifiers; however, Bajoria states that the alert indicates “function calls, database calls, remote service invocation, and other operations that are not present in one of the execution graphs for the production or test environments” (Bajoria, see at least fig. 1 and associated texts). A function call path ID, a function name/address, a database ID, etc., respectively, would be required for those operations to point to the specific function calls and other operations that are present or not present in the graphs. Nonetheless, Porter teaches a visualization including identifiers (Porter, see at least fig. 1 and associated texts, The report 136 may include multiple test coverage metrics. The report 136 may also include path-by-path results expressed in a tabular format. For example, the report 136 may include a table, where each row in the table represents data for a particular call path. In the example shown below, the first column represents the call path identifier of each particular call path … the report 136 may be used to gain transparency into the lack of coverage in this case. By reporting the call paths that are invoked in production but not in the test scenario, or vice versa, the report generation functionality 131 may give the user the ability to identify differences between the two environments).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Bajoria's testing system to include identifiers in the visualization as taught by Porter, with a reasonable expectation of success, since the references are analogous art from the same field of endeavor related to testing and code development. Combining Porter's functionality with that of Bajoria results in a system that displays identifiers. One of ordinary skill would have been motivated to make this combination in order to present the user with a report/alert including identifiers of the particular components that are not covered in the production or test environments, for transparency and user convenience (Porter, see at least fig. 1 and associated texts, cited above).
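The log-file comparison recited in claim 1, as characterized above, reduces to a set difference over component identifiers. The following minimal Python sketch (an editorial illustration, not part of the Office Action; the one-component-per-line log format, file names, and component names are assumptions) shows the idea:

    def read_components(log_path):
        """Collect the component identifiers recorded in a log file.

        Assumes each line begins with a component identifier followed by
        the log message, e.g. "checkout.tax_service computed VAT".
        """
        components = set()
        with open(log_path) as log:
            for line in log:
                line = line.strip()
                if line:
                    components.add(line.split()[0])
        return components

    def untested_components(test_log, productive_log):
        """Return components executed in production but never hit by tests."""
        return read_components(productive_log) - read_components(test_log)

    # Prints e.g. {'checkout.tax_service'} if that component appears only
    # in the productive log.
    print(untested_components("test.log", "production.log"))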
2. The apparatus of claim 1, wherein the processor is further configured to build a log tree based on the source code, wherein the log tree comprises nodes representing code segments in the source code and edges between the nodes representing data flow dependencies among the code segments during execution (Bajoria, see at least figs. 1-3 illustrating graph generation; fig. 1 and associated texts, Based on the telemetry data stored in execution log data store 140 … To compare execution graphs generated for an invoked function in production environment 110 and test environment 120, execution graph analyzer may traverse each graph by walking the graph from the root node of the graph to each child node in the graph; note that the graphs are in a hierarchical format (tree) and the nodes are connected with edges (e.g., calls) which represent dependencies among the nodes (child-parent)).

3. The apparatus of claim 2, wherein the processor is configured to identify a test path of components within the source code that are called by the tests based on the log tree and the test log file and identify a productive path of components within the source code that are called by execution of the source code in the productive environment based on the log tree and the productive log file (Bajoria, see at least fig. 1 and associated texts, the telemetry or log data may be generated such that a hierarchical graph may be generated from the telemetry or log data for any given function executed within a production environment 110 or test environment 120 … fig. 4 and associated texts, execution graph 410 includes the checkout function as a root node and indicates that the checkout function invokes two services; fig. 5 and associated texts, This telemetry data may include, for example, information about function calls generated during execution of an application in production environment 110 and test environment 120, database calls, remote service invocation (e.g., through HTTP service calls), data generated or consumed during execution of an application, and other information that may be used to generate execution graphs for use by graph analyzer 530 to allow or block deployment of application components to a production environment; note that the test and production graphs provide information about the call paths).

4. The apparatus of claim 3, wherein the processor is configured to compare the test path of components to the productive path of components to identify the one or more components of the source code that are not covered by the tests (Bajoria, see at least fig. 2 and associated texts, identifies differences between the first graph generated from the production environment and the second graph generated from the test environment. In one embodiment, to identify differences between the first and second execution graphs, the system may traverse the first and second execution graphs to determine whether nodes existing at a particular level of a graph exist in both the first and second graphs. If a node exists at the same location in both the first and second execution graphs, the system need not take any action with respect to that node. If, however, a node does not exist in one of the first or second execution graphs, the system may increment a counter used to track the number of nodes that differ between the first and second execution graphs; note that the differences identified from the comparison include components covered or not covered by the tests).

6. The apparatus of claim 1, wherein the visualization comprises a graph which includes a path of nodes representing a sequence of dependent components identified from the productive log which are not included in the test log, and edges between the plurality of nodes representing dependencies (Bajoria, see at least figs. 1-3 illustrating graph generation; fig. 1 and associated texts, Based on the telemetry data stored in execution log data store 140 … To compare execution graphs generated for an invoked function in production environment 110 and test environment 120, execution graph analyzer may traverse each graph by walking the graph from the root node of the graph to each child node in the graph; fig. 2 and associated texts, the passage quoted for claim 4 above, ending If, however, a node does not exist in one of the first or second execution graphs, the system may increment a counter used to track the number of nodes that differ between the first and second execution graphs; note that the graphs are in a hierarchical format (tree) and the nodes are connected with edges (e.g., calls) which represent dependencies among the nodes (child-parent)).

7. The apparatus of claim 1, wherein the visualization comprises a graph which includes a path of nodes representing a sequence of dependent components identified from the test log which are not included in the productive log, and edges between the plurality of nodes representing dependencies (Bajoria, see the passages cited for claim 6 above; the traversal identifies nodes that exist in one of the first or second execution graphs but not the other, which covers sequences present in the test graph but absent from the production graph as well).

8. The apparatus of claim 1, wherein the processor is further configured to execute the tests on the source code in the test environment, generate the test log file based on the execution of the tests, and store the generated log file in the data store (Bajoria, see at least fig. 1 and associated texts, during execution of a function provided by production application components 112 or production shared services 114 in production environment 110 or test application components 122 or test shared services 124 in test environment 120, telemetry data may be generated for each action performed during execution of the function. The telemetry or log data may be stored in execution log data store 140; fig. 5 and associated texts; note that the execution log data store stores the test and production log data captured from monitoring activity generated during execution of application components and shared components in production environment 110 and test environment 120).

9. The apparatus of claim 1, wherein the processor is further configured to generate the productive log file based on execution of the source code in the productive environment and store the productive log file in the data store (Bajoria, see the passages cited for claim 8 above; execution log data store 140 stores both the test and the production log data).
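The hierarchical-graph traversal Bajoria is cited for in claims 2, 4, 6 and 7 can be sketched in a few lines. This hypothetical Python example (editorial illustration only; the Node class and sample call graphs are invented) walks a production and a test execution graph from the root and counts nodes present in one graph but missing from the other:

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        """One function/component in a hierarchical execution graph."""
        name: str
        children: list = field(default_factory=list)

    def diff_count(prod, test):
        """Count nodes that exist in only one of the two execution graphs."""
        prod_kids = {c.name: c for c in prod.children}
        test_kids = {c.name: c for c in test.children}
        # Children present in exactly one graph differ by definition.
        count = len(prod_kids.keys() ^ test_kids.keys())
        # Recurse into children present in both graphs.
        for name in prod_kids.keys() & test_kids.keys():
            count += diff_count(prod_kids[name], test_kids[name])
        return count

    prod = Node("checkout", [Node("tax_service"), Node("inventory_db")])
    test = Node("checkout", [Node("inventory_db")])
    print(diff_count(prod, test))  # 1: tax_service is missing from the test graph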
Per claims 10-13 and 15-18, they are the method versions of claims 1-4 and 6-9, respectively, and are rejected for the same reasons set forth in connection with the rejection of claims 1-4 and 6-9 above. Per claims 19 and 20, they are the medium versions of claims 1 and 6, respectively, and are rejected for the same reasons set forth in connection with the rejection of claims 1 and 6 above.

Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Bajoria in view of Porter, and further in view of Mahler et al. (US20140096111, hereafter Mahler).

Per claim 5, Bajoria teaches wherein the processor is configured to add a counter to one or more components within the productive path of components based on a frequency in which the one or more components are called and identify the one or more components of the source code that are not covered by the tests based on the count (Bajoria, see at least fig. 1 and associated texts, As execution graph analyzer 134 traverses through the production and test environment execution graphs, execution graph analyzer 134 can maintain a running counter of the number of differences between nodes in the production environment execution graph and the test environment execution graph. This counter may be used, as discussed in further detail below, to determine the extent of the changes made to a function between a production environment 110 and test environment 120 and manage deployment of one or more application components from test environment 120 to production environment 110; fig. 2 and associated texts, If, however, a node does not exist in one of the first or second execution graphs, the system may increment a counter used to track the number of nodes that differ between the first and second execution graphs).

Bajoria and Porter do not explicitly teach that the counter is a weighted counter. Mahler teaches adding a weight to one or more components based on a frequency in which the one or more components are called (Mahler, see at least [0023], the logical code units that are executed more frequently in the productive environment may be weighted so that the percentage of overlap increases if frequently executed logical code units are covered by the test cases. Logical code units not likely to be executed, or executed less frequently, may be weighted lower. In still further embodiments, logical code units may be weighted based on an importance to the functioning of the business).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine Mahler's weighting with Porter's identifiers and Bajoria's testing system, with a reasonable expectation of success, since the references are analogous art from the same field of endeavor related to testing and code development. Combining Mahler's functionality with that of Bajoria and Porter results in a system that adds a weight to components. One of ordinary skill would have been motivated to make this combination in order to track events with different weights based on their value, for an improved analysis (Mahler, see at least [0023], cited above).

Bajoria in view of Porter and Mahler further teaches: identify the one or more components of the source code that are not covered by the tests based on the added weight (Bajoria, see at least figs. 1 and 2 and associated texts, cited above for the running counter of the differences between the production and test environment execution graphs).

Per claim 14, it is the method version of claim 5 and is rejected for the same reasons set forth in connection with the rejection of claim 5 above.
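The frequency weighting attributed to Mahler [0023] in the claim 5 rejection can likewise be illustrated with a short hypothetical Python sketch (editorial illustration; component names and call counts are invented): each component is weighted by its production call frequency, so the coverage overlap rises when frequently executed units are covered by the tests.

    from collections import Counter

    def weighted_coverage(production_calls, tested_components):
        """Fraction of production call volume exercised by the tests."""
        freq = Counter(production_calls)  # weight = production call frequency
        total = sum(freq.values())
        covered = sum(n for comp, n in freq.items() if comp in tested_components)
        return covered / total if total else 0.0

    calls = ["checkout"] * 90 + ["tax_service"] * 9 + ["refund"]
    print(weighted_coverage(calls, {"checkout"}))  # 0.9: the hot path is covered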
Examiner's Note

The Examiner has pointed out particular references contained in the prior art of record within the body of this action for the convenience of the Applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply. In preparing the response, Applicant should fully consider each entire reference as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the Examiner.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US20170075795 relates to test coverage analysis that identifies test gaps using code execution paths, comparing tree data structures for the test code paths with production code paths identified by a production log analyzer and displaying the test gaps to a user; US10248549 relates to detection of untested code execution paths for test coverage data generation.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to INSUN KANG, whose telephone number is (571) 272-3724. The examiner can normally be reached M-Th 8-5pm; week 2: Tu-F 8-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chat Do, can be reached at (571) 272-3721. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/INSUN KANG/
Primary Examiner, Art Unit 2193

Prosecution Timeline

Dec 08, 2022
Application Filed
Nov 01, 2023
Response after Non-Final Action
Feb 07, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596632
METHOD FOR TESTING A COMPUTER PROGRAM
2y 5m to grant • Granted Apr 07, 2026
Patent 12578981
GAME TRANSLATION METHOD, AND ELECTRONIC DEVICE, AND COMPUTER READABLE MEDIUM THEREOF
2y 5m to grant • Granted Mar 17, 2026
Patent 12578945
INSTANT INSTALLATION OF APPS
2y 5m to grant • Granted Mar 17, 2026
Patent 12530211
SYSTEMS AND METHODS FOR DYNAMIC SERVER CONTROL BASED ON ESTIMATED SCRIPT COMPLEXITY
2y 5m to grant • Granted Jan 20, 2026
Patent 12498906
INLINE CONVERSATION WITH ARTIFICIAL INTELLIGENCE WITHIN CODE EDITOR USER INTERFACE
2y 5m to grant • Granted Dec 16, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 99% (+40.2%)
Median Time to Grant: 3y 5m
PTA Risk: Low
Based on 655 resolved cases by this examiner. Grant probability derived from career allow rate.
