Prosecution Insights
Last updated: April 19, 2026
Application No. 18/617,766

PROCESSOR CORE USAGE MONITOR TOOL

Non-Final OA: §102, §103
Filed: Mar 27, 2024
Examiner: KHATRI, ANIL
Art Unit: 2191
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)
Grant Probability: 92% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 92%, above average (961 granted / 1039 resolved; +37.5% vs TC avg)
Interview Lift: +29.3% on resolved cases with interview
Typical Timeline: 2y 6m average prosecution; 13 applications currently pending
Career History: 1052 total applications across all art units

Statute-Specific Performance

§101: 23.4% (-16.6% vs TC avg)
§103: 39.2% (-0.8% vs TC avg)
§102: 7.9% (-32.1% vs TC avg)
§112: 16.3% (-23.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 1039 resolved cases.

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: “Executing Processor Core Usage Monitor Tool During Software Installation”.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 10 and 17 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Tkaczyk-Walczak et al., US 2019/0324878. The applied reference has a common assignee with the instant application. Based upon the earlier effectively filed date of the reference, it constitutes prior art under 35 U.S.C. 102(a)(2). This rejection under 35 U.S.C. 102(a)(2) might be overcome by: (1) a showing under 37 CFR 1.130(a) that the subject matter disclosed in the reference was obtained directly or indirectly from the inventor or a joint inventor of this application and is thus not prior art in accordance with 35 U.S.C. 102(b)(2)(A); (2) a showing under 37 CFR 1.130(b) of a prior public disclosure under 35 U.S.C. 102(b)(2)(B) if the same invention is not being claimed; or (3) a statement pursuant to 35 U.S.C.
102(b)(2)(C) establishing that, not later than the effective filing date of the claimed invention, the subject matter disclosed in the reference and the claimed invention were either owned by the same person or subject to an obligation of assignment to the same person or subject to a joint research agreement.

Regarding claims 1, 10 and 17, Tkaczyk-Walczak et al. teaches executing, at least in part, a software installation on a processor, the processor having multiple cores of different performance type [see fig 1, 0003] according to some embodiments, a method for calculation of at least one software usage metric is disclosed, the software usage metric describing a plurality of software products installed in a computing infrastructure including a plurality of computing machines organized into a plurality of computing groups. The method includes installing a first set of software products in each computing machine in a first computing group of a plurality of computing groups, scanning one computing machine in the first computing group to discover that the first set of software products are installed thereon, and calculating an overall usage metric for the first computing group based on a number of computing machines belonging to the first computing group and the discovered first set of software products]; during the executing of the software installation on the processor, executing a usage monitor tool, wherein executing the usage monitor tool comprises monitoring, during the executing of the software installation, one or more processes of the software installation executing on the processor [see fig 3, 0034] FIG. 3 is a flowchart of method 300 for calculation of at least one software usage metric in a computing infrastructure such as computing environment 10 (shown in FIG. 1A).
The software usage metric being measured by method 300 is an overall usage metric that describes the total number of instances of a particular set of used software products in the computing infrastructure. Thereby, the overall usage metric accounts for each computing machine upon which is installed a software product of interest (e.g., one software product from a set of software products) across all the products of interest]; scanning, via a processor query, the processor at a defined frequency for core-related data of one or more particular cores of the multiple cores, the one or more particular cores executing the one or more processes of the software installation [0004] according to some embodiments, a computer program product for calculating software usage is disclosed, the computer program product including a computer-readable storage medium having computer program instructions embodied therewith, the computer program instructions configured, when executed by at least one computing machine, to cause the at least one computing machine to perform a method. The method includes installing a first set of software products in each computing machine in a first computing group of a plurality of computing groups in a computing infrastructure, scanning one computing machine in the first computing group to discover that the first set of software products are installed thereon, and calculating an overall usage metric for the first computing group based on a number of computing machines belonging to the first computing group and the discovered first set of software products]; storing, based on the scanning, the core-related data of the one or more particular cores executing the one or more processes of the software installation [0017] In some embodiments, the prime machine 11 may be configured to manage each of the computing machines 12A-N.
For example, the prime machine 11 may store and/or utilize multiple sets of software products and may be configured to deploy each set of software products to the respective group 13 of computing machines 12. Therefore, prime machine 11 may comprise a map which relates each set of software products with the respective groups 13 that the software products should be installed on. In some other embodiments, each group of computing machines 13 may be managed by a different prime machine 11. In such embodiments, environment 10 may comprise two prime machines 11 such that group 13A would be managed by a first prime machine 11, and group 13B would be managed by a second prime machine 11].

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-3, 5-9, 11-12, 14-16 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Tkaczyk-Walczak et al., US 2019/0324878, in view of Moore et al., USPN 10,554,516.

Regarding claims 2, 11 and 18, Tkaczyk-Walczak et al. teaches the multiple cores of different performance type comprise multiple cores with one or more different core characteristics, the one or more different core characteristics being selected from the group consisting of core processing [0016… In another example, computing machines 12 may be grouped by types of processors they have or by other computing properties.
For example, all of computing machines 12A-K have the same type of processor in group 13A, and all of computing machines 12L-N have the same type of processor in group 13B (albeit a different type of processor than in group 13A). Furthermore, computing machine 12A may comprise multiple processors 14A and 14B, each of which may comprise multiple cores 15A-D and 15E-H, respectively. Each core 15 may be associated with a corresponding number of PVUs. Therefore, the method as described herein with reference to computing machines 12 may be applied for processors 14 and/or cores 15 (e.g., groups of processors and a reference processor to control the groups may be determined)] but does not explicitly teach frequency, core type, quantity of core cache memory available, and configuration of core cache memory available; however, Moore et al. teaches (column 3, line 1, the software usage metrics collected by the metrics collection system include a rate or frequency with which features of a software product are executed, a number of devices executing the software product at a deployment, a number of deployments executing versions of the software product, a number of unique users, a number of failed login attempts (e.g., by location, user, or day), a frequency of use of the software product, a frequency of crashes, bug reports, and performance metrics related to a speed or efficiency of actions of the software product. As a means of standardization, the metrics collection system may include a metrics application that executes at client devices, to quantify and format metrics submissions for the metrics collection system. The usage metrics of the metrics submissions may be based on a “Uniform Metrics Identifier” (UMI) that quantifies, based on what the software product is, what the actual metric collected is, and what the point or duration scale of the metric is, and a value of the metric).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate frequency and memory characteristics. The modification would have been obvious because one of ordinary skill in the art would have been motivated to combine the teachings to improve performance, to standardize the collection of software usage metrics, and to generate performance reports.

Regarding claims 3, 12 and 19, Tkaczyk-Walczak et al. teaches executing the usage monitor tool further comprises generating a usage metric value for a defined period of time, the generating including combining each core-related data, and wherein the usage metric value is indicative for the period of time of usage of the one or more particular cores of the multiple cores of different performance type in executing the one or more processes of the software installation [0054] in order to further improve method 300, a schedule may be defined which sets a predetermined time period between performances of method 300. The schedule indicates that reference machine 12′ of the reference group 13′ is to be scanned and further indicates another selected machine or computing machine 12 (such as a newly-added computing machine 12″) from group 13′ to be scanned as well. This allows for the skipping of scans every time a new computing machine is added, which can save repetitive scanning when a significant number of computing machines 12 are added to group 13′ in a short period of time. For example, a full scan may be performed on new computing machine 12″.
The results of the full scan of new computing machine 12″ and the reference machine 12′ can be compared and may result in a machine delta (e.g., machine delta means only files that are not included in the reference machine 12′, the rest may be excluded from scanning). The differences may be put on a pending queue for reference machine 12′ so that the machine delta software products can be added to the list of files to exclude from scanning and/or add to reference machine 12]. The feature of providing usage metric for time... would be obvious for the reasons set forth in the rejection of claim 1.

Regarding claims 5 and 14, Moore et al. teaches executing the usage monitor tool further comprises transmitting the usage metric value for the defined period of time to a computing machine of the computing environment, wherein multiple usage metric values for the defined period of time are aggregated by the computing machine across multiple processors of the computing environment (column 7, line 11, as an example of the foregoing operation, the metrics application 114 may cause display of a graphical user interface configured to receive and transmit metrics submissions at a client device 110. The user 106 of the client device 110 may submit a metrics submission (e.g., a UMI) to the metrics collection system 150 through the interface. The software usage metrics are then collected automatically by the metrics application 114, and delivered to the metrics collection system 150 through the network 104. For example, the metrics application 114 may monitor various metrics features of a software product (or multiple software products) executing on the client device 110 (or at the deployment 130). The metrics application 114 may then deliver the software usage metrics collected to the collection module 210 as a metrics submission (e.g., UMI). The feature of providing usage metric for transmitting... would be obvious for the reasons set forth in the rejection of claim 1.
Regarding claim 6, Moore et al. teaches generating the usage metric value for the defined period of time (p) includes determining an aggregated usage metric value (v) as follows:

v(p, f, m, r) = ( Σ_{i=1}^{p·f} r[m(i)] ) / (p·f)

where: p – period for which the metric value should be aggregated (seconds); f – frequency of performing the scans (1/seconds) (column 3, line 19, the UMI may comprise three types of information: a group (e.g., the software product the metric is related to); a metric (e.g., what is being measured); and a duration (e.g., a timeframe over which the measurement was made, or an indication if the measurement is just a point value). For example, based on the UMI, the usage metrics may be formatted as a concatenation of strings associated with the above components, separated by “:” in the following form: <Group>:<Metric>:<Duration>); m – table containing measurement results, i.e. core types recognized in a given scan (core type) (column 9, line 5, software usage metrics data in the form of metrics submissions flow from the deployments 610, through the network 104, and into the metrics collection system 150 based on the methods 300, 400, and 500 discussed in FIGS. 3-5. The metrics collection system 150 receives the metrics submissions, and stores the software usage metrics data within the database 126); r – table containing rates that should be applied for a given core type (metric units per core type) (column 3, line 1, the software usage metrics collected by the metrics collection system include a rate or frequency with which features of a software product are executed, a number of devices executing the software product at a deployment, a number of deployments executing versions of the software product, a number of unique users, a number of failed login attempts (e.g., by location, user, or day), a frequency of use of the software product, a frequency of crashes, bug reports, and performance metrics related to a speed or efficiency of actions of the software product.
As a means of standardization, the metrics collection system may include a metrics application that executes at client devices, to quantify and format metrics submissions for the metrics collection system. The usage metrics of the metrics submissions may be based on a “Uniform Metrics Identifier” (UMI) that quantifies, based on what the software product is, what the actual metric collected is, and what the point or duration scale of the metric is, and a value of the metric); v – aggregated value for period p (metric units) (column 7, line 44, at operation 340, the scoring module 230 calculates a metrics score of the metrics category (or categories). The metrics score is based on the software usage metrics values collected by the collection module 210). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate metric values to calculate values and errors. The modification would have been obvious because one of ordinary skill in the art would have been motivated to combine the teachings to improve performance and to standardize and optimize processes while providing consistency.

Regarding claims 7 and 15, Tkaczyk-Walczak et al. teaches executing the usage monitor tool further comprises determining that the one or more processes of the software installation are executing on the processor [see fig 3, 0034] FIG. 3 is a flowchart of method 300 for calculation of at least one software usage metric in a computing infrastructure such as computing environment 10 (shown in FIG. 1A). The software usage metric being measured by method 300 is an overall usage metric that describes the total number of instances of a particular set of used software products in the computing infrastructure.
Thereby, the overall usage metric accounts for each computing machine upon which is installed a software product of interest (e.g., one software product from a set of software products) across all the products of interest]. The feature of providing executing... would be obvious for the reasons set forth in the rejection of claim 1.

Regarding claims 8 and 16, Tkaczyk-Walczak et al. teaches the scanning comprises scanning the processor to identify the one or more particular cores of the processor executing the one or more processes of the software installation [0004] according to some embodiments, a computer program product for calculating software usage is disclosed, the computer program product including a computer-readable storage medium having computer program instructions embodied therewith, the computer program instructions configured, when executed by at least one computing machine, to cause the at least one computing machine to perform a method. The method includes installing a first set of software products in each computing machine in a first computing group of a plurality of computing groups in a computing infrastructure, scanning one computing machine in the first computing group to discover that the first set of software products are installed thereon, and calculating an overall usage metric for the first computing group based on a number of computing machines belonging to the first computing group and the discovered first set of software products]. The feature of providing scanning ... would be obvious for the reasons set forth in the rejection of claim 1.

Regarding claim 9, Tkaczyk-Walczak et al. teaches the core-related data includes core type data for each of the one or more particular cores executing the one or more processes of the software installation [0016… In another example, computing machines 12 may be grouped by types of processors they have or by other computing properties.
For example, all of computing machines 12A-K have the same type of processor in group 13A, and all of computing machines 12L-N have the same type of processor in group 13B (albeit a different type of processor than in group 13A). Furthermore, computing machine 12A may comprise multiple processors 14A and 14B, each of which may comprise multiple cores 15A-D and 15E-H, respectively. Each core 15 may be associated with a corresponding number of PVUs. Therefore, the method as described herein with reference to computing machines 12 may be applied for processors 14 and/or cores 15 (e.g., groups of processors and a reference processor to control the groups may be determined)]. The feature of providing core type ... would be obvious for the reasons set forth in the rejection of claim 1.

Allowable Subject Matter

Claims 4, 13 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Relevant Prior Art

US 11645071 B1, Liu et al., Intelligent Installation For Client Systems
US 7707573 B1, Marmaros et al., Systems And Methods For Providing And Installing Software
US 12333290 B2, Fortin et al., Automatic Upgrade Of On-premise Software

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Anil Khatri, whose telephone number is (571) 272-3725. The examiner can normally be reached M-F 8:30-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wei Zhen, can be reached at 571-272-3708.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANIL KHATRI/
Primary Examiner, Art Unit 2191
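The claim-6 aggregation quoted in the office action can be read as averaging the per-core-type rate over every scan taken in the period: p·f scans occur in a period of p seconds at f scans per second, each scan i records a core type m(i), and the rate table r converts that core type into metric units. The sketch below is an illustrative reading under that assumption only (the rendered formula in the record is partly garbled); the function name, the example core types, and the rate values are all hypothetical.

```python
def aggregate_usage_metric(p, f, m, r):
    """Illustrative reading of the claim-6 aggregation v(p, f, m, r).

    p -- period over which to aggregate (seconds)
    f -- scan frequency (scans per second)
    m -- measurement results: m[i] is the core type recognized in scan i
    r -- rate table: r[core_type] is the rate applied for that core type
    """
    n_scans = int(p * f)                           # number of scans in the period
    total = sum(r[m[i]] for i in range(n_scans))   # rate contributed by each scan
    return total / n_scans                         # normalize by the scan count p*f

# Hypothetical data: a 10-second period scanned once per second, with the
# monitored processes alternating between performance ("P") and efficiency
# ("E") cores that carry different rates.
rates = {"P": 1.0, "E": 0.5}
scans = ["P", "E"] * 5
print(aggregate_usage_metric(10, 1, scans, rates))  # → 0.75
```

Under this reading, v is simply the mean rate observed across scans, so a process pinned entirely to one core type yields that type's rate exactly.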

Prosecution Timeline

Mar 27, 2024
Application Filed
Mar 20, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602211: PERMISSIONS AND NOTIFICATIONS FOR CONSTRUCT-MODIFICATION TAGS (2y 5m to grant; granted Apr 14, 2026)
Patent 12596544: DEPLOYMENT OF UPDATES AT MULTIPLE SITES (2y 5m to grant; granted Apr 07, 2026)
Patent 12596538: TRUST-BASED MODEL FOR DEPLOYING ISSUE IDENTIFICATION AND REMEDIATION CODE (2y 5m to grant; granted Apr 07, 2026)
Patent 12596631: DIFFING PRIOR EXECUTIONS OF AN EXECUTABLE PROGRAM (2y 5m to grant; granted Apr 07, 2026)
Patent 12591413: COMPUTER PROGRAM SPECIFICATION BUILDER (2y 5m to grant; granted Mar 31, 2026)
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 92%
With Interview: 99% (+29.3%)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 1039 resolved cases by this examiner. Grant probability derived from career allow rate.
