Prosecution Insights
Last updated: April 19, 2026
Application No. 17/807,205

GENERATING, AGGREGATING, AND QUERYING VIRTUALIZATION SERVICE EXECUTION METRICS USING IN-MEMORY PROCESSING

Status: Non-Final OA (§103)
Filed: Jun 16, 2022
Examiner: VINCENT, ROSS MICHAEL
Art Unit: 2196
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Microsoft Technology Licensing, LLC
OA Round: 3 (Non-Final)

Grant Probability: 54% (Moderate)
Expected OA Rounds: 3-4
Estimated Time to Grant: 3y 5m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 54% (12 granted / 22 resolved; -0.5% vs TC avg)
Interview Lift: +35.9% for resolved cases with interview (strong)
Avg Prosecution: 3y 5m (typical timeline); 32 applications currently pending
Total Applications: 54 across all art units (career history)
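The headline figures above can be reproduced from the raw counts. A minimal sketch follows; the dashboard's exact rounding rules and lift methodology are assumptions, not stated anywhere on this page:

```python
# Sketch: deriving the headline examiner metrics from the counts above.
# Rounding and lift conventions are assumed, not confirmed by the dashboard.

granted, resolved = 12, 22
career_allow_rate = granted / resolved      # 12/22 ≈ 0.5455, shown as 54%

# The "+35.9% interview lift" reads as the gap between the with-interview
# grant rate (90%) and a without-interview baseline, implying:
with_interview = 0.90
implied_without_interview = with_interview - 0.359   # ≈ 0.541
```

The implied 54.1% baseline is close to, but not identical to, the 54.5% career rate, which suggests the lift is computed on a without-interview subset of resolved cases rather than on the full career rate.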

Statute-Specific Performance

§101: 22.7% (-17.3% vs TC avg)
§103: 57.4% (+17.4% vs TC avg)
§102: 8.2% (-31.8% vs TC avg)
§112: 11.4% (-28.6% vs TC avg)

Deltas are measured against a Tech Center average estimate. Based on career data from 22 resolved cases.
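A quick consistency check on the per-statute figures: subtracting each stated delta from the examiner's rate recovers the same implied Tech Center baseline in all four cases. A sketch (the 40% figure is inferred from the numbers above, not stated on this page):

```python
# Sketch: each statute's allow rate minus its stated delta vs the TC
# average should recover the TC average itself. All four statutes
# imply the same ~40% Tech Center baseline.
examiner_rate = {"101": 22.7, "103": 57.4, "102": 8.2, "112": 11.4}
delta_vs_tc   = {"101": -17.3, "103": 17.4, "102": -31.8, "112": -28.6}

implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
# every entry comes out to 40.0
```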

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claims 1, 10, and 15 are currently amended. No new claims have been added. No claims have been canceled. Claims 1-20 are currently pending for examination.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/19/2025 has been entered.

Response to Arguments

Applicant's arguments, pgs. 7-8, with respect to the amended claims 1, 10, and 15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The new grounds of rejection under 35 USC 103 rely upon Batz (US 20160344803 A1) to teach the amended limitations.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 4, 8, 9, 10, 12, 13, 15, 16, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ganguly (US 20210271506 A1) in view of Coleman (US 9596266 B1) in further view of Batz (US 20160344803 A1).

As per claim 1, Ganguly discloses:

A computer system comprising: a processor; and memory configured to provide computer program instructions to the processor, the computer program instructions including an execution metrics tool ("As depicted, the device 1800 includes a bus 1812, which provides communications between computer processor(s) 1814, memory 1816, persistent storage 1818, communications unit 1820, and input/output (I/O) interface(s) 1822.", 0119; "As noted above, metrics may be collected by metric collecting and reporting agents 1030a-j. There is one metric collecting and reporting agent 1030a-j per node in VIM POD 1020.", 0080; Examiner Note: the metric collecting and reporting agents together equate to an execution metrics tool)

an execution metrics tool configured to: receive, by a virtualization service provider from a virtualization service client, a command configuring a computer system emulator that executes the virtualization service client ("Returning to FIG. 3, during operation, the centralized management node 305 may be used for deploying OpenStack (e.g., facilitating distribution of images and other deployment procedures) or another form of cloud infrastructure, and after deployment, management node 305 may monitor the deployments using various tools.", 0034; "During operation, such processes may begin with a Dynamic Host Configuration Protocol (DHCP) request from a compute server under deployment.", 0034; "With reference now made to FIGS. 6A and 6B, depicted therein is a call flow diagram 600 illustrating a message exchange for installing an operating system (OS) on a node via a centralized management node 305. More specifically, call flow diagram 600 illustrates the installation of an OS on a DHCP client which is embodied as a compute server or POD arranged within a regional data center, such as a compute node arranged within POD 310a of regional data center 302a of FIG. 3.", 0041; Examiner Note: the management node containing the metric collecting and reporting agents receiving a DHCP request equates to the execution metrics tool receiving a request to configure an emulator. The installation of an OS on a client of the service by the central management node equates to a virtualization service provider executing a virtualization service client)

wherein the virtualization service provider is executing in a virtualization stack ("According to example embodiments, centralized cloud infrastructure provisioning and management are provided. According to such example embodiments, a first virtual machine executing on a centralized management node provides a first image file to a first computing entity arranged within a first point of delivery, wherein the first image file comprises at least one of a first boot configuration file or a first ramdisk file. A second virtual machine executing on the centralized management node provides a second image file to a second computing entity arranged within a second point of delivery different from the first point of delivery.", 0024; see Fig. 11: the central management node (1115) consists of VMs 1140a-1140f, comprising the virtualization stack)

execute the command by causing a virtualization service provider running in a virtualization stack to execute a plurality of virtualization service operations on behalf of a virtualization service client of the computer system emulator ("In order to provide management features for each of PODs 310a-f, management node 305 may be configured to execute management virtual machines (VMs) 325a-f. According to this example embodiment, each of virtual machines 325a-f performs functions corresponding to management nodes 105a-f of FIG. 1 for a respective one of PODs 310a-f.", 0038; Examiner Note: functions corresponding to management nodes equate to virtualization service operations)

generate an aggregated execution metric by aggregating execution metrics for the plurality of virtualization service operations over a current interval in the memory ("VIM-MON architecture 1000 employs metric aggregation into a time-series database (TSDB) 1012 installed on the management node 1010.", 0079; "These metrics are then read ('scraped') by the event monitoring server 1011 running on the management node 1010 at regular intervals (e.g., a default scraping interval of 15 seconds).", 0080; Examiner Note: the aggregated metrics equate to aggregated execution metrics)

in response to expiration of the current interval, push the aggregated execution metrics to a structure in the memory storing historical aggregated execution metrics ("These metrics are then read ('scraped') by the event monitoring server 1011 running on the management node 1010 at regular intervals (e.g., a default scraping interval of 15 seconds). Metrics that are scraped are received on the management node 1010 and are then stored in the local TSDB 1012.", 0080; Examiner Note: the TSDB (time-series database) equates to a memory which stores historical aggregated execution metrics.)

Ganguly discloses the above limitations of claim 1, but does not explicitly disclose the virtualization service provider receiving a command configuring a computer system emulator.
However, Coleman discloses:

an execution metrics tool configured to: receive, by a virtualization service provider from a virtualization service client, a command configuring a computer system emulator that executes the virtualization service client ("receive a first cyber threat indicator of a type, the first cyber threat indicator including identifying information of a target host and information relating to malware behavior; instantiate, via a virtual client emulator controller implemented via the processor and in response to receiving the first cyber threat indicator, a first virtual client emulator selected based on the type of the first cyber threat indicator", claim 1; "As shown in FIG. 2, a TIVM server 200 can include a threat indicator module 201, a virtual client emulator controller module 202, a TIVM reporting module 203, and/or the like", col. 4, lines 44-46; Examiner Note: receiving, at the virtual client emulator of the TIVM server (corresponding to a virtualization service provider), the identifying information of the target host (corresponding to a virtualization service client) with information relating to malware behavior equates to receiving a command configuring a computer system emulator)

The system of Ganguly in view of Coleman would be capable of receiving a command configuring a computer system emulator at a virtualization service provider from the virtualization service client the provider is executing. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the systems of Ganguly and Coleman in order to provide a virtualization service provider network capable of quickly and accurately identifying malware threats within hosts, as well as eliminating false alarms of cyber threats (Coleman, col. 2, lines 26-28).
Ganguly in view of Coleman fully discloses the above limitations of claim 1, but does not disclose groups of virtualization service operations which are assigned a unique identifier and for which metrics are aggregated. However, Batz discloses:

determine the plurality of virtualization service operations are organized into one or more groups, wherein each of the one or more groups is assigned a unique identifier that maps an execution call corresponding to the command to the plurality of virtualization service operations ("FIG. 3 includes a first virtual service function group, which can be identified using an integer group ID=1 and a second virtual service function group, which can be identified using an integer group ID=2.", 0089; "A method is provided in one example embodiment and may include receiving a first Internet protocol (IP) flow for an IP session for a subscriber; selecting a first service function group from a plurality of service function groups to perform one or more services for the IP session for the subscriber, wherein each of the plurality of service function groups comprises a plurality of service function chain types and wherein each service function chain type comprises an ordered combination of one or more service functions", 0014; Examiner Note: virtual service functions equate to virtualization service operations, and the received IP flow of a subscriber equates to an execution call corresponding to a command)

wherein the unique identifier indicates that execution metrics of the plurality of virtualization service operations are to be aggregated based on the unique identifier ("In at least one embodiment, an example organization for the service function group load balancing table can include the service function group ID for each service function group (e.g., groups 1-3) and a load balancing metric for each respective group.", 0080; Examiner Note: a load balancing metric equates to an execution metric.)
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the system of Ganguly in view of Coleman with that of Batz, in order to provide the system with the ability to support virtual groups of service functions where a service function can be a member of multiple virtual groups, as well as the ability to load-balance without requiring data-plane load balancer components and any associated redundancy and state replication (Batz, 0075).

As per claim 3, Ganguly in view of Coleman in further view of Batz fully discloses the limitations of claim 1. Furthermore, Ganguly discloses: the execution metrics tool is configured to store and update the execution metrics in a portion of the memory that is allocated to processes of the virtualization stack ("Metrics that are scraped are received on the management node 1010 and are then stored in the local TSDB 1012", 0080; Examiner Note: the management node handles processes of the virtualization stack)

As per claim 4, Ganguly in view of Coleman in further view of Batz fully discloses the limitations of claim 1. Furthermore, Ganguly discloses: the memory comprises random access memory, flash memory, or virtual memory, and the execution metrics tool is configured to store and update the execution metrics in a portion of the random access memory, the flash memory, or the virtual memory ("In the depicted embodiment, memory 1816 includes RAM 1824 and cache memory 1826.", 0120; Examiner Note: memory 1816 is used to store execution metrics)

As per claim 8, Ganguly in view of Coleman in further view of Batz fully discloses the limitations of claim 1. Furthermore, Ganguly discloses: the command instructs creation or deletion of the computer system emulator ("Installers of cloud infrastructure devices, such as OpenStack® installers, are used to deploy cloud infrastructure services on baremetal servers using automation tools. Such installers typically utilize one or a group of servers as 'installer database servers' and these nodes are typically referred to as 'management nodes' in a point-of-delivery (POD), such as an OpenStack POD", 0026; "In various embodiments, the centralized management node 305 may be deployed either as a virtual machine (VM) or microservice based on availability of cloud resources.", 0034; Examiner Note: installation and deployment of a VM equates to the creation of a computer system emulator.)

As per claim 9, Ganguly in view of Coleman in further view of Batz fully discloses the limitations of claim 1. Furthermore, Ganguly discloses: the plurality of virtualization service operations configures hardware or software on behalf of the virtualization service client ("the OS image may be broken into two images, including first and second stage image files, 'stage1.img' and 'stage2.img', respectively. The content of the 'stage-1.img' file may be selected based on server hardware and firmware configurations. For example, the 'stage1.img' image file may contain one or more of the following non-exhaustive list of files: a bootloader file, a boot configuration file, a flinuz file, and a random access memory (RAM) or ramdisk file. The 'stage2.img' image file may include the actual OS file (e.g., Windows, Linux, etc.) to be installed on the server.", 0041; Examiner Note: installing a boot configuration file equates to configuring hardware on behalf of a virtualization service client.)

As per claim 10, it is a computer storage medium claim (Ganguly discloses: "Memory 1816 and persistent storage 1818 are computer readable storage media. In the depicted embodiment, memory 1816 includes RAM 1824 and cache memory 1826. In general, memory 1816 can include any suitable volatile or non-volatile computer readable storage media. Instructions for the Centralized Management, Provisioning and Monitoring Software 1825 may be stored in memory 1816 or persistent storage 1818 for execution by processor(s) 1814.", 0120) with substantially the same limitations as claim 1, and as such, it is rejected for substantially the same reasons.

As per claim 12, it is a computer storage medium claim with substantially the same limitations as claim 3, and as such, it is rejected for substantially the same reasons.

As per claim 13, it is a computer storage medium claim with substantially the same limitations as claim 4, and as such, it is rejected for substantially the same reasons.

As per claim 15, it is a method claim with substantially the same limitations as claim 1, and as such, it is rejected for substantially the same reasons.

As per claim 16, it is a method claim with substantially the same limitations as claim 3, and as such, it is rejected for substantially the same reasons.

As per claim 19, it is a method claim with substantially the same limitations as claim 8, and as such, it is rejected for substantially the same reasons.

As per claim 20, it is a method claim with substantially the same limitations as claim 9, and as such, it is rejected for substantially the same reasons.

Claims 2 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Ganguly (US 20210271506 A1) in view of Coleman (US 9596266 B1) in further view of Batz (US 20160344803 A1) in further view of Pijewski (US 20140280970 A1).

As per claim 2, Ganguly in view of Coleman in further view of Batz fully discloses the limitations of claim 1, but does not disclose the storing of execution metrics in a location allocated to OS processes.
However, Pijewski discloses: the execution metrics tool is configured to store and update the execution metrics in a portion of the memory that is allocated to operating system processes ("the usage metric for a tenant is stored in kernel memory of the filesystem", claim 20; Examiner Note: usage metrics equate to execution metrics. The location of the kernel equates to the portion of memory allocated to operating system processes)

It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Ganguly in view of Coleman in further view of Batz with the storing of metrics in the kernel of Pijewski, in order to provide the system with direct access to the information stored in the aggregated metric files without requiring any layers of abstraction, thereby allowing the system to more easily diagnose performance issues.

As per claim 11, it is a computer storage medium claim with substantially the same limitations as claim 2, and as such, it is rejected for substantially the same reasons.

Claims 5 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Ganguly (US 20210271506 A1) in view of Coleman (US 9596266 B1) in further view of Batz (US 20160344803 A1) in further view of Alshawabkeh (US 11375012 B2).

As per claim 5, Ganguly in view of Coleman in further view of Batz fully discloses the limitations of claim 1, but does not disclose aggregating separate execution metrics for each operation of a virtualization service.
However, Alshawabkeh discloses: the execution metrics tool is configured to aggregate the execution metrics separately for each type of virtualization service operation of the plurality of virtualization service operations ("defining a metric related to a feature of interest, the metric specifying a type of operation by the OS on global memory that is to be identified and collected by each of the autonomous infrastructure modules; pushing the defined metric to each autonomous infrastructure module; monitoring, by each respective autonomous infrastructure module, respective OS operations on the respective global memory on each respective storage system; processing the monitored respective OS operations,", claim 1; Examiner Note: a type of operation of the OS equates to a type of virtualization service operation)

The system of Ganguly in view of Coleman in further view of Batz in further view of Alshawabkeh would be able to aggregate execution metrics separately for the various operations of the virtualization service. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Ganguly in view of Coleman in further view of Batz with those of Alshawabkeh in order to provide the system with the ability to quickly and easily diagnose problems associated with various operations of a virtualization service, or at least determine the operation associated with a performance issue.

As per claim 18, it is a method claim with substantially the same limitations as claim 5, and as such, it is rejected for substantially the same reasons.

Claims 6 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Ganguly (US 20210271506 A1) in view of Coleman (US 9596266 B1) in further view of Batz (US 20160344803 A1) in further view of Chraim (US 12093709 B1).
As per claim 6, Ganguly in view of Coleman in further view of Batz fully discloses the limitations of claim 1, but does not disclose aggregating execution metrics for a plurality of types of virtualization service providers operating in a plurality of layers of the virtualization stack. However, Chraim discloses: the execution metrics tool is configured to aggregate the execution metrics for a plurality of types of virtualization service providers operating in a plurality of layers of the virtualization stack ("a provider network may include multiple hosts each of which may be accessible through a network, and the provider network may include one more metric monitors which may determine performance metrics for the network of each host", col. 4, lines 7-11; Examiner Note: a host equates to a virtualization service provider, and performance metrics equate to execution metrics)

The system of Ganguly in view of Coleman in further view of Batz in further view of Chraim would be capable of aggregating performance, or execution, metrics for a plurality of virtualization service providers, or hosts, in a virtualization environment. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Ganguly in view of Coleman in further view of Batz with those of Chraim to provide a virtualization environment with the ability to quickly and easily associate an identified performance issue with a specific virtualization service provider through analyzing service-specific execution metric logs.

As per claim 17, it is a method claim with substantially the same limitations as claim 6, and as such, it is rejected for substantially the same reasons.

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Ganguly (US 20210271506 A1) in view of Coleman (US 9596266 B1) in further view of Batz (US 20160344803 A1) in further view of Jeyakumar (US 20190132190 A1).
As per claim 7, Ganguly in view of Coleman in further view of Batz fully discloses the limitations of claim 1, but does not disclose limiting the age of historical aggregated execution metrics. However, Jeyakumar discloses: the execution metrics tool is configured to limit an age of the historical aggregated execution metrics stored in the memory by deleting expired data of the historical aggregated execution metrics from the memory ("In another example, controller 102 deletes old/individual metrics after a certain amount of time has passed from generation thereof, where the certain amount of time is a configurable parameter determined based on experiments and/or empirical studies.", 0089)

It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Ganguly in view of Coleman in further view of Batz with those of Jeyakumar to provide a virtualization service which does not waste memory resources by storing execution metric logs which exceed an age limit, i.e., logs which would not be useful in the diagnosis of current performance issues.

As per claim 14, it is a computer storage medium claim with substantially the same limitations as claim 7, and as such, it is rejected for substantially the same reasons.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Nowatzki (US 20150121365 A1) discloses a method for tracing emulated execution orders of non-native instructions based on natively executing code. It comprises maintaining a jump history and, for each of the non-native jump instructions, accessing non-native program code to determine one or more non-native instructions executed between the non-native jump instruction and the last executed non-native jump instruction, and finally aggregating the instructions into a trace.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROSS MICHAEL VINCENT whose telephone number is (703) 756-1408. The examiner can normally be reached Mon-Fri 8:30 AM-5:30 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, April Blair, can be reached on (571) 270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/R.M.V./
Examiner, Art Unit 2196

/APRIL Y BLAIR/
Supervisory Patent Examiner, Art Unit 2196
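For readers skimming the rejection, the claim-1 mechanism the references are mapped against, interval-based aggregation of execution metrics with a push to a historical store on interval expiry, can be sketched as follows. This is an illustrative sketch only; every name below is invented for illustration and nothing here comes from the application or the cited art:

```python
# Illustrative sketch of the claim-1 mechanism at issue: aggregate
# per-operation execution metrics in memory over a current interval,
# then push the aggregate into a historical structure when the
# interval expires. All names are hypothetical.
from collections import defaultdict

class ExecutionMetricsTool:
    def __init__(self, interval_s=15.0):
        # 15 s mirrors the default scraping interval quoted from Ganguly
        self.interval_s = interval_s
        self.current = defaultdict(list)   # group id -> metrics this interval
        self.history = []                  # historical aggregated metrics

    def record(self, group_id, value):
        # metrics for operations sharing a group identifier aggregate together
        self.current[group_id].append(value)

    def on_interval_expired(self):
        # aggregate (here: average) each group, push to the historical
        # structure, and start a fresh interval
        aggregated = {gid: sum(vals) / len(vals)
                      for gid, vals in self.current.items()}
        self.history.append(aggregated)
        self.current.clear()
        return aggregated
```

For example, recording values 2.0 and 4.0 for a hypothetical group "vmops" and then expiring the interval yields an aggregate of {"vmops": 3.0} and one entry in the historical structure.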

Prosecution Timeline

Jun 16, 2022: Application Filed
Dec 16, 2024: Non-Final Rejection (§103)
Mar 20, 2025: Interview Requested
Apr 02, 2025: Response Filed
Apr 02, 2025: Examiner Interview Summary
Sep 16, 2025: Final Rejection (§103)
Nov 10, 2025: Interview Requested
Nov 19, 2025: Response after Non-Final Action
Nov 20, 2025: Examiner Interview Summary
Dec 17, 2025: Request for Continued Examination
Jan 03, 2026: Response after Non-Final Action
Jan 08, 2026: Non-Final Rejection (§103)
Apr 13, 2026: Interview Requested
Apr 15, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530219: TIME-BOUND LIVE MIGRATION WITH MINIMAL STOP-AND-COPY
Granted Jan 20, 2026 (2y 5m to grant)

Patent 12511158: TASK ALLOCATION METHOD, APPARATUS, ELECTRONIC DEVICE AND COMPUTER-READABLE STORAGE MEDIUM
Granted Dec 30, 2025 (2y 5m to grant)

Patent 12493493: METHOD AND SYSTEM FOR ALLOCATING GRAPHICS PROCESSING UNIT PARTITIONS FOR A COMPUTER VISION ENVIRONMENT
Granted Dec 09, 2025 (2y 5m to grant)

Patent 12481529: CONTROLLER FOR COMPUTING ENVIRONMENT FRAMEWORKS
Granted Nov 25, 2025 (2y 5m to grant)

Patent 12430170: QUANTUM COMPUTING SERVICE WITH QUALITY OF SERVICE (QoS) ENFORCEMENT VIA OUT-OF-BAND PRIORITIZATION OF QUANTUM TASKS
Granted Sep 30, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 54%
With Interview: 90% (+35.9%)
Median Time to Grant: 3y 5m
PTA Risk: High
Based on 22 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month