Prosecution Insights
Last updated: April 19, 2026
Application No. 17/960,863

LOAD BALANCING WITH SERVICE-TIER AWARENESS

Non-Final OA (§102, §103)
Filed: Oct 06, 2022
Examiner: KIM, DONG U
Art Unit: 2197
Tech Center: 2100 (Computer Architecture & Software)
Assignee: VMware, Inc.
OA Round: 3 (Non-Final)

Grant Probability: 87% (Favorable)
Projected OA Rounds: 3-4
Projected Time to Grant: 2y 10m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (above average; 610 granted / 702 resolved; +31.9% vs. Tech Center average)
Interview Lift: +13.7% (a moderate lift, measured across resolved cases with an interview)
Typical Timeline: 2y 10m average prosecution; 35 applications currently pending
Career History: 737 total applications across all art units

Statute-Specific Performance

§101: 10.4% (-29.6% vs. TC avg)
§102: 10.4% (-29.6% vs. TC avg)
§103: 44.2% (+4.2% vs. TC avg)
§112: 28.0% (-12.0% vs. TC avg)

Comparisons are against a Tech Center average estimate; based on career data from 702 resolved cases.

Office Action

Grounds of rejection: §102 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/15/2025 has been entered. Claims 1-21 are pending and presented for examination.

Claim Objections

Claims 1, 8, and 15 are objected to because of the following informality: claim 1 (and similarly claims 8 and 15) is missing an "and" before the last limitation. Appropriate correction is required.

Response to Arguments

3. Applicant's arguments filed regarding claim 1 (pages 9-10): "Claim 1 has been amended to specify that a single load balancer performs all of the recited operations. The Applicant submits that this limitation was implicit prior to the amendment of claim 1 by virtue of the term 'a load balancer' providing antecedent basis for the load balancer recited throughout claim 1. In any event, additional support for this amendment can be found throughout the application as filed, including in Figs. 1 and 3-6, in which 'Computer System 110' is referred to as 'Load Balancer' and performs all of the operations of the method 310 of Fig. 3 and all of the load balancing operations of Fig. 4."

The examiner points to the instant specification, which explicitly discloses multiple load balancers ("VM 231/232/233/234 or host 210A/210B") (emphasis added) being referenced as a load balancer (singular term). Therefore, based on the disclosure of the instant application, the term "single load balancer" can be interpreted as a plural term (i.e., multiple load balancers). [Instant PGPub paragraph 16]: "Using the example in FIG. 2, 'computer system' 110 implementing a load balancer may be VM 231/232/233/234 or host 210A/210B with hypervisor-implemented load balancer 218A/219A." The argument is therefore not persuasive.

Applicant's arguments filed regarding claim 1 (page 13): "Daoud, ¶ 0037 (emphasis added). Daoud here clearly teaches that the load balancer 300 selects 'the server that is best able to process the selection' out of a single pool 310, not, as the Office seems to interpret, 'a server pool 310 that is best able to process the transaction.' The distinction between the indefinite article when referring to the pool 310 and the definite article when referring to the selected server augurs strongly against the Office's apparent interpretation. Moreover, neither Fig. 3 nor ¶ 0037 teach or suggest that the pool 310 might be one of a plurality of pools, which would be required if the load balancer were selecting a server from among multiple pools."

The examiner points to Daoud, which explicitly discloses multiple server pools. [0048]: "For example, multiple load balancers can be networked to administer a single server pool or multiple server pools." Therefore, based on the response above (regarding the single load balancer) and in view of paragraph 48, which explicitly discloses multiple server pools, the argument is not persuasive.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 8, and 15 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US 20020087694 A1 (hereinafter "Daoud").

With respect to claim 1, Daoud teaches:

A method, comprising: ("FIG. 7 shows a method for routing the transaction 200 to a server based on a requested level of service associated with the transaction 200 generated in step 710, using suitable program code and stored on a number of (i.e., one or more) suitable computer readable storage media. In step 700, the load balancer 300 (or a suitable software/hardware agent) monitors the server pool 320, 500 to determine the service level of each server in the server pool" [0049]);

receiving, by a single load balancer and from a client system, a service request that requires processing by one of multiple server pools that are reachable via the computer system ("FIG. 3 shows the transaction 200 received at a load balancer 300 and directed to a server 311, 312, 313 in a server pool 310 that is best able to process the transaction 200 based on the requested level of service indicated by the service tag 220" [0037]; "It is understood that the load balancing schemes shown in FIG. 3 and FIG. 5 are illustrative of the apparatus and method of the present invention and are not intended to limit the scope of the invention. Other configurations are also contemplated as being within the scope of the invention. For example, multiple load balancers can be networked to administer a single server pool or multiple server pools" [0048]; "FIG. 2 shows a packetized transaction 200. The packetized transaction 200 includes a data packet 210 (i.e., the data to be processed) and a service tag 220" [0025]; "Preferably, the transaction 200 is assigned a service tag 220 at its source (i.e., where the transaction 200 originates)" [0031]);

wherein the multiple server pools are associated with respective multiple service tiers ("The service level being provided by each server can be based on, as illustrative but not limited to, the server meeting the service level objectives of a single user, a user group (e.g., the accounting department), or a transaction type (e.g., email). That is, preferably the load balancer 300 (or suitable software/hardware agent) monitors the service level provided by each server in the server pool to generate the server index. For example, the load balancer 300 can measure or track processing parameters of a server (e.g., total processing time, processor speed for various transactions, etc.) with respect to a single user, a user group, a transaction type, etc. Alternatively, the server index can be based on known capabilities (e.g., processor speed, memory capacity, etc.) and/or predicted service levels of the servers in the server pool (e.g., based on past performance, server specifications, etc.)." [0045]; "When the transaction 200 is received at the load balancer 300, the load balancer 300 reads the requested level of service from the service tag 220. Based on the server index 600 (FIG. 6), the load balancer 300 selects the server (e.g., 512) from the server group (e.g., 510) that is best providing the requested level of service (e.g., 'premium')" [0047]; "For example, multiple load balancers can be networked to administer a single server pool or multiple server pools… A possible hierarchical configuration could comprise a gatekeeping load balancer that directs transactions either to a load balancer monitoring a premium server pool or to a load balancer monitoring a standard server pool, and the individual load balancers can then select a server from within the respective server pool." [0048]);

obtaining, by the single load balancer, identity information identifying a user associated with the service request from the client system ("When the transaction 200 is received at the load balancer 300, the load balancer 300 reads the requested level of service from the service tag 220. Based on the server index 600 (FIG. 6), the load balancer 300 selects the server (e.g., 512) from the server group (e.g., 510) that is best providing the requested level of service (e.g., 'premium')" [0047]; "It is also understood that the service tag 220 can include multiple packets. Similarly, an individual service tag 220 may comprise more than one indicator. These multiple packets, or indicators within a packet, may be combined to indicate the requested level of service. Separate packets (or indicators) may be included, such as, a time-stamp, an origination ID, an application ID, a user ID, a project ID, etc. In such an embodiment, the requested service level can be a combination of some or all of the packets included therein" [0027]; "Preferably, the transaction 200 is assigned a service tag 220 at its source (i.e., where the transaction 200 originates)" [0031]);

forwarding, by the single load balancer, the service request towards a destination server for processing ("Thus, for example, where the service tag 220 indicates that the requested level of service is 'premium', the load balancer 300 directs the transaction 200 to any one of the servers 511, 512, 513 in the premium group 510. The load balancer can use conventional load balancing algorithms (e.g., next available, fastest available, or any other suitable algorithm) to select a specific server 511, 512, 513 within the premium group 510" [0047]);

wherein the destination server is selected from a particular server pool identified as being associated with a particular service tier mapped to the service request based on the identity information ("The requested level of service may also be based on the user identification. For example, users that generally require faster processing speeds (the CAD department or an administrator) may be assigned faster servers than those who require the servers only to back up data. Likewise, users (e.g., an administrator) can be designated as having the highest priority, overriding competing transactions" [0032]; "When the transaction 200 is received at the load balancer 300, the load balancer 300 reads the requested level of service from the service tag 220. Based on the server index 600 (FIG. 6), the load balancer 300 selects the server (e.g., 512) from the server group (e.g., 510) that is best providing the requested level of service (e.g., 'premium')" [0047]; "A possible hierarchical configuration could comprise a gatekeeping load balancer that directs transactions either to a load balancer monitoring a premium server pool or to a load balancer monitoring a standard server pool, and the individual load balancers can then select a server from within the respective server pool." [0048]).

As per claim 8, this is a non-transitory computer-readable storage medium claim corresponding to method claim 1, and it is rejected based on similar rationale. As per claim 15, this is a system claim corresponding to method claim 1, and it is rejected based on similar rationale.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2, 6, 7, 9, 13, 14, 16, 20, and 21 are rejected under 35 U.S.C.
103 as being unpatentable over US 20020087694 A1 (hereinafter "Daoud") in view of US 20070263541 A1 (hereinafter "Cobb").

With respect to claim 2, Daoud teaches the method of claim 1. Daoud does not explicitly teach wherein obtaining the identity information comprises: obtaining the identity information based on (a) source address information specified by the service request or (b) the client system in the form of a source virtualized computing system.

Cobb teaches this limitation: "Further, a user identity can be related to transactions. A user ID may be identified and associated with a session by examining and parsing a login transaction for user identity information, for example. In those cases where the login transaction possesses a session identifier, for example, this session ID may be used to establish a relationship between the user ID and the session ID, which may in turn share a relationship with one or more transactions. Another example of user to transaction binding is through the intermediary of a network address, for example where the IP source address of the packets related to the transaction is used to look up user identity in a table of IP address to user identity relationships" [0106].

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Daoud to include obtaining the identity information based on (a) source address information specified by the service request, using the teachings of Cobb. It would have been obvious to a person having ordinary skill in the art to make this combination, with a reasonable expectation of success, for the purpose of facilitating management of service quality that is agreeable to a business as well as to the service provider by accurately identifying identity information based on the source address. Network services are important to businesses, and placing performance thresholds ensures satisfaction from both parties (Cobb, 0002-0003).

With respect to claim 6, Daoud teaches the method of claim 1. Daoud teaches wherein obtaining the identity information comprises: extracting, from the service request, session information specifying (a) the identity information identifying the user, or (b) both the identity information and the particular service tier assigned to the user: "It is also understood that the service tag 220 can include multiple packets. Similarly, an individual service tag 220 may comprise more than one indicator. These multiple packets, or indicators within a packet, may be combined to indicate the requested level of service. Separate packets (or indicators) may be included, such as, a time-stamp, an origination ID, an application ID, a user ID, a project ID, etc. In such an embodiment, the requested service level can be a combination of some or all of the packets included therein" [0027]; "The requested level of service may also be based on the user identification. For example, users that generally require faster processing speeds (the CAD department or an administrator) may be assigned faster servers than those who require the servers only to back up data. Likewise, users (e.g., an administrator) can be designated as having the highest priority, overriding competing transactions" [0032].

Daoud does not explicitly teach wherein obtaining the identity information comprises: extracting, from the service request, session information specifying (a) the identity information identifying the user.

Cobb teaches this limitation: "User identification (ID) module 260 receives the transaction components from component ID module 250 and identifies a session ID and/or user ID from the received components" [0104]; "A user ID may be identified and associated with a session by examining and parsing a login transaction for user identity information, for example. In those cases where the login transaction possesses a session identifier, for example, this session ID may be used to establish a relationship between the user ID and the session ID" [0106].

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Daoud to include obtaining the identity information based on (a) source address information specified by the service request, using the teachings of Cobb. It would have been obvious to a person having ordinary skill in the art to make this combination, with a reasonable expectation of success, for the purpose of facilitating management of service quality that is agreeable to a business as well as to the service provider by accurately identifying identity information based on the source address. Network services are important to businesses, and placing performance thresholds ensures satisfaction from both parties (Cobb, 0002-0003).
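The mechanism discussed above, as the cited Daoud and Cobb paragraphs describe it (resolve a user identity from the request's source address, map that identity to a service tier, then pick a server from that tier's pool), can be sketched roughly as follows. All names and data here (TIER_POOLS, USER_BY_IP, TIER_BY_USER, the addresses) are illustrative assumptions, not taken from either reference:

```python
# Illustrative sketch only; names and data are invented, not from Daoud or Cobb.

# One server pool per service tier (cf. Daoud's "premium"/"standard" groups).
TIER_POOLS = {
    "premium": ["srv-511", "srv-512", "srv-513"],
    "standard": ["srv-521", "srv-522"],
}

# Cobb-style table mapping a source IP address to a user identity ([0106]).
USER_BY_IP = {"10.0.0.7": "cad-admin", "10.0.0.9": "backup-bot"}

# Identity-to-tier assignment (e.g., derived from per-user service terms).
TIER_BY_USER = {"cad-admin": "premium", "backup-bot": "standard"}

def route(source_ip: str) -> str:
    """Resolve the user from the source address, map the user to a tier,
    then pick a server from that tier's pool. Simple rotation stands in
    for Daoud's 'next available' selection algorithm."""
    user = USER_BY_IP[source_ip]
    tier = TIER_BY_USER[user]
    pool = TIER_POOLS[tier]
    server = pool[0]
    pool.append(pool.pop(0))  # rotate the pool for round-robin selection
    return server
```

The round-robin rotation is only one of the "conventional load balancing algorithms" Daoud mentions; fastest-available or index-based selection would slot into the same structure.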
With respect to claim 7, Daoud teaches the method of claim 1. Daoud teaches wherein the service request is mapped to the particular service tier, comprising: mapping the service request to the particular service tier based on mapping information accessible by the computer system ("The requested level of service may also be based on the user identification. For example, users that generally require faster processing speeds (the CAD department or an administrator) may be assigned faster servers than those who require the servers only to back up data. Likewise, users (e.g., an administrator) can be designated as having the highest priority, overriding competing transactions" [0032]; "It is also understood that the service tag 220 can include multiple packets. Similarly, an individual service tag 220 may comprise more than one indicator. These multiple packets, or indicators within a packet, may be combined to indicate the requested level of service. Separate packets (or indicators) may be included, such as, a time-stamp, an origination ID, an application ID, a user ID, a project ID, etc. In such an embodiment, the requested service level can be a combination of some or all of the packets included therein" [0027]; "Preferably, the transaction 200 is assigned a service tag 220 at its source (i.e., where the transaction 200 originates)" [0031]);

wherein the mapping information associates (a) multiple sets of identity information identifying respective multiple users with (b) multiple service tiers assigned to the respective multiple users based on service level agreement (SLA) information ("The requested level of service may also be based on the user identification…" [0032]; "When the transaction 200 is received at the load balancer 300, the load balancer 300 reads the requested level of service from the service tag 220. Based on the server index 600 (FIG. 6), the load balancer 300 selects the server (e.g., 512) from the server group (e.g., 510) that is best providing the requested level of service (e.g., 'premium')" [0047]; "A possible hierarchical configuration could comprise a gatekeeping load balancer that directs transactions either to a load balancer monitoring a premium server pool or to a load balancer monitoring a standard server pool, and the individual load balancers can then select a server from within the respective server pool." [0048]).

Daoud does not explicitly teach "based on service level agreement (SLA) information."

Cobb teaches wherein the mapping information associates (a) multiple sets of identity information identifying respective multiple users with (b) multiple service tiers assigned to the respective multiple users based on service level agreement (SLA) information: "Under an SLA, the service provider agrees to meet certain quality thresholds for the level of service provided to a particular user. For example, an SLA may indicate that a certain transaction provided by a network service provider to a user must have an average response time of one second or less over a month" [0003].

Examiner's note: An SLA is simply an agreement between two parties for the expected level of service. Agreed-upon response times can be mapped to a particular tier of service.

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Daoud to include "based on service level agreement (SLA) information" using the teachings of Cobb.
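The examiner's note above (agreed-upon response times can be mapped to a particular tier of service) admits a minimal sketch. The thresholds and tier names below are invented for illustration; they appear in neither Daoud nor Cobb:

```python
# Hypothetical SLA-threshold-to-tier mapping; values are illustrative only.
# Cobb [0003] gives the flavor: e.g., an average response time of one
# second or less may be contracted for a given transaction.
SLA_TIERS = [
    (1.0, "premium"),    # agreed average response time <= 1.0 s
    (5.0, "standard"),   # agreed average response time <= 5.0 s
]

def tier_for_sla(max_response_s: float) -> str:
    """Map an agreed response-time bound to the least-capable tier that
    still satisfies it; anything slower falls to best-effort."""
    for threshold, tier in SLA_TIERS:
        if max_response_s <= threshold:
            return tier
    return "best-effort"
```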
It would have been obvious to a person having ordinary skill in the art to make this combination, with a reasonable expectation of success, for the purpose of facilitating management of service quality that is agreeable to a business as well as to the service provider by accurately identifying identity information based on the source address. Network services are important to businesses, and placing performance thresholds ensures satisfaction from both parties (Cobb, 0002-0003).

As per claims 9, 13, and 14: these are non-transitory computer-readable storage medium claims corresponding to method claims 2, 6, and 7, and they are rejected based on similar rationale. As per claims 16, 20, and 21: these are system claims corresponding to method claims 2, 6, and 7, and they are rejected based on similar rationale.

Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over US 20020087694 A1 (hereinafter "Daoud") in view of US 20070263541 A1 (hereinafter "Cobb"), further in view of US 20200349238 A1 (hereinafter "Tyagi").

With respect to claim 3, Daoud and Cobb teach the method of claim 2. Daoud and Cobb do not explicitly teach wherein obtaining the identity information comprises: generating and sending a query that (a) specifies the source address information or (b) identifies the source virtualized computing system; and, based on a response to the query, determining the identity information identifying the user.

Tyagi teaches this limitation:
"Software application 608 can then use the target computing device's network address to identify the target computing device, such as by querying a database (e.g., CMDB 500) to determine whether a computing device is associated with the network address of the target computing device. For instance, the database might return a unique alphanumeric identifier of the target computing device, a location of the target computing device within managed network 300, and/or other information identifying the target computing device." [0144]

"In particular, as an example process, software application 608 can be configured to read a set of multiple records (e.g., across multiple tables) and take, from the set of multiple records, a first type of information shared across the set of records. Using the first type of information as a reference, software application 608 can identify, in two or more records of the set of records, at least one other type of information that is associated with the reference. For example, software application 608 can locate one record with entries that identify, for a particular session between target computing device 614 and server device 602, a user identifier and a hostname" [0177]

Examiner's note: The target computing device's identity information is returned by the CMDB; it is then correlated with multiple records to identify a user. A CMDB is ordinarily part of infrastructure management (it inventories and tracks servers, software, and hardware dependencies). It is a key element of an infrastructure management approach, enabling the system to discover computing devices and their relationships.
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Daoud and Cobb to include wherein obtaining the identity information comprises: generating and sending a query that (a) specifies the source address information and, based on a response to the query, determining the identity information identifying the user, using the teachings of Tyagi. It would have been obvious to a person having ordinary skill in the art to make this combination, with a reasonable expectation of success, for the purpose of tracking down a request to a particular user or device in a large environment (Tyagi, 0003).

As per claim 10, this is a non-transitory computer-readable storage medium claim corresponding to method claim 3, and it is rejected based on similar rationale. As per claim 17, this is a system claim corresponding to method claim 3, and it is rejected based on similar rationale.

Claims 4, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over US 20020087694 A1 (hereinafter "Daoud") in view of US 20070263541 A1 (hereinafter "Cobb"), further in view of US 20200186501 A1 (hereinafter "Neystadt").

With respect to claim 4, Daoud and Cobb teach the method of claim 2. Daoud and Cobb do not explicitly teach wherein obtaining the identity information comprises: determining the identity information based on the response received from a configuration management database (CMDB) associated with at least one of the following: (a) an infrastructure management platform and (b) a network monitoring tool.

Neystadt teaches this limitation:
"The bouncer 110 acquires client identifier, such as IP address. There are a variety of kinds of client identifiers and methods for capturing client identifiers. For example, if the client identifier is the private network IP address of the client 100, the bouncer 110 can obtain it by extracting it from the source IP address field of the packets received by the client 100 carrying the request in step (5) or other method… As yet another example, the client identifier could be a name or other identifier obtained by the bouncer 110 lookup the client in the organizational CMDB (Configuration Management Database), LDAP server, or other database to get an assigned identifier. The CMDB or LDAP database may provide a user or device attribute in response. Many kinds of client identifier can be used in the teachings hereof, but preferably the client identifier uniquely identifies the client amongst other clients in the private network, or uniquely identifies a particular class or category of clients within the private network to which the client 100 belongs." [0052]

Examiner's note: A CMDB is ordinarily part of infrastructure management (it inventories and tracks servers, software, and hardware dependencies). It is a key element of an infrastructure management approach, enabling the system to discover computing devices and their relationships. The bouncer extracts information from packets, effectively making it a network monitoring tool.

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Daoud and Cobb to include wherein obtaining the identity information comprises: determining the identity information based on the response received from a configuration management database (CMDB) associated with at least one of the following: (a) an infrastructure management platform and (b) a network monitoring tool, using the teachings of Neystadt.
It would have been obvious to a person having ordinary skill in the art to make this combination, with a reasonable expectation of success, for the purpose of being able to track down the source of a service request even when interacting with an environment that obfuscates the IP addresses within the packets (Neystadt, 0005).

As per claim 11, this is a non-transitory computer-readable storage medium claim corresponding to method claim 4, and it is rejected based on similar rationale. As per claim 18, this is a system claim corresponding to method claim 4, and it is rejected based on similar rationale.

Claims 5, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over US 20020087694 A1 (hereinafter "Daoud") in view of US 20070263541 A1 (hereinafter "Cobb"), further in view of US 20170353433 A1 (hereinafter "Antony").

With respect to claim 5, Daoud and Cobb teach the method of claim 2. Daoud and Cobb do not explicitly teach wherein obtaining the identity information comprises: determining the identity information based on the response received from a guest operating system (OS) associated with the client system, wherein the guest OS supports a virtual machine (VM) management tool or a network introspection driver.

Antony teaches determining the identity information based on the response received from a guest operating system (OS) associated with the client system: "At 710 and 720 in FIG. 7, upon detecting a traffic flow of packets from 'C1', 'VM1' tags the traffic flow with any suitable data identifying 'C1'. The traffic flow from 'C1' represents an egress traffic flow (may also be referred to as 'outgoing packets') from 'C1' to a destination accessible via physical network 150. In practice, the traffic flow may be detected and tagged by 'VM1' using a guest agent that hooks onto the network stack of guest OS 122. As described using FIG. 3, 'VM1' is aware of, or has access to, the mapping between container ID (see 318 in FIG. 3), container IP address (see 320 in FIG. 3) and tag data (see 328 in FIG. 3)." [0074]; "The guest agent may be installed on guest OS 122 as part of a suite of utilities (e.g., known as 'VM tools')" [0034];

and wherein the guest OS supports a virtual machine (VM) management tool or a network introspection driver: "Example process 500 may be implemented by virtual machine 120, 121 (e.g., using guest agent or 'VM tools' on guest OS 122, 123)" [0057]; "The guest agent may be installed on guest OS 122 as part of a suite of utilities (e.g., known as 'VM tools') for enhancing the performance of virtual machine 120, 121" [0035].

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Daoud and Cobb to include wherein the guest OS supports a virtual machine (VM) management tool or a network introspection driver, using the teachings of Antony. It would have been obvious to a person having ordinary skill in the art to make this combination, with a reasonable expectation of success, for the purpose of implementing a suite of utilities ("VM tools") in the VM that enhances its performance, as well as managing resources when tens to hundreds of virtual machines share the same physical resources (Antony, 0004, 0035).

As per claim 12, this is a non-transitory computer-readable storage medium claim corresponding to method claim 5, and it is rejected based on similar rationale. As per claim 19, this is a system claim corresponding to method claim 5, and it is rejected based on similar rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONG U KIM, whose telephone number is (571) 270-1313. The examiner can normally be reached 9:00am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets, can be reached at 571-272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DONG U KIM/
Primary Examiner, Art Unit 2197

Prosecution Timeline

Oct 06, 2022
Application Filed
Apr 15, 2025
Non-Final Rejection — §102, §103
Jul 18, 2025
Response Filed
Aug 12, 2025
Final Rejection — §102, §103
Dec 15, 2025
Request for Continued Examination
Jan 01, 2026
Response after Non-Final Action
Jan 27, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596564
PRE-LOADING SOFTWARE APPLICATIONS IN A CLOUD COMPUTING ENVIRONMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12596594
REINFORCEMENT LEARNING POLICY SERVING AND TRAINING FRAMEWORK IN PRODUCTION CLOUD SYSTEMS
2y 5m to grant Granted Apr 07, 2026
Patent 12591760
CROSS-INSTANCE INTELLIGENT RESOURCE POOLING FOR DISPARATE DATABASES IN CLOUD NATIVE ENVIRONMENT
2y 5m to grant Granted Mar 31, 2026
Patent 12591449
Merging Streams For Call Enhancement In Virtual Desktop Infrastructure
2y 5m to grant Granted Mar 31, 2026
Patent 12586064
BLOCKCHAIN PROVISION SYSTEM AND METHOD USING NON-COMPETITIVE CONSENSUS ALGORITHM AND MICRO-CHAIN ARCHITECTURE TO ENSURE TRANSACTION PROCESSING SPEED, SCALABILITY, AND SECURITY SUITABLE FOR COMMERCIAL SERVICES
2y 5m to grant Granted Mar 24, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
87%
Grant Probability
99%
With Interview (+13.7%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 702 resolved cases by this examiner. Grant probability derived from career allow rate.
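The dashboard's own figures are internally consistent if the +13.7% interview lift is treated as a relative (multiplicative) uplift on the 87% base grant probability: 87% × 1.137 ≈ 98.9%, which rounds to the displayed 99%. A minimal sketch of that arithmetic, assuming multiplicative lift and capping at 100% (the exact model the tool uses is not stated):

```python
# Hypothetical reconstruction of the "With Interview" figure, assuming the
# +13.7% interview lift is a relative uplift on the base grant probability.
base_grant_probability = 0.87   # examiner's career allow rate (87%)
interview_lift = 0.137          # +13.7% lift in resolved cases with interview

# Apply the lift multiplicatively and cap at 100%.
with_interview = min(base_grant_probability * (1 + interview_lift), 1.0)
print(f"{with_interview:.0%}")  # -> 99%
```

This is only a consistency check on the displayed numbers; an additive reading (87% + 13.7 points ≈ 100.7%) would exceed 100%, which suggests the relative interpretation.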
