Prosecution Insights
Last updated: April 19, 2026
Application No. 18/380,160

THREAD POOL MANAGEMENT FOR DATA TRANSFER BETWEEN INTEGRATED PRODUCTS

Non-Final OA (§102, §103)
Filed
Oct 14, 2023
Examiner
MUDRICK, TIMOTHY A
Art Unit
2198
Tech Center
2100 — Computer Architecture & Software
Assignee
VMware, Inc.
OA Round
1 (Non-Final)
Grant Probability: 84% (favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 84% (447 granted / 532 resolved; +29.0% vs TC avg), above average
Interview Lift: +13.1% (moderate) across resolved cases with interview
Typical Timeline: 2y 11m average prosecution; 32 applications currently pending
Career History: 564 total applications across all art units

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 48.0% (+8.0% vs TC avg)
§102: 29.4% (-10.6% vs TC avg)
§112: 8.4% (-31.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 532 resolved cases.

Office Action

Rejections: §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

The instant application, having Application No. 18/380,160 and filed on 10/14/2023, is presented for examination.

Examiner Notes

Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

Priority

Acknowledgement is made of applicant's claim for priority based on applications IN202341051459 filed in REPUBLIC OF INDIA on 07/31/2023 and #IN202341051459 filed in REPUBLIC OF INDIA on 08/30/2023. Receipt is acknowledged of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.

Drawings

The applicant's drawings submitted are acceptable for examination purposes.

Authorization for Internet Communications

The examiner encourages Applicant to submit an authorization to communicate with the examiner via the Internet by making the following statement (from MPEP 502.03): "Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with the undersigned and practitioners in accordance with 37 CFR 1.33 and 37 CFR 1.34 concerning any subject matter of this application by video conferencing, instant messaging, or electronic mail. I understand that a copy of these communications will be made of record in the application file." Please note that the above statement can only be submitted via Central Fax, regular postal mail, or EFS-Web.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 6-14, 16-23 and 25-29 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Gilgen (US 2009/0044193).

As per claim 1, Gilgen discloses a method for leveraging management functions performed by different integrated products, comprising: executing, using an integration plugin installed on a first integrated product running in a first management node (Paragraph 10 "resource manager"), a first schedule job to assess the first management node for a specified period of time or for a specified number of assessments (Paragraph 10 "A resource manager further can be coupled to the thread pool and the event queue. Moreover, the resource manager can be programmed to allocate additional threads to the thread pool where a number of events enqueued in the event queue exceeds a threshold value and where all threads in the thread pool are busy."); determining, using the integration plugin, whether a thread in a thread pool of the first management node is idle after the specified period of time or the specified number of assessments (Paragraph 12 "A method for managing a thread pool in a SEDA stage can include monitoring both a number of events in a coupled event queue, and a level of busyness of threads in a coupled thread pool. At least one additional thread can be added to the coupled thread pool only when the number of events in the coupled event queue exceeds a threshold value, and when the busyness of the threads in the thread pool exceeds a threshold value."); altering, using the integration plugin, a number of threads allocated for data transfer between a second management node executing a second integrated product and the first management node based on whether the thread is idle (Paragraph 12 "In particular, in a preferred embodiment, at least one additional thread can be added to the coupled thread pool only when the number of events in the coupled event queue exceeds a threshold value, and when the all threads in the thread pool are busy."); and performing, using the integration plugin, the data transfer between the second management node and the first management node based on the altered number of threads (Fig. 2).

As per claim 2, Gilgen further discloses enabling to perform the management functions of the second integrated product through the first integrated product using the transferred data (Paragraph 17 "The SEDA stage 100 further can include an event handler 120 configured to process one or more computing events. Notably, the event handler 120 can be provided by the application which conforms to the SEDA design. The SEDA stage 110 yet further can include a thread pool 130 of runnable threads. In this regard, the runnable threads can be allocated to handle the processing of events in the event handler 120.").

As per claim 3, Gilgen further discloses wherein altering the number of threads allocated for the data transfer comprises: increasing the number of threads allocated for the data transfer between the second management node and the first management node when the thread is found idle in the first management node after the specified period of time or the specified number of assessments (Paragraph 21 "In block 240, only if all threads are busy can a new thread be allocated to the thread pool. Otherwise, in block 250 a delay can be incurred before repeating the process. In this way, although the number of enqueued events can exceed a threshold, adding a new thread in the face of otherwise idle threads can result in performance degradation. Consequently, through the conditional addition of a thread as in the present invention, an order of magnitude performance advantage can be realized over the conventional SEDA design.").

As per claim 4, Gilgen further discloses wherein altering the number of threads allocated for the data transfer comprises: reducing the number of threads allocated for the data transfer between the second management node and the first management node when no thread is found idle in the first management node after the specified period of time or the specified number of assessments (Paragraph 21).
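The conditional growth rule Gilgen is cited for (and that the examiner maps onto the increase/reduce limitations of claims 3-4) can be sketched as follows. This is a minimal illustrative model, not code from either the application or the reference; the class and attribute names are hypothetical, and booleans stand in for real worker state:

```python
from collections import deque


class ThreadPoolStage:
    """Toy model of the SEDA-style stage Gilgen describes (hypothetical names).

    A new worker is allocated only when BOTH conditions hold: the event
    queue backlog exceeds a threshold AND every current worker is busy.
    Per Gilgen's Paragraph 21, adding threads while some remain idle
    would only degrade performance.
    """

    def __init__(self, queue_threshold=10, initial_workers=4):
        self.events = deque()
        self.queue_threshold = queue_threshold
        self.workers = [False] * initial_workers  # True = busy, False = idle

    def maybe_grow(self):
        backlog_high = len(self.events) > self.queue_threshold
        all_busy = all(self.workers)
        if backlog_high and all_busy:
            self.workers.append(False)  # allocate one additional thread
            return True
        return False  # otherwise delay and re-assess later (block 250)
```

With a backlog of five events over a threshold of two, the pool grows only while every worker is marked busy; as soon as an idle worker exists, further growth is refused even though the backlog persists.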
As per claim 6, Gilgen further discloses wherein performing the data transfer between the second management node and the first management node comprises: executing a second schedule job that invokes a job queue by inserting different topics into the job queue and triggers a business rule to import the data for each topic in the job queue (Paragraph 21); and performing the data import for each topic from the second management node to the first management node by processing the topics in parallel using the altered number of threads (Paragraph 21).

As per claim 7, Gilgen further discloses wherein performing the data import for each topic comprises: determining, by the second schedule job, a number of available threads for the data import based on the altered number of threads and the number of threads being occupied for the data import (Paragraph 20); selecting, by the second schedule job, one or more topics for performing the data import based on the number of available threads (Paragraph 20 "upon detecting an excessive number of events enqueued in the event queue 110, the resource manager 140 of the SEDA stage 100 can add an additional thread to the thread pool 130 only if all threads in the thread pool 130 are considered 'busy'."); and triggering, by the second schedule job, the business rule to perform the data import for the selected topics by occupying the available threads (Paragraph 20).
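The availability arithmetic recited in claim 7 (available threads = altered allocation minus threads already occupied, then select that many topics) can be sketched with hypothetical names; this is an illustration of the claim language, not code from the record:

```python
def select_topics(job_queue, allocated, occupied):
    """Pick as many pending topics as there are free import threads.

    `allocated` is the (possibly just altered) thread budget and
    `occupied` the count already running imports; both are hypothetical
    names for the quantities the claim recites.
    """
    available = max(allocated - occupied, 0)  # never negative
    selected = job_queue[:available]          # topics to import now, in parallel
    remaining = job_queue[available:]         # topics left queued for later
    return selected, remaining
```

For example, with an allocation of 4 threads and 2 already occupied, two topics are dequeued for parallel import and the rest stay queued.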
As per claim 8, Gilgen further discloses wherein determining whether the thread is idle comprises: assessing a transaction table that stores statistics data representing active transactions in the first management node and information of threads that are processing the active transactions (Paragraph 16 "The enhanced SEDA stage can be configured to manage a logical thread pool in which new threads are added to the pool only when a number of queue events in an associated stage queue exceeds a threshold value, and when already allocated threads in the logical thread pool are busy"); and determining whether the thread is idle based on the assessment (Paragraph 16).

As per claim 9, Gilgen further discloses wherein altering the number of threads allocated for the data transfer comprises: configuring a maximum number and a minimum number of threads that could be allocated for the data transfer of the integration plugin (Fig. 2); and based on other operations being carried out in the first management node, increasing the number of threads up to the maximum number that could be allocated for the data transfer or reducing the number of threads up to the minimum number that could be allocated for the data transfer (Fig. 2).

As per claim 10, Gilgen further discloses wherein configuring the maximum number and the minimum number of threads that could be allocated to the data transfer comprises: evaluating traffic data and transaction data that are being performed in the first management node (Fig. 2); and auto-calibrate the maximum number and the minimum number of threads that could be allocated for the data transfer based on the traffic data and the transaction data (Fig. 2).

As per claims 11-14 and 16-20, they are system claims having similar limitations as cited in claims 1-4 and 6-10 and are rejected under the same rationale.
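Claims 9-10 describe bounding the allocation within a configured [min, max] window and auto-calibrating that window from observed traffic and transaction load. A minimal sketch follows; the calibration heuristic and all names are invented for illustration and are not taken from the record:

```python
def calibrate_bounds(avg_traffic, avg_transactions, hard_cap=32):
    """Hypothetical auto-calibration heuristic: scale the allowable
    thread window with observed load, capped at a hard limit."""
    load = avg_traffic + avg_transactions
    min_threads = max(1, load // 10)
    max_threads = min(hard_cap, max(min_threads, load // 2))
    return min_threads, max_threads


def clamp_threads(requested, min_threads, max_threads):
    """Claim 9-style bound enforcement: the altered thread count never
    leaves the configured [min, max] window."""
    return max(min_threads, min(requested, max_threads))
```

The point of the split is that calibration runs periodically against load statistics, while clamping runs on every alteration so that bursty adjustments cannot starve (below min) or swamp (above max) the node's other operations.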
As per claims 21-23 and 25-29, they are apparatus claims having similar limitations as cited in claims 1-4 and 6-10 and are rejected under the same rationale.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 5, 15 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Gilgen in view of Khante (US 11,232,125).

As per claim 5, Gilgen does not expressly disclose but Khante discloses wherein performing the data transfer between the second management node and the first management node comprises: obtaining an API response from the second management node by querying the second management node using an application program interface (API) call, the API response comprising the data associated with the second integrated product (Column 55, lines 30-55 "In contrast, the virtual machine monitoring application stores large volumes of minimally processed machine data, such as performance information and log data, at ingestion time for later retrieval and analysis at search time when a live performance issue is being investigated. In addition to data obtained from various log files, this performance-related information can include values for performance metrics obtained through an application programming interface (API) provided as part of the vSphere Hypervisor™ system distributed by VMware, Inc. of Palo Alto, Calif. For example, these performance metrics can include: (1) CPU-related performance metrics; (2) disk-related performance metrics; (3) memory-related performance metrics; (4) network-related performance metrics; (5) energy-usage statistics; (6) data-traffic-related performance metrics; (7) overall system availability performance metrics; (8) cluster-related performance metrics; and (9) virtual machine performance statistics."); parsing the API response (Column 55, lines 30-55); converting the parsed API response into a defined format corresponding to the first integrated product (Column 55, lines 30-55); and persisting the converted API response in a database associated with the first integrated product by making a platform call that enables the integration plugin to interact with the database (Column 55, lines 30-55).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Gilgen to include the teachings of Khante, because doing so provides for monitoring the efficiency of the system in order to minimize costs to the user. In this way, the combination benefits because the end result is easy to access via API and is cheaper than conventionally performed.

As per claim 15, it is a system claim having similar limitations as cited in claim 5 and is thus rejected under the same rationale.

As per claim 24, it is a medium claim having similar limitations as cited in claim 5 and is thus rejected under the same rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Blythe (US 2004/0139434) discloses that thread pools in a multithreaded server are programmatically adjusted, based on observed statistics from the server's inbound workload. In a multithreaded server environment, response time to end users is improved while increasing the efficiency of software execution and resource usage. Execution time and wait/queued time are tracked, for various types of requests being serviced by a server. Multiple logical pools of threads are used to service these requests, and inbound requests are directed to a selected one of these pools such that requests of similar execution-time requirements are serviced by the threads in that pool. The number and size of thread pools may be adjusted programmatically, and the distribution calculation (i.e., determining which inbound requests should be assigned to which pools) is a programmatic determination. In preferred embodiments, only one of these variables is adjusted at a time, and the results are monitored to determine whether the effect was positive or negative. The disclosed techniques also apply to tracking and classifying requests by method name (and, optionally, parameters).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIMOTHY A MUDRICK, whose telephone number is (571) 270-3374. The examiner can normally be reached 9am-5pm Central Time.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre Vital, can be reached at (571) 272-4215. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TIMOTHY A MUDRICK/
Primary Examiner, Art Unit 2198
1/20/2026

Prosecution Timeline

Oct 14, 2023
Application Filed
Jan 20, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602243
METHOD AND SYSTEM FOR MIGRATABLE COMPOSED PER-LCS SECURE ENCLAVES
2y 5m to grant; granted Apr 14, 2026
Patent 12591463
DATA TRANSMISSION METHOD AND DATA TRANSMISSION SERVER
2y 5m to grant; granted Mar 31, 2026
Patent 12585501
MACHINE-LEARNING (ML)-BASED RESOURCE UTILIZATION PREDICTION AND MANAGEMENT ENGINE
2y 5m to grant; granted Mar 24, 2026
Patent 12578971
Container Storage Interface Filter Driver-based Use of a Non-Containerized-Based Storage System with Containerized Applications
2y 5m to grant; granted Mar 17, 2026
Patent 12561174
FRAMEWORK FOR EFFECTIVE STRESS TESTING AND APPLICATION PARAMETER PREDICTION
2y 5m to grant; granted Feb 24, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview: 97% (+13.1%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 532 resolved cases by this examiner. Grant probability derived from career allow rate.
