Prosecution Insights
Last updated: April 19, 2026
Application No. 18/536,885

Memory Pooling Method and Related Apparatus

Non-Final OA: §102, §103, §112
Filed: Dec 12, 2023
Examiner: KIM, DONG U
Art Unit: 2197
Tech Center: 2100 — Computer Architecture & Software
Assignee: Huawei Technologies Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% — above average (610 granted / 702 resolved; +31.9% vs TC avg)
Interview Lift: +13.7% (moderate), measured across resolved cases with interview
Typical Timeline: 2y 10m average prosecution
Career History: 737 total applications across all art units; 35 currently pending
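The headline figures above reduce to simple arithmetic. As a sanity check (the multiplicative reading of the interview lift is an assumption, but it is the reading that reconciles the 87% allow rate and +13.7% lift with the quoted 99%):

```python
# Sanity-check the examiner statistics shown above.
granted, resolved = 610, 702

allow_rate = granted / resolved            # career allow rate
print(f"allow rate: {allow_rate:.1%}")     # 86.9%, shown as 87%

# Assumed: the +13.7% interview lift is relative to the base rate.
with_interview = allow_rate * 1.137
print(f"with interview: {with_interview:.1%}")  # ~98.8%, shown as 99%
```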

Statute-Specific Performance

§101: 10.4% (-29.6% vs TC avg)
§103: 44.2% (+4.2% vs TC avg)
§102: 10.4% (-29.6% vs TC avg)
§112: 28.0% (-12.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 702 resolved cases

Office Action

Rejections: §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.— The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 16 and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

The term "big data application type" in claims 16 and 20 is a relative term which renders the claims indefinite. The term "big data" is not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. It is unclear what would be considered a big data application versus a non-big data application.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 2, 5, 6, 9, 10 and 13-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Koster et al. (Pat. 10,009,251) (hereafter Koster).

As per claim 1, Koster teaches: A method comprising: determining a first memory requirement of a first distributed application, wherein the first memory requirement indicates a memory size; ([Column 14 line 31-67], The set of containers may include an operating-system level virtualization method for deploying and running distributed applications in an isolated environment (e.g., without launching a virtual machine for each application). [Column 21 line 1-48], A stream computing environment may include a set of three containers A, B, and C. A set of threshold parameter values may indicate memory requirements of 2 gigabytes, 4 gigabytes, and 6 gigabytes for containers A, B, and C, respectively. A set of target parameter values may indicate memory requirements of 5, 6, and 10 gigabytes for containers A, B, and C, respectively. A shared pool of configurable computing resources may have a total of 16 gigabytes of memory available for use by the set of containers.)

determining N first server nodes from among second server nodes in a server cluster based on the first memory requirement and available memory resources of the second server nodes, wherein N is an integer greater than or equal to 2; ([Column 15 line 18-36], In embodiments, the collecting, the determining, the processing, and the other steps described herein may be carried-out by an internal tuple traffic management module maintained in a persistent storage device of a local computing device (e.g., network node). In embodiments, the collecting, the determining, the processing, and the other steps described herein may be carried-out by an external tuple traffic management module hosted by a remote computing device or server (e.g., server accessible via a subscription, usage-based, or other service model). In this way, aspects of tuple traffic management in a stream computing environment to process a stream of tuples may be performed using automated computing machinery without manual action. Other methods of performing the steps described herein are also possible. [Column 3 line 33-58], Scalability is achieved by distributing an application across nodes by creating executables (i.e., processing elements), as well as replicating processing elements on multiple nodes and load balancing among them. Stream operators in a stream computing application can be fused together to form a processing element that is executable. Doing so allows processing elements to share a common process space, resulting in much faster communication between stream operators than is available using inter-process communication techniques (e.g., using a TCP/IP socket). Further, processing elements can be inserted or removed dynamically from an operator graph representing the flow of data through the stream computing application. A particular stream operator may not reside within the same operating system process as other stream operators. In addition, stream operators in the same operator graph may be hosted on different nodes, e.g., on different compute nodes or on different cores of a compute node. [Column 9 line 51-55], The computing infrastructure 400 includes a management system 405 and two or more compute nodes 410A-410D—i.e., hosts—which are communicatively coupled to each other using one or more communications networks 420. [Column 13 line 29-43], The example operator graph shown in FIG. 8 includes ten processing elements (labeled as PE1-PE10) running on the compute nodes 410A-410D. [Column 17 line 5-61], For instance, determining may include comparing the set of soft limits and the set of hard limits with respect to the current topology of the stream computing environment as well as a pool of available computing resources to ascertain a tuple flow model that achieves at least the set of soft limits and does not exceed the set of hard limits.)

constructing a memory pool based on the N first server nodes; and providing a first service for the first distributed application based on the memory pool. ([Column 5 line 31-67], Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service … Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). [Column 9 line 10-33], Service Level Agreement (SLA) planning and fulfillment provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. A cloud manager 65 is representative of a cloud manager (or shared pool manager) as described in more detail below. While the cloud manager 65 is shown in FIG. 3 to reside in the management layer 64, cloud manager 65 can span all of the levels shown in FIG. 3, as discussed below.)

As per claim 2, rejection of claim 1 is incorporated: Koster teaches wherein determining the N first server nodes comprises: determining a first type of the first distributed application; and further determining the N first server nodes based on the first type. ([Column 5 line 58-67 – Column 6 line 1-2], Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported providing transparency for both the provider and consumer of the utilized service. [Column 22 line 40-67 – Column 23 line 1-25], For instance, the set of tuple traffic indicators may indicate the amount (e.g., 10 gigabytes), rate (e.g., 1000 tuples per second), type (e.g., Internet-of-Things data), workload intensity (e.g., heavy, light), priority level (e.g., high, low), congestion (e.g., 20% backpressure) or other factors that describe the nature of tuple flow with respect to the set of containers.)

As per claim 17, rejection of claim 1 is incorporated: Koster teaches wherein a sum of the available memory resources of the N first server nodes meets the first memory requirement. ([Column 21 line 1-48], A set of threshold parameter values may indicate memory requirements of 2 gigabytes, 4 gigabytes, and 6 gigabytes for containers A, B, and C, respectively.
A set of target parameter values may indicate memory requirements of 5, 6, and 10 gigabytes for containers A, B, and C, respectively. A shared pool of configurable computing resources may have a total of 16 gigabytes of memory available for use by the set of containers.)

As per claim 18, rejection of claim 1 is incorporated: Koster teaches further comprising obtaining a mapping relationship between the first memory requirement and a quantity of first server nodes required for constructing the memory pool before determining the N first server nodes. ([Column 21 line 1-48], In embodiments, the set of threshold parameter values for the set of utilization parameters may be prioritized relative to the set of target parameter values for the set of utilization parameters at block 1044. The prioritizing may be performed when resolving the tuple flow model with respect to the set of containers in the stream computing environment. Generally, prioritizing can include arranging, weighting, organizing, promoting, ranking, or otherwise favoring the set of threshold parameter values relative to the set of target parameter values. In embodiments, prioritizing may include balancing system resources of the stream computing environment with respect to the set of containers to facilitate achieving the set of threshold parameter values (e.g., soft limits, lower thresholds) for one or more containers before achieving the set of target parameter values (e.g., hard limits, upper thresholds) for a container of the set of containers (e.g., to prioritize basic functionality/operation of each container before fulfilling extra performance requirements). For instance, prioritizing may include allocating system resources from the shared pool of configurable resources to the set of containers such that the set of threshold parameter values for each container are fulfilled, and subsequently re-evaluating the set of containers with respect to the remaining available system resources to ascertain how they may be distributed to achieve the set of target parameter values for one or more containers of the set of containers. Consider the following example. A stream computing environment may include a set of three containers A, B, and C. A set of threshold parameter values may indicate memory requirements of 2 gigabytes, 4 gigabytes, and 6 gigabytes for containers A, B, and C, respectively. A set of target parameter values may indicate memory requirements of 5, 6, and 10 gigabytes for containers A, B, and C, respectively. A shared pool of configurable computing resources may have a total of 16 gigabytes of memory available for use by the set of containers. Accordingly, the set of threshold parameter values may be prioritized such that 2 gigabytes are allocated to container A, 4 gigabytes are allocated to container B, and 6 gigabytes are allocated to container C. Subsequent to achievement of the threshold parameter values for the set of containers, remaining resources may be distributed among the set of containers to promote positive performance of stream computing applications (e.g., an additional 4 gigabytes may be allocated to container C; an additional gigabyte may be distributed to each of containers A, B, and C). Other example usage cases (e.g., in which the set of target parameter values are prioritized with respect to the set of threshold parameter values) are also possible. Other methods of prioritizing the set of threshold parameter values relative to the set of target parameter values are also possible.
) As per claim 19, rejection of claim 2 is incorporated: Koster teaches wherein the first type is a database application type. ([Column 3 line 20-32], Stream-based computing and stream-based database computing are emerging as a developing technology for database systems.)

As per claim 20, rejection of claim 2 is incorporated: Koster teaches wherein the first type is a big data application type. ([Column 3 line 20-32], Stream-based computing and stream-based database computing are emerging as a developing technology for database systems.)

As per claims 5-6, these are server cluster claims, corresponding to the method claims 1 and 2. Therefore, rejected based on similar rationale.

As per claims 9-10 and 13-16, these are non-transitory computer-readable storage medium claims, corresponding to the method claims 1-2 and 17-20. Therefore, rejected based on similar rationale.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3, 7 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Koster in view of Tsirkin et al. (Pub. 2022/0276889) (hereafter Tsirkin).

As per claim 3, rejection of claim 2 is incorporated: Although Koster discloses dynamic allocation/elasticity of physical and/or virtual resource(s) based on the type of application, Koster does not explicitly disclose wherein determining the N first server nodes further comprises: determining a memory ballooning coefficient corresponding to the first distributed application based on the first type; and further determining the N first server nodes based on the memory ballooning coefficient.

Tsirkin teaches wherein determining the N first server nodes further comprises: determining a memory ballooning coefficient corresponding to the first distributed application based on the first type; and further determining the N first server nodes based on the memory ballooning coefficient. ([Paragraph 13], The guest operating system may inflate the memory balloon to reduce the amount of host memory in use by the virtual machine or may deflate the memory balloon to increase the amount of host memory in use by the virtual machine. [Paragraph 42], The content of the request may include data that is associated with the memory balloon. The data may indicate or identify the memory balloon, guest operating system, virtual machine, hypervisor, host, memory, other entity or a combination thereof. The content may also or alternatively indicate whether a size of the memory balloon should change (e.g., increase or decrease) and may include a size value. The size value may correspond to a size of the memory balloon (e.g., past, current, or future total size) or the size of a change to the memory balloon (e.g., increment size, decrement size). In one example, the request may indicate a particular amount of memory to add to the memory balloon (e.g., 1 GB).)

It would have been obvious to a person with ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Koster, wherein the memory requirement of a distributed application is determined and the number of server nodes needed to create a pool of resources (i.e., memory) based on that memory requirement is determined for a service provided by the distributed application, with the teachings of Tsirkin, wherein a memory ballooning coefficient (i.e., a set memory size) is determined, because this would enhance the teachings of Koster: by providing resource ballooning, resources such as memory can be inflated/deflated to reduce the amount of host resources (i.e., memory, CPU, network, etc.) used by the distributed application (i.e., container, VM) based on its type.

As per claim 7, this is a server cluster claim, corresponding to the method claim 3. Therefore, rejected based on similar rationale.

As per claim 11, this is a non-transitory computer-readable storage medium claim, corresponding to the method claim 3. Therefore, rejected based on similar rationale.

Allowable Subject Matter

Claims 4, 8 and 12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONG U KIM, whose telephone number is (571)270-1313. The examiner can normally be reached 9:00am - 5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bradley Teets, can be reached at (571)272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DONG U KIM/
Primary Examiner, Art Unit 2197
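The Koster allocation example quoted in the claim 18 rejection (soft limits of 2/4/6 GB, hard limits of 5/6/10 GB, and a 16 GB shared pool) can be read as a two-phase allocator: satisfy every soft limit first, then distribute the remainder toward the hard limits. The sketch below only illustrates that quoted passage; it is not code from Koster or from the application, and the function name and the largest-gap-first second phase are assumptions.

```python
def allocate(thresholds, targets, total):
    """Two-phase allocation: meet every soft limit (threshold) first,
    then hand out the remaining pool toward the hard limits (targets),
    one gigabyte at a time, largest remaining gap first."""
    alloc = dict(thresholds)                 # phase 1: soft limits
    remaining = total - sum(alloc.values())
    assert remaining >= 0, "pool cannot cover the soft limits"
    while remaining > 0:                     # phase 2: toward targets
        gaps = {c: targets[c] - alloc[c]
                for c in alloc if alloc[c] < targets[c]}
        if not gaps:
            break                            # every target already met
        c = max(gaps, key=gaps.get)          # largest gap gets the next GB
        alloc[c] += 1
        remaining -= 1
    return alloc

# The example from the office action: 16 GB shared pool.
thresholds = {"A": 2, "B": 4, "C": 6}
targets = {"A": 5, "B": 6, "C": 10}
result = allocate(thresholds, targets, 16)
```

Phase 1 consumes 12 GB meeting every soft limit; phase 2 distributes the remaining 4 GB toward the hard limits, consistent with Koster's observation that the remainder can be spread across the containers in more than one way.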

Prosecution Timeline

Dec 12, 2023: Application Filed
Mar 18, 2026: Examiner Interview (Telephonic)
Mar 23, 2026: Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596564: PRE-LOADING SOFTWARE APPLICATIONS IN A CLOUD COMPUTING ENVIRONMENT
2y 5m to grant • Granted Apr 07, 2026
Patent 12596594: REINFORCEMENT LEARNING POLICY SERVING AND TRAINING FRAMEWORK IN PRODUCTION CLOUD SYSTEMS
2y 5m to grant • Granted Apr 07, 2026
Patent 12591760: CROSS-INSTANCE INTELLIGENT RESOURCE POOLING FOR DISPARATE DATABASES IN CLOUD NATIVE ENVIRONMENT
2y 5m to grant • Granted Mar 31, 2026
Patent 12591449: Merging Streams For Call Enhancement In Virtual Desktop Infrastructure
2y 5m to grant • Granted Mar 31, 2026
Patent 12586064: BLOCKCHAIN PROVISION SYSTEM AND METHOD USING NON-COMPETITIVE CONSENSUS ALGORITHM AND MICRO-CHAIN ARCHITECTURE TO ENSURE TRANSACTION PROCESSING SPEED, SCALABILITY, AND SECURITY SUITABLE FOR COMMERCIAL SERVICES
2y 5m to grant • Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview (+13.7%): 99%
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 702 resolved cases by this examiner. Grant probability derived from career allow rate.
