Prosecution Insights
Last updated: April 19, 2026
Application No. 18/343,000

DYNAMIC ALLOCATION OF SHARED MEMORY AMONG MULTIPLE THREADS VIA USE OF A DYNAMICALLY CHANGING MEMORY THRESHOLD

Non-Final Office Action: §103, §112
Filed: Jun 28, 2023
Examiner: GHAFFARI, ABU Z
Art Unit: 2195
Tech Center: 2100 (Computer Architecture & Software)
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% (above average; 533 granted / 676 resolved; +23.8% vs TC avg)
Interview Lift: +47.3% (strong), among resolved cases with an interview
Avg Prosecution: 3y 4m typical; 44 applications currently pending
Career History: 720 total applications across all art units

Statute-Specific Performance

§101: 16.8% (-23.2% vs TC avg)
§103: 39.9% (-0.1% vs TC avg)
§102: 0.1% (-39.9% vs TC avg)
§112: 36.8% (-3.2% vs TC avg)
Tech Center averages are estimates; based on career data from 676 resolved cases.

Office Action

Rejections: §103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or joint inventor regards as the invention.

The following term lacks proper antecedent basis: "the steps" in claim 1, line 2.

The following claim language is not clearly understood:

Claim 1 recites "calculate a memory threshold" without clearly reciting a threshold of what, i.e., available memory, memory requested, allocated memory, or memory usage by the request.

Claim 1 recites "input parameters" without ever reciting what the input parameters are.

Claim 1 recites "shared memory" without clearly reciting whether the shared memory is processor memory, memory of the computer system, or memory of an external system.

Claim 1 recites "request from a requesting execution thread". It is unclear whether the thread requesting memory is already executing on the processor and requesting additional memory, or will be executed on the processor after allocation of memory.

Claim 1 recites a "dynamically changing threshold" in the preamble while the claim later recites "a threshold", and no change in threshold value is recited directly or implied indirectly. It is unclear whether the threshold is dynamic or static in claim 1.

Claims 15 and 18 recite elements of claim 1 and have similar deficiencies as claim 1; therefore, they are rejected for the same rationale. The remaining dependent claims 2-14, 16-17, and 19-20 are also rejected due to similar deficiencies inherited from the rejected independent claims.
* Applicant is advised to at least indicate support in the specification for further defining/clarifying the claim language in case Applicant believes amendments would unduly narrow the scope of the claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 15-16, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Carbon-Ogden et al. (US 2023/0036737 A1, hereafter Carbon-Ogden) in view of Kesselman (US 9,152,549 B1, hereafter Kesselman).
As per claim 1, Carbon-Ogden teaches the invention substantially as claimed, including a method for dynamically allocating shared memory among multiple execution threads via use of a dynamically changing memory threshold ([0002] application, processes, allocate, memory, used; [0003] predict safe amount of memory that application can allocate without being terminated, multiple processes/applications, as processes execute, allocating and deallocating memory as may be needed; [0117] upper threshold), said method comprising the steps of:

(a) executing, by one or more processors of a computer system, a trained machine learning model (MLM) to calculate a memory threshold (MTH) using values of one or more input parameters as input to the MLM, said MLM having been previously trained based on the one or more input parameters ([0006] use one or more neural networks trained via machine learning to predict, based on the memory metrics, memory usage information for the application; such memory usage information may include information regarding a safe amount of memory that can be allocated without being in danger of being terminated by the computing device; [0117] use machine learning to predict upper thresholds for each of a plurality of memory metrics, one or more neural networks trained to determine upper thresholds based on the highest value reached by each of the plurality of memory metrics; [0007] neural networks may be trained using training data; different types of allocations of memories);

(b) after said executing the MLM ([0123] use machine learning to predict a safe amount of memory available for application, fig. 3, 304), receiving, by the one or more processors, a request from a requesting execution thread for a requested amount (MR) of the shared memory ([0003] processes execute, allocating memory, as needed); and

(c) in response to the request, redistributing, by the one or more processors, the shared memory among one or more current execution threads currently using the shared memory and the requesting execution thread ([0008] adjusting, amount of memory allocated by the application; [0003] processes execute, allocating/deallocating memory as needed; [0038] processes executing at one or more processors), wherein said redistributing is performed as a function of MTH, MR, MU, and MC, wherein MU is a total amount of the shared memory currently being used by the one or more current execution threads, and wherein MC is a memory capacity of the shared memory ([0008] adjusting, amount of memory allocated, predicted safe amount of memory available for allocation; [0038] usage of RAM by processes; [0092] totalMem - total memory; [0120] memory metrics, current value, availMem memory metric, upper threshold; [0114] memory metrics, how much memory is being used, amount of physical memory used by the process, total program size of the process).

Carbon-Ogden does not specifically teach receiving, by the one or more processors, a request from a requesting execution thread for an amount of the memory; in response to the request, redistributing the memory among one or more threads; or redistributing based on MR and MU. Kesselman, however, teaches receiving, by the one or more processors, a request from a requesting execution thread for an amount of the memory (col. 4, lines 39-43: server receives request to allocate memory for a process associated with a class A-E; col. 3, lines 60-61: CRi, minimum memory allocation for the respective quality of service); in response to the request, redistributing the memory among one or more threads (col. 12, lines 20-25: dynamically adjust the memory allocations for the classes, e.g., processes, threads); and redistributing based on MR and MU (col. 12, lines 20-25: adjust the memory allocations for the classes, e.g., processes, threads, based on the amount of memory used and the current amount of free memory, i.e., free + used = total; col. 3, lines 60-61: CRi, minimum memory allocation for the respective quality of service).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the analogous prior art of Carbon-Ogden with Kesselman's teachings of receiving a request for an amount of memory from a process and dynamically adjusting the memory allocations for the classes/processes based on memory used and free memory, to improve efficiency and allow receiving, by the one or more processors, a request from a requesting execution thread for an amount of the memory; in response to the request, redistributing the memory among one or more threads; and redistributing based on MR and MU in the method of Carbon-Ogden, as in the instant invention. The combination would have been obvious because applying the known method of dynamically adjusting the amount of memory allocated to processes based on used and free memory upon a request for memory by a process, as taught by Kesselman, to the memory management method of Carbon-Ogden would yield predictable results and improved efficiency.
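For orientation only, the claim 1 method that this rejection maps across the two references can be sketched as a threshold-gated allocator. This is a hypothetical illustration, not the applicant's implementation or either reference's disclosure; the toy model standing in for the trained MLM and the fallback of capping a grant at remaining capacity are assumptions.

```python
def handle_request(mlm, params, mr, mu, mc):
    """Hypothetical sketch of claim 1 steps (a)-(c).

    mlm    : trained model predicting the memory threshold MTH (step (a))
    params : input parameter values fed to the model
    mr     : requested amount of shared memory (step (b))
    mu     : shared memory currently in use; mc: total shared-memory capacity
    """
    mth = mlm(params)                # (a) compute the dynamic threshold MTH
    if mu + mr <= mth:               # (c) redistribute as f(MTH, MR, MU, MC)
        return mr                    # request fits under the threshold: grant in full
    return max(0, min(mr, mc - mu))  # assumption: otherwise cap at remaining capacity

# Toy stand-in for the trained MLM: threshold is 80% of capacity.
toy_mlm = lambda params: 0.8 * params["mc"]
```

A request of 10 units against 60 in use (capacity 100, threshold 80) is granted in full, while a request of 50 is capped at the 40 units of remaining capacity.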
As per claim 2, Carbon-Ogden teaches said method comprising the steps of: after execution of steps (a)-(c), re-performing steps (a)-(c) using different values of the one or more input parameters, resulting in a change in the calculated memory threshold MTH in re-performed step (a) ([0007] one or more neural networks may be trained using training data, performed hundreds of times in each computing device across tens, hundreds, or thousands of different computing devices, data from stress tests included in training data; [0141] machine learning model trained to receive input data of one or more types and in response provide output data of one or more types; [0116] periodically determine upper threshold for each of a plurality of memory metrics; [0239] predicted safe amount of memory available for allocation by the application), wherein the request from the requesting execution thread in re-performed step (b) is replaced by a different request from a different requesting execution thread for another amount (MR) of the shared memory, and wherein the redistributed shared memory has changed in re-performed step (c) ([0008] adjusting, based on predicted safe amount of memory available for allocation, adjust an amount of memory allocated by the application). Kesselman teaches the remaining claim element of wherein the request from the requesting execution thread in re-performed step (b) is replaced by a different request from a different requesting execution thread for another amount (MR) of the shared memory (col. 1, lines 25-30: second request, higher quality of service requirement than the first process; col. 2, lines 45-56, fig. 2: request 204, process 224).

Claim 15 recites a computer program product, comprising one or more computer readable hardware storage devices having computer readable program code stored therein, said program code containing instructions executable by one or more processors of a computer system to implement a method, with elements similar to claim 1.
Therefore, it is rejected for the same rationale. Claim 16 recites a computer program product with elements similar to claim 2; therefore, it is rejected for the same rationale. Claim 18 recites a computer system, comprising one or more processors, one or more memories, and one or more computer readable hardware storage devices, said one or more hardware storage devices containing program code executable by the one or more processors via the one or more memories to implement a method, with elements similar to claim 1; therefore, it is rejected for the same rationale. Claim 19 recites the computer system with elements similar to claim 2; therefore, it is rejected for the same rationale.

Claims 3-7, 10, 13, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Carbon-Ogden in view of Kesselman, as applied to the claims above, and further in view of Dunshea et al. (US 2009/0113433 A1, hereafter Dunshea).

As per claim 3, Carbon-Ogden teaches wherein said redistributing in step (c) comprises the steps of ([0008] adjusting, amount of memory allocation):

(c1) determining whether MU + MR ≤ MTH, and if so then allocating the requested amount of shared memory MR to the requesting execution thread, and if not then performing step (c2) ([0031] monitor the usage of RAM by processes, determine the usage of RAM exceeds a low memory termination threshold; [0092] available memory of the system; [0095] threshold; [0096] total memory; [0098] memory usage of one or more processes executing; [0099] background process periodically determines values of the one or more memory metrics; [0100] safe amount of memory in RAM that is available for allocation by application; [010] usage exceeds a threshold; [0102] already allocated certain amount of memory in RAM, amount of memory additional to what has already been allocated, headroom; [0103]);

(c2) determining whether MU + MR ≤ MC, and if so then performing step (c3) ([0031] monitor the usage of RAM by processes; [0096] total memory; [0098] memory usage of one or more processes executing), and if not then setting ΔM = MU + MR - MC followed by performing step (c5);

(c3) determining whether the request is from a new requesting thread and MU + MMIN ≤ MC, and if so then performing step (c4), and if not ([0031] monitor the usage of RAM by processes, determine the usage of RAM exceeds a low memory termination threshold; [0098] memory usage of one or more processes executing; [0099] background process periodically determines values of the one or more memory metrics) then setting ΔM = MU + MMIN - MC followed by performing step (c5), wherein MMIN is a predetermined minimum amount of shared memory;

(c4) allocating the predetermined minimum amount of shared memory MMIN to the requesting execution thread;

(c5) instructing N top execution threads to complete processing of, and then release, a fraction (f) of the shared memory currently used (MN) by the N top execution threads, wherein the N top execution threads are those existing execution threads that hold the most shared memory or hold shared memory for the longest period of time, wherein the N top execution threads do not include the requesting execution thread, wherein f and N are constrained to satisfy f * MN ≥ ΔM, and wherein 0 < f ≤ 1 and N is a positive integer of at least 1.
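The decision flow of steps (c1)-(c5) recited above can be sketched as follows. This is a hypothetical reading of the claim language, not the applicant's code; in particular, choosing the smallest fraction f satisfying f * MN ≥ ΔM is only one possible instantiation of the (c5) constraint.

```python
def redistribute(mu, mr, mth, mc, m_min, is_new_thread, mn):
    """Hypothetical sketch of claim 3 steps (c1)-(c5).

    mu: memory in use, mr: requested amount, mth: ML-predicted threshold,
    mc: capacity, m_min: predetermined minimum allocation, mn: shared
    memory held by the N top (non-requesting) threads.
    """
    if mu + mr <= mth:                          # (c1) fits under MTH: grant MR
        return ("allocate", mr)
    if mu + mr <= mc:                           # (c2) fits within capacity
        if is_new_thread and mu + m_min <= mc:  # (c3) new thread, minimum still fits
            return ("allocate", m_min)          # (c4) grant the predetermined minimum
        delta_m = mu + m_min - mc
    else:
        delta_m = mu + mr - mc
    # (c5) N top threads release fraction f of MN; claim constrains
    # f * MN >= delta_m with 0 < f <= 1. Here: smallest such f.
    f = min(1.0, delta_m / mn)
    return ("release_fraction", f)
```

For example, with MU = 90, MR = 30, MTH = 70, MC = 100, and MN = 40, the sketch falls through to (c5) with ΔM = 20 and asks the top threads to release f = 0.5 of their holdings.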
Kesselman teaches the remaining claim elements of: (c3) determining whether the request is from a new requesting thread (col. 2, lines 45-55: issues a request 202 to allocate memory for a process) and MU + MMIN ≤ MC, and if so then performing step (c4) (col. 3, lines 23-35: accepts request to allocate memory for a process based on current amount of free memory, minimum memory allocated for the quality of service class, total amount of memory), wherein MMIN is a predetermined minimum amount of shared memory; (c4) allocating the predetermined minimum amount of shared memory MMIN to the requesting execution thread (col. 3, lines 23-35: minimum memory allocated for the quality of service class); and (c5) instructing N top execution threads to complete processing of, and then release, a fraction (f) of the shared memory currently used (MN) by the N top execution threads (col. 6, lines 30-34: request completed, releases memory allocated to the request), wherein the N top execution threads are those existing execution threads that hold the most shared memory or hold shared memory for the longest period of time, wherein the N top execution threads do not include the requesting execution thread, wherein f and N are constrained to satisfy f * MN ≥ ΔM, and wherein 0 < f ≤ 1 and N is a positive integer of at least 1 (col. 6, lines 39-46: release memory allocated to accommodate the request at any level, memory allocation for class/process/thread; ΔM can be set based on the available information). Carbon-Ogden and Kesselman, in combination, do not specifically teach releasing by N top execution threads, wherein the N top execution threads are those existing execution threads that hold the most shared memory or hold shared memory for the longest period of time.
Dunshea, however, teaches releasing by N top execution threads ([0005] suspend/kill threads in order to free memory for other processes/threads), wherein the N top execution threads are those existing execution threads that hold the most shared memory or hold shared memory for the longest period of time ([0005] select threads with the greatest amount of memory allocated). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Carbon-Ogden and Kesselman with Dunshea's teaching of selecting the process with the greatest memory allocated for suspending/killing to free memory, to improve efficiency and allow releasing by N top execution threads, wherein the N top execution threads are those existing execution threads that hold the most shared memory or hold shared memory for the longest period of time, in the method of Carbon-Ogden and Kesselman, as in the instant invention. The combination would have been obvious because applying the known method of freeing memory from the thread with the largest allocated memory, as taught by Dunshea, to the memory management method taught by Carbon-Ogden and Kesselman would yield expected results and improved memory utilization efficiency.

As per claim 4, Carbon-Ogden teaches wherein MU + MR ≤ MTH ([0097] memory usage of process; [0100] safe amount of memory available). Kesselman teaches the remaining claim element of MR (col. 4, lines 39-43: server receives request to allocate memory for a process associated with a class A-E; col. 3, lines 60-61: CRi, minimum memory allocation for the respective quality of service).

As per claim 5, Carbon-Ogden teaches wherein MU + MR > MTH ([0097] memory usage of process; [0100] safe amount of memory available).
Kesselman teaches the remaining claim element of MR (col. 4, lines 39-43: server receives request to allocate memory for a process associated with a class A-E; col. 3, lines 60-61: CRi, minimum memory allocation for the respective quality of service).

As per claim 6, Carbon-Ogden teaches wherein MU + MR ≤ MC ([0031] monitor the usage of RAM by processes; [0096] total memory; [0098] memory usage of one or more processes executing). Kesselman teaches the remaining claim elements of wherein the requesting execution thread is a new execution thread (col. 2, lines 45-55: issues a request 202 to allocate memory for a process) and wherein MU + MMIN ≤ MC (col. 3, lines 23-35: accepts request to allocate memory for a process based on current amount of free memory, minimum memory allocated for the quality of service class, total amount of memory).

As per claim 7, Carbon-Ogden teaches wherein MU + MR ≤ MC, and wherein the requesting execution thread is not a new execution thread ([0031] monitor the usage of RAM by processes; [0096] total memory; [0098] memory usage of one or more processes executing).

As per claim 10, Carbon-Ogden teaches wherein MU + MR > MC ([0031] monitor the usage of RAM by processes; [0096] total memory; [0098] memory usage of one or more processes executing).

As per claim 13, Dunshea teaches wherein the N top execution threads are those existing execution threads that hold the most shared memory ([0005] select threads with the greatest amount of memory allocated).

Claim 17 recites a computer program product with elements similar to claim 3; therefore, it is rejected for the same rationale. Claim 20 recites the computer system with elements similar to claim 3; therefore, it is rejected for the same rationale.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Carbon-Ogden in view of Kesselman, and further in view of Dunshea, as applied to the claims above, and further in view of Maeda (US 2019/0384658 A1).

As per claim 14, Dunshea teaches wherein the N top execution threads are those existing execution threads that hold shared memory for the longest period of time ([0005] suspend/kill threads in order to free memory for other processes/threads, select threads with the greatest amount of memory allocated). Carbon-Ogden, Kesselman, and Dunshea, in combination, do not specifically teach threads holding memory for the longest period of time. Maeda, however, teaches threads holding shared memory for the longest period of time ([0005] frees the memory after the maximum lifetime expires). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Carbon-Ogden, Kesselman, and Dunshea with Maeda's teaching of freeing memory from a process after its maximum lifetime expires, to improve efficiency and allow releasing by N top execution threads that hold shared memory for the longest period of time in the method of Carbon-Ogden, Kesselman, and Dunshea, as in the instant invention. The combination would have been obvious because applying the known method of freeing memory from a thread whose maximum lifetime has expired, as taught by Maeda, to the memory management method taught by Carbon-Ogden, Kesselman, and Dunshea would yield expected results and improved memory utilization efficiency.

Examiner's Note

The cited paragraphs in the references as applied to the claims above are provided for the convenience of the applicant(s). Although the specified citations are representative of the teachings of the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well.
It is respectfully requested that the applicant, in preparing responses, fully consider each of the references in its entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or discussed by the examiner.

Allowable Subject Matter

Claims 8-9 and 11-12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Authorization for Internet Communication: Applicant is encouraged to submit an authorization to communicate with the Examiner via the internet by making the following statement (MPEP 502.03): "Recognizing that internet communications are not secure, I hereby authorize the USPTO to communicate with the undersigned and practitioners in accordance with 37 CFR 1.33 and 37 CFR 1.34 concerning any subject matter of this application by video conferencing, instant messaging, or electronic mail. I understand that a copy of these communications will be made of record in the application file." Please note that the above statement can only be submitted via Central Fax (not the Examiner's fax), regular postal mail, or EFS-Web using form PTO/SB/439.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABU GHAFFARI, whose telephone number is (571) 270-3799. The examiner can normally be reached Monday-Thursday, 14:00-15:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aimee Lee, can be reached at 571-272-4169.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/ABU ZAR GHAFFARI/
Primary Examiner, Art Unit 2195

Prosecution Timeline

Jun 28, 2023: Application Filed
Dec 17, 2025: Non-Final Rejection, §103 and §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602264: DATA CENTER WITH ENERGY-AWARE WORKLOAD PLACEMENT (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596562: TECHNOLOGIES TO ALLOCATE RESOURCES TO START-UP A FUNCTION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596559: TECHNIQUES FOR PERFORMING CONTINUATION WORKFLOWS BY TERMINATING VIRTUAL MACHINE BASED ON RESPONSE TIME EXCEEDING THRESHOLD (granted Apr 07, 2026; 2y 5m to grant)
Patent 12585493: AUTOMATED SYNTHESIS OF REFERENCE POLICIES FOR RUNTIME MICROSERVICE PROTECTION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579046: FIRMWARE-BASED ORCHESTRATION OF ARTIFICIAL INTELLIGENCE (AI) PERFORMANCE PROFILES IN HETEROGENEOUS COMPUTING PLATFORMS (granted Mar 17, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants; study what changed in each case to get past this examiner.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 99% (+47.3%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 676 resolved cases by this examiner. Grant probability is derived from the career allow rate.
