Prosecution Insights
Last updated: April 19, 2026
Application No. 19/076,157

MEMORY MANAGEMENT DEVICE AND MEMORY MANAGEMENT METHOD

Non-Final OA — §102, §103
Filed
Mar 11, 2025
Examiner
GANGER, LAUREN ZANNAH
Art Unit
2156
Tech Center
2100 — Computer Architecture & Software
Assignee
MediaTek Inc.
OA Round
1 (Non-Final)
82%
Grant Probability
Favorable
1-2
OA Rounds
2y 9m
To Grant
94%
With Interview

Examiner Intelligence

Grants 82% — above average
82%
Career Allow Rate
221 granted / 271 resolved
+26.5% vs TC avg
+12.0%
Interview Lift
Moderate lift among resolved cases with interview
Typical timeline
2y 9m
Avg Prosecution
7 currently pending
Career history
278
Total Applications
across all art units

Statute-Specific Performance

§101
10.8%
-29.2% vs TC avg
§103
44.1%
+4.1% vs TC avg
§102
22.5%
-17.5% vs TC avg
§112
9.7%
-30.3% vs TC avg
Black line = Tech Center average estimate • Based on career data from 271 resolved cases

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 4-7, 10-13, and 16-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Stabrawa et al. in US Patent Application Publication № 2023/0008874, hereinafter called Stabrawa.

In regard to claim 1, Stabrawa teaches a memory management device, comprising: a processor; and a non-transitory computer-readable storage, coupled to the processor, and configured to store a memory management program comprising instructions that, when executed by the processor, cause the processor to perform an execution of an allocator module and an execution of a resolver module; wherein the execution of the allocator module causes the processor to: receive a memory allocation request from an application program, wherein the memory allocation request includes a specified size (“The memory pool may be managed by the management server 120. The management server 120, using various components, may provision external primary memory to the client 130 or multiple clients that request external memory allocation.” Paragraph 0036; “For example, region access logic requests received by the region access logic 212 may include requests to create the region 214…” paragraph 0113; further, “The request may include, for example, a starting memory offset, a size of memory allocation, a starting memory location, a number of units of memory to access, or any other attribute relating to the requested memory access operation.” Paragraph 0078), allocate a buffer with the specified size in a memory in response to the memory allocation request (“Upon receiving a request to create the region 214, the region access logic 212 may allocate a portion of the memory 210 included in the memory appliance 110 for the region 214.” Paragraph 0115), and generate a buffer pointer for the buffer, and return the buffer pointer to the application program (i.e. in a memory-allocation data structure, “The memory-allocation data structure(s) may include other information related to the portion(s). For example, the memory-allocation data structure(s) may include addresses, offsets, and or sizes which describe the position and/or size of the portion(s) within the region 214 and/or the external memory allocation.” Paragraph 0212), wherein the buffer is a device buffer allocated for one or more accelerator devices (i.e. a hardware client component, as in paragraph 0365), or a CPU buffer allocated for a CPU (i.e. local primary memory; note that “When any virtual memory address is accessed by the CPU, the virtual memory address may reference a portion of physical RAM in primary memory and/or memory of the I/O device. Thus, instructions executed by the CPU that access the virtual memory address may also access the I/O device.” Paragraph 0345; alternatively or additionally, “As illustrated in FIG. 16A, an address space 1602 may include, associate with, and/or map to a first mapped portion 1604, such as with a mapping of local primary memory.” Paragraph 0374; further, “Alternatively or in addition, the address space 1602 may include, associate with, and/or map to a second mapped portion 1610. One or more data structures may be updated, modified, and/or replaced to indicate that the portion of the address space 1602 corresponding to the second mapped portion 1610 includes and/or is mapped to external primary memory.” Paragraph 0375); and wherein the execution of the resolver module causes the processor to: receive a memory access request from the application program, wherein the memory access request includes a search pointer (i.e. an address, “A page fault and/or other event, such as a request to access a portion of an address space and/or file, may occur (1302), triggering a flow such as the example illustrated in FIG. 13A.” Paragraph 0304; note that an address is expressly taught in paragraph 0306), determine, based on the buffer pointer and the specified size, whether the search pointer corresponds to the device buffer or the CPU buffer (“Upon allocating the portion of local primary memory (1304), the client logic 312 and/or another logic may determine (1306) whether data for the portion is in external primary memory. In an example implementation, the determination (1306) may be made by evaluating whether or not an address associated with the triggering event (1302) is associated with a portion of external primary memory, such as a portion of the region 214 and/or external memory allocation.” Paragraph 0306; note that the local memory is the alternative, “If data for the portion is not in external primary memory, the portion of local primary memory may be initialized (1308).” Paragraph 0306), and allow access to the device buffer if the search pointer corresponds to the device buffer (“If data for the portion is in external primary memory, the data may be written (1310) to the portion of local primary memory, such as by copying the data from the portion of the region 214 and/or external memory allocation. In some examples, the data may be copied using client-side memory access, via one or more RDMA operations, and/or via any other one or more suitable data transfer operations.” Paragraph 0306; note that this access to the external memory is only performed if the portion is resident there and accordingly meets the broadest reasonable interpretation of the claim), and allow access to the CPU buffer if the search pointer corresponds to the CPU buffer (“If data for the portion is not in external primary memory, the portion of local primary memory may be initialized (1308).” Paragraph 0306).
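The claim-1 scheme the examiner maps above — an allocator that records each buffer's pointer, size, and kind, and a resolver that decides whether a search pointer falls in a device buffer or a CPU buffer — can be illustrated with a minimal sketch. All names here are hypothetical; neither the application nor Stabrawa discloses this code.

```python
class MemoryManager:
    """Toy model of the claimed allocator/resolver pair (illustrative only)."""

    def __init__(self):
        # Maps buffer pointer (base address) -> (specified size, buffer kind).
        self._buffers = {}
        self._next_addr = 0x1000

    def allocate(self, size, kind="cpu"):
        """Allocator module: reserve a buffer of the specified size and
        return its buffer pointer. kind is "device" for accelerator
        (GPU/NPU/VPU/DMA/DLA) buffers, "cpu" for CPU buffers."""
        ptr = self._next_addr
        self._next_addr += size
        self._buffers[ptr] = (size, kind)
        return ptr

    def resolve(self, search_ptr):
        """Resolver module: decide, from the stored buffer pointers and
        sizes, which buffer (if any) the search pointer falls in."""
        for base, (size, kind) in self._buffers.items():
            if base <= search_ptr < base + size:
                return kind  # access to this buffer would be allowed
        return None  # pointer does not correspond to any tracked buffer
```

For example, after `dev = mm.allocate(4096, kind="device")`, a call `mm.resolve(dev + 128)` returns `"device"`. The linear scan here is only for clarity; claims 2-3 recite the tree-based lookup that replaces it.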
In regard to claims 7 and 13, they are substantially similar to claim 1, and accordingly are rejected under similar reasoning.

In regard to claim 4, Stabrawa further teaches that the execution of the resolver module further causes the processor to determine whether the memory access request is issued by a process that is authorized to access the buffer, and deny access to the buffer if the memory access request is determined to be issued by an unauthorized process (“Alternatively or in addition, the region access logic 212 may control access based on an authentication mechanism, including but not limited to a password, a key, biometrics, or a cryptographic authentication.” Paragraph 0080; alternatively or additionally, “Permissions may include data read access, data write access, metadata read access, metadata write access, destroy access, and/or any other capability that may be selectively granted and/or revoked to a client, a memory appliance, and/or a management server.” Paragraph 0066).

In regard to claims 10 and 16, they are substantially similar to claim 4, and accordingly are rejected under similar reasoning.

In regard to claim 5, Stabrawa further teaches that the buffer pointer indicates a starting address of the buffer, an ending address of the buffer, or an offset address corresponding to a fixed location of the buffer (“The memory-allocation data structure(s) may include other information related to the portion(s). For example, the memory-allocation data structure(s) may include addresses, offsets, and or sizes which describe the position and/or size of the portion(s) within the region 214 and/or the external memory allocation.” Paragraph 0212); and wherein the search pointer indicates a memory address requested by the application program (“In an example implementation, the determination (1306) may be made by evaluating whether or not an address associated with the triggering event (1302) is associated with a portion of external primary memory, such as a portion of the region 214 and/or external memory allocation.” Paragraph 0306).

In regard to claims 11 and 17, they are substantially similar to claim 5, and accordingly are rejected under similar reasoning.

In regard to claim 6, Stabrawa further teaches that the device buffer comprises a graphics processing unit (GPU) buffer, a neural processing unit (NPU) buffer, a vision processing unit (VPU) buffer, a direct memory access (DMA) buffer, or a deep learning accelerator (DLA) buffer; and each of the one or more accelerator devices comprises a GPU, an NPU, a VPU, a DMA, or a DLA (“In a first example, the hardware client component may respond to attempts of the hardware application component to access physical addresses by accessing data included in the memory and/or cache of the hardware client component.” Paragraph 0366, wherein “A hardware client component may be and/or may include a processor, a GPU, an MMU, an IO-MMU, a communication interface, such as the one or more communication interfaces 330, a direct memory access controller, an FPGA, an ASIC, a chipset, a compute module, a hardware accelerator module, a hardware logic, a memory access transaction translation logic, any other hardware component, and/or a combination of multiple hardware components.” Paragraph 0365).

In regard to claims 12 and 18, they are substantially similar to claim 6, and accordingly are rejected under similar reasoning.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2, 3, 8, 9, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Stabrawa as applied to claim 1, 7, or 13 above, as applicable, and further in view of Beard et al. in US Patent Application Publication № 2019/0018786, hereinafter called Beard.

In regard to claim 2, Stabrawa teaches the memory management device as claimed in claim 1, as above. Stabrawa further teaches that the execution of the allocator module further causes the processor to store, in a data structure (i.e. memory-allocation data structures, as in paragraph 0209), mapping entries between a plurality of the buffer pointers of device buffers, the device buffers, and the specified sizes associated with the device buffers (“The memory-allocation data structure(s) may include other information related to the portion(s). For example, the memory-allocation data structure(s) may include addresses, offsets, and or sizes which describe the position and/or size of the portion(s) within the region 214 and/or the external memory allocation…” paragraph 0212); and wherein the execution of the resolver module further causes the processor to: check whether the data structure contains at least one mapping entry, and determine that the search pointer corresponds to the CPU buffer if the data structure does not contain any mapping entry (“The collection of allocated portions 818 may include and/or may reference zero or more allocated portion data structures 832. For example, the collection of allocated portions 818 may include a red-black tree of zero or more allocated portion data structures 832. Other examples may involve using any other collection data structure to include and/or reference zero or more allocated portion data structures 832.” Paragraph 0224; further, “In some examples, the allocated portion data structure 832 may not contain any collection-related data 836, such as if there is only one allocated portion” paragraph 0228). However, Stabrawa fails to expressly teach to identify, from the data structure, a largest one of the buffer pointers that is not larger than the search pointer included in the memory access request if the data structure contains at least one mapping entry, and determine that the search pointer corresponds to the device buffer if the search pointer is not larger than a sum of the identified buffer pointer and the specified size associated with the identified buffer pointer, and determine that the search pointer corresponds to the CPU buffer otherwise.

Beard teaches that the execution of the allocator module further causes the processor to store, in a data structure (i.e. RTB), mapping entries between a plurality of the buffer pointers of device buffers, the device buffers, and the specified sizes (i.e. address ranges) associated with the device buffers (“In FIG. 1, address translation is carried out by a so-called range table buffer (RTB) 105, 115. This performs address translation between a virtual memory address in the virtual memory address space and an output memory address in the output (real) address space.” Paragraph 0037, wherein “The data 220, 230 together define a range of virtual memory addresses between respective virtual memory address boundaries in the virtual memory address space. In the example of FIG. 2a, the range of virtual memory addresses is between the address represented by Base VA up to and including the address represented by Base VA+Range.” Paragraph 0056; note is taken that many different ways of calculating the address range using the Base VA and range values are described in paragraph 0056; however, in each case the range value is used to calculate a range of addresses between boundaries, and the Base VA+Range implementation is expressly taught in paragraph 0056); and wherein the execution of the resolver module further causes the processor to: identify, from the data structure, a largest one of the buffer pointers that is not larger than the search pointer included in the memory access request if the data structure contains at least one mapping entry (“If the address tag is less than the address tag at the root the left child is selected. If the address tag is greater than the address tag at the root the right child is selected. This search process continues for successive nodes until a match is found or a leaf node is reached. If a leaf node (a node with no child node) is reached without match being found, the line associated with the searched address tag is not cached in the system” paragraph 0117), and determine that the search pointer corresponds to the device buffer if the search pointer is not larger than a sum of the identified buffer pointer and the specified size associated with the identified buffer pointer (“The indicators may be start and end addresses (or start and end address tags), or a start address and a length, for example. The indicators enable minimum (MIN) and maximum (MAX) address tag values of the range to be determined. Optionally, the request may also indicate the desired coherence state for data in the range. Alternatively, a default state (such as 'Invalid') may be assumed. The tag search structure is then used to identify address tags, in the range {MIN, MAX}, that match address tags in one or more caches of the system.” Paragraph 0119; further, “If no match is found, as depicted by the negative branch from decision block 1206, the tree is searched at block 1208 to find the smallest stored tag that is greater than the MIN value.” Paragraph 0120; note that the range is expressly taught in the form of VAtest+offset in at least paragraph 0057); and determine that the search pointer corresponds to the CPU buffer otherwise (“Alternatively, a default state (such as 'Invalid') may be assumed” paragraph 0119; note further that “If a leaf node (a node with no child node) is reached without match being found, the line associated with the searched address tag is not cached in the system” paragraph 0117).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to modify the local and external memory allocation system taught by Stabrawa to include the determination of an address being within an allocated range stored in a metadata tree in a memory allocation and cache system, as taught by Beard.
It would have been obvious because it represents the application of a known technique (i.e. determining memory coherency using a balanced binary search tree for managing allocations, as taught by Beard in at least paragraphs 0114 and 0117, including the matching of addresses to ranges based on a base plus offset, as taught in at least paragraphs 0057 and 0119, and the mapping of virtual to physical addresses, as taught in at least paragraph 0057) to a known system (i.e. the system for mapping external memory regions, as taught by Stabrawa in at least paragraph 0306, which includes a red-black tree of zero or more allocations, as taught in at least paragraph 0224, the mapping of virtual to physical addresses, as taught in at least paragraph 0227, and the use of local memory as a cache, as taught in at least paragraph 0247) ready for improvement to yield predictable results (i.e. the balanced red-black tree may be searched for address ranges using minimal storage and rapid searching). One would have been motivated to do so in order to minimize the size of memory needed to store the mapping information, as taught by Beard in at least paragraph 0114, and to allow the tree to be searched rapidly, as taught by Beard in at least paragraph 0115.

In regard to claims 8 and 14, they are substantially similar to claim 2, and accordingly are rejected under similar reasoning. [Examiner notes that the limitation “… if the data structure contains at least one mapping entry…” which is present on line 12 of claim 2 appears to be omitted in claims 8 and 14; however, the claims are sufficiently similar to claim 2 that the instant rejection applies under analogous reasoning.]

In regard to claim 3, Stabrawa and Beard teach the memory management device as claimed in claim 2, as above. Stabrawa further teaches that the data structure is constructed as a red-black tree (“Collection(s) of portions may be organized, such as with an array, a list, a tree, a red-black tree, a B-tree, a B+ tree, a hash table, a distributed hash table, and/or with any other collection data structure.” Paragraph 0210). However, Stabrawa fails to expressly teach that the execution of the resolver module further causes the processor to perform a binary search on the red-black tree to identify the largest one of the buffer pointers that is not larger than the search pointer included in the memory access request.

Beard teaches that the execution of the resolver module further causes the processor to perform a binary search on the tree to identify the largest one of the buffer pointers that is not larger than the search pointer included in the memory access request (“If no match is found, as depicted by the negative branch from decision block 1212, the tree is searched at block 1214 to find the largest stored tag that is smaller than the MAX value. The search range is updated with this value at block 1216 and flow continues to block 1218” paragraph 0121).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to modify the local and external memory allocation system taught by Stabrawa to include the determination of an address being within an allocated range stored in a metadata tree in a memory allocation and cache system, as taught by Beard. It would have been obvious because it represents the application of a known technique (i.e. determining memory coherency using a balanced binary search tree for managing allocations, as taught by Beard in at least paragraphs 0114 and 0117) to a known system (i.e. the system for mapping external memory regions, as taught by Stabrawa in at least paragraph 0306, which includes a red-black tree of zero or more allocations, as taught in at least paragraph 0224) ready for improvement to yield predictable results (i.e. the balanced red-black tree may be searched for address ranges using minimal storage and rapid searching). One would have been motivated to do so in order to minimize the size of memory needed to store the mapping information, as taught by Beard in at least paragraph 0114, and to allow the tree to be searched rapidly, as taught by Beard in at least paragraph 0115.

In regard to claims 9 and 15, they are substantially similar to claim 3, and accordingly are rejected under similar reasoning.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US Patent Application Publication № 2025/0123971 teaches a system which uses a tree structure to map virtual memory addresses and which includes multiple accelerators. US Patent Application Publication № 2004/0019711 teaches a system which determines addresses in a window to perform accelerated DMA.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Lauren Z Ganger whose telephone number is (571) 272-0270. The examiner can normally be reached 10:00 AM - 7:30 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ajay Bhatia, can be reached at (571) 272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AJAY M BHATIA/
Supervisory Patent Examiner, Art Unit 2156
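The range-resolution technique at the heart of the §103 rejection of claims 2 and 3 — find the largest stored buffer pointer not larger than the search pointer, then test the pointer against that buffer's base plus size — is a standard "floor" lookup. The sketch below uses a sorted list with `bisect` as a stand-in for the red-black tree the claims recite; the function name and data layout are illustrative, not drawn from the claims or either reference.

```python
import bisect

def classify_pointer(entries, search_ptr):
    """entries: list of (buffer_ptr, size) tuples for device buffers,
    sorted by buffer_ptr. Returns "device" or "cpu" per the claim-2 logic.

    A production implementation would use a balanced tree (e.g. a
    red-black tree, as in Stabrawa paragraph 0224) rather than a sorted
    list; bisect gives the same floor-lookup semantics for illustration.
    """
    if not entries:
        # No mapping entries -> search pointer corresponds to the CPU buffer.
        return "cpu"
    bases = [base for base, _ in entries]
    # Index of the largest buffer pointer not larger than search_ptr.
    i = bisect.bisect_right(bases, search_ptr) - 1
    if i < 0:
        # search_ptr is below every stored buffer pointer.
        return "cpu"
    base, size = entries[i]
    # Claim language: "device" if the search pointer is not larger than
    # the sum of the identified buffer pointer and its specified size.
    return "device" if search_ptr <= base + size else "cpu"
```

For example, with `entries = [(0x1000, 256)]`, `classify_pointer(entries, 0x1010)` returns `"device"` while `classify_pointer(entries, 0x2000)` returns `"cpu"`. Note that the claim's "not larger than a sum" wording, mirrored here with `<=`, admits the address one past the end of the buffer.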

Prosecution Timeline

Mar 11, 2025
Application Filed
Mar 26, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602395
APPARATUS AND METHOD FOR FILTERING VISUALIZATIONS FROM OR ACROSS DIFFERENT ANALYTICS PLATFORMS
2y 5m to grant Granted Apr 14, 2026
Patent 12596678
HYPERGRAPH DATA STORAGE METHOD AND APPARATUS WITH TEMPORAL CHARACTERISTIC AND HYPERGRAPH DATA QUERY METHOD AND APPARATUS WITH TEMPORAL CHARACTERISTIC
2y 5m to grant Granted Apr 07, 2026
Patent 12561341
REAL-TIME REPLICATION OF DATABASE MANAGEMENT SYSTEM TRANSACTIONS INTO A DATA LAKEHOUSE
2y 5m to grant Granted Feb 24, 2026
Patent 12547639
ENRICHING EVENT STREAMS WITH ENTITY DATA
2y 5m to grant Granted Feb 10, 2026
Patent 12541547
PROFILE-ENRICHED EXPLANATIONS OF DATA-DRIVEN MODELS
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
94%
With Interview (+12.0%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 271 resolved cases by this examiner. Grant probability derived from career allow rate.
