DETAILED ACTION
Claims 1-2, 10-11, 16-18, 20 are amended. Claims 1-20 are pending.
Priority: 10/16/2023 (FP)
Assignee: Samsung
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/20/2026 has been entered.
Claim Objections
1. Amended claim 10 is objected to for reciting a typo.
Amended claim 10 recites, ‘determining,…., whether a use node ???? in a tree …. by searching for the use node in the tree’. The word ‘exists’ is missing.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim(s) 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
1. Claim 5 is rejected for reciting a limitation with antecedent basis issues.
Claim 5 recites, ‘….a list….of a plurality of virtual areas comprised in the process address space’.
But amended claim 1 already introduces ‘a plurality of virtual areas’, as it recites, ‘….the process address space comprising a plurality of virtual areas’. Claim 5 should therefore recite ‘the plurality of virtual areas’.
2. Claim 10 is rejected for reciting a limitation with antecedent basis issues.
Claim 10 recites, ‘receiving an unmapping instruction… of data for a target virtual area’.
Then claim 10 also recites, ‘determining….whether a use node in a tree corresponding to a target virtual area….’. The second occurrence should recite ‘the target virtual area’.
Note: In the Remarks, the Applicant does not mention the relevant specification paragraph(s) that recite the amendments.
1. Amended claims 1 and 17 are rejected for reciting a limitation that is unclear, vague and indefinite.
Amended claim 1 recites, ‘wherein the unused node remains in the tree during the marking…., thereby a tree rebalancing due to added or deleted node is prevented’.
However, amended claim 1 also recites, ‘determining,….whether an unused node exists in a tree….by searching for the unused node in the tree’. This limitation and its ‘determining’ steps are not recited in the spec, and therefore it is unclear how it is determined that ‘an unused node’ exists in the tree.
As a result, the next limitation, ‘marking the unused node in the tree….’, fails because the existence of the ‘unused node’ in the tree has not been established.
Since ‘the unused node’ is not established to be in the tree, it is unclear how claim 1 determines that the tree does not need rebalancing, or that rebalancing is prevented.
Spec, Fig. 6 recites that if the unused node does not exist in the tree, a new node is added and the tree is rebalanced, which conflicts with the amendment. Accordingly, claim 1 is rejected for reciting a limitation that is unclear, vague and indefinite. Claim 17 has a similar issue.
2. Amended claim 10 is rejected for reciting a limitation that is unclear, vague and indefinite (the issue is similar to claims 1 and 17 but with a ‘use node’).
Amended Claim 10 recites, ‘where the use node remains in the tree during the marking….,thereby a tree rebalancing…. is prevented’.
However, amended claim 10 recites, ‘determining,….whether an use node exists in a tree….by searching for the use node in the tree’. This limitation and its ‘determining’ steps are not recited in the spec. Therefore it is unclear how it is determined that the ‘use node’ exists in the tree.
As a result, the next limitation, ‘marking the use node in the tree….’, fails because the ‘use node’ is not determined.
Since ‘the use node’ is not established to be in the tree, it is unclear how claim 10 determines that the tree does not need rebalancing, or that rebalancing is prevented. Accordingly, claim 10 is rejected for reciting a limitation that is unclear, vague and indefinite.
3. Amended claims 1, 3, 10 and 12 are rejected for reciting a limitation that is unclear, inconsistent and indefinite.
Amended claim 1 recites, ‘wherein the unused node remains in the tree during the marking…., thereby a tree rebalancing due to added or deleted node is prevented’.
This limitation, wherein if a node is not added or removed, the tree stays balanced, is well-known in the prior art because it is an inherent property of every tree, including the claimed BST. Since the recitation is an obvious outcome, it lacks technical merit and does not provide a patentable distinction.
Moreover, claim 3 recites that ‘the tree comprises a self-balancing BST’. The primary purpose of a self-balancing tree is to maintain balance precisely because nodes are added or deleted. But if a tree never changes, as recited in amended claim 1, it is not a self-balancing tree; it is just a static, balanced tree. Since claims 1 and 3 conflict with each other, they are rejected for reciting a limitation that is unclear, inconsistent and indefinite. Claims 10 and 12 have a similar issue.
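For illustration of the distinction drawn above, the following minimal sketch (all names are hypothetical and not drawn from the application) shows why flipping a used flag in place leaves the tree's shape, and therefore its balance, untouched, whereas only an actual insert or delete could trigger rebalancing:

```python
class Node:
    def __init__(self, addr, left=None, right=None):
        self.addr = addr      # start address of the node's virtual area
        self.used = False     # the claimed "use node" / "unused node" flag
        self.left = left
        self.right = right

def find_unused(node):
    """In-order search for any node whose flag is still unused."""
    if node is None:
        return None
    hit = find_unused(node.left)
    if hit is not None:
        return hit
    if not node.used:
        return node
    return find_unused(node.right)

def mark_as_used(node):
    node.used = True          # in-place flag flip: tree shape unchanged

# A small pre-populated, static tree of three virtual areas:
root = Node(0x2000, left=Node(0x1000), right=Node(0x3000))
root.left.used = True         # 0x1000 already holds mapped data

candidate = find_unused(root) # "determining" step: search, do not insert
mark_as_used(candidate)       # reuse: no add, no delete, so no rotation
```

Under this reading, a tree whose node set never changes never rebalances, which is the inherent property noted above.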
4. Amended claims 1 and 10 are rejected for reciting a limitation that is unclear, inconsistent and indefinite.
Amended Claim 2 recites, ‘wherein…. executing the application using the mapped target data in the physical memory’.
But Fig. 3, Para-0061 of the spec recites, ‘When an application is executed, the process address space 300 may be allocated to the application’.
Since amended claim 1 already recites, ‘receiving a mapping instruction to map target data onto a process address space allocated for a ….application’, it suggests that, because the application was not yet executing in claim 1, the process address space was not truly allocated. Accordingly, the ‘plurality of virtual areas’ do not exist as functional, usable memory regions, rendering claim 1 indefinite. Furthermore, the spec does not disclose virtual-to-physical address mapping/translation, so ‘mapped target data in physical memory’ is non-existent. Hence claim 1 is rejected for reciting a limitation that is unclear, inconsistent and indefinite.
Claims 10 and 11 also have a similar issue. Hence claim 10 is rejected for reciting a limitation that is unclear, inconsistent and indefinite.
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claim(s) 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Note: In the Remarks, the Applicant does not mention the relevant specification paragraph(s) that recite the amendment(s).
1. Amended claims 1, 10 and 17 are rejected for reciting a limitation that is unsupported by the spec.
Amended claim 1 recites, ‘the process address space comprising a plurality of virtual areas mappable to physical memory’. The spec does not recite this limitation, i.e., that the virtual areas are mappable to physical memory.
Spec, Fig. 3, Para-0061 recites, ‘The process address space 300 may include a plurality of virtual areas. The plurality of virtual areas may be mapped with data used by the application’. Here, the data used by the application is associated with virtual memory.
The claimed ‘mapping instruction’ maps data to virtual areas in the process/virtual address space. The ‘mapping instruction’ fails to define the page table entries or translation required to map the virtual areas to physical memory.
More importantly, the spec does not disclose an MMU or how the OS and MMU map virtual/logical addresses to physical addresses. The spec does not recite any virtual-to-physical memory translation. Failing to recite how the virtual areas are mapped to physical addresses, specifically the underlying mechanisms such as page tables and MMU operations, fails the written description requirement.
Therefore the recitation, ‘the process address space comprising…. virtual areas mappable to physical memory’, is an unverified hypothesis lacking written description support. Accordingly, amended claim 1 recites new matter and is rejected for the same reason. Claims 10 and 17 have the same issue.
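For context, the page-table translation the spec is said to omit is the conventional mechanism sketched below (a minimal, single-level illustration; names and values are hypothetical):

```python
PAGE_SIZE = 4096

# VPN -> PFN entries; in a real system the OS fills these and the MMU
# consults them on every memory access.
page_table = {0: 7, 1: 3}

def translate(vaddr):
    """Split a virtual address into (page, offset) and swap in the frame."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:
        raise KeyError("page fault: VPN %d is unmapped" % vpn)
    return page_table[vpn] * PAGE_SIZE + offset

paddr = translate(1 * PAGE_SIZE + 16)   # virtual page 1, offset 16
```

It is this kind of mechanism (page tables plus MMU consultation) whose absence from the spec is noted above.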
2. Amended claims 1 and 17 are rejected for reciting a limitation that lacks written description support.
Claim 1 recites, ‘determining,…. whether an unused node exists in a tree ….by searching for the unused node in the tree’. Nowhere does the spec recite this limitation or the steps included in the ‘determining….’.
More importantly, the spec does not recite the steps in actually determining if an ‘unused node’ exists in the tree. Spec, Paras: 0091-0092 recite, ‘a case in which a fast path is operated as an unused node to reuse exists in the tree is described…..In operation 607, when an unused node exists….’.
The spec assumes that an ‘unused node’ exists without reciting how it is actually found. Hence claim 1 does not fulfill the written description requirement. Accordingly, claim 1 is rejected for reciting a limitation that lacks written description support and for providing a false indication of a balanced tree. Claim 17 has the same issue.
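The Fig. 6 fast-path/slow-path flow, as characterized above, can be sketched as follows (a toy illustration with hypothetical names; a flat list stands in for the BST since the control flow, not the data structure, is the point):

```python
class AreaTree:
    """Toy stand-in for the claimed tree of virtual areas."""
    def __init__(self):
        self.nodes = []
        self.rebalances = 0
    def find_unused(self):
        return next((n for n in self.nodes if not n["used"]), None)
    def insert_new(self):
        node = {"used": False, "data": None}
        self.nodes.append(node)
        return node
    def rebalance(self):
        self.rebalances += 1

def map_target_data(tree, data):
    node = tree.find_unused()      # the "determining by searching" step
    if node is None:               # slow path: no unused node exists ...
        node = tree.insert_new()   # ... so a node is added ...
        tree.rebalance()           # ... and the tree is rebalanced
    node["used"] = True            # fast path: mark unused node as use node
    node["data"] = data
    return node

tree = AreaTree()
map_target_data(tree, "tensor-A")      # slow path: insert + rebalance
tree.nodes[0]["used"] = False          # later unmapped: node stays in tree
map_target_data(tree, "tensor-B")      # fast path: reuse, no rebalance
```

The sketch makes the rejection's point concrete: the fast path presupposes that the search finds an unused node, and only the slow path avoids that presupposition.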
3. Amended claim 1 is rejected for excluding a limitation because the exclusion is unsupported by the spec.
Claim 1, as submitted with the original disclosure, and Para-0005 of the spec recite, ‘wherein the tree manages the virtual area onto which the target data is mapped as the use node’.
But amended claim 1 canceled/excluded the above limitation.
By excluding the limitation, claim 1 suggests that, after mapping the target data onto the newly determined use node in the process address space, the tree which manages the nodes does not manage the virtual area corresponding to the use node, a configuration unsupported by the spec. Additionally, since the spec does not provide a reason to exclude the limitation, claim 1 recites new matter and is rejected for the same reason.
4. Amended claim 1 is rejected for reciting a limitation that is unsupported by the spec.
Amended claim 1 recites, ‘wherein the unused node remains in the tree during the marking without being deleted from the tree and without adding a new node to the tree, thereby a tree rebalancing due to added or deleted node is prevented’. Nowhere does the spec recite this limitation.
Claim 1 recites a mapping operation, and Para-0081 recites, ‘….by recycling the unused node without adding a node to the tree, the device ….may map data onto the tree without performing rebalancing on the tree.’ The spec recites only that a mapping operation does not add a node to the tree; nothing more.
Since the recitation is an unverified extrapolation, claim 1 recites new matter, and is rejected for reciting a limitation that is unsupported by the spec.
5. Amended claim 10 is rejected for reciting a limitation that is unsupported by the spec.
Amended claim 10 recites, ‘wherein the use node remains in the tree during the marking without being deleted from the tree and without adding a new node to the tree, thereby a tree rebalancing due to added or deleted node is prevented’. Nowhere does the spec recite this limitation.
Claim 10 recites an unmapping operation, and Para-0118 recites, ‘….since a node is not removed from the tree, rebalancing may not be performed’. The spec recites only that an unmapping operation does not remove/delete a node from the tree; nothing more.
Since the recitation is an unverified extrapolation, claim 10 recites new matter, and is rejected for reciting a limitation that is unsupported by the spec.
6. Amended claim 2 is rejected for reciting a limitation that is unsupported by the spec.
Amended claim 2 recites, ‘….executing the application using the mapped target data in the physical memory’. The spec does not recite this limitation.
Para-0077 of the spec recites, ‘When an application is executed, the electronic device may allocate a process address space to the application. The electronic device may map data onto the process address space. The electronic device may map data onto a plurality of virtual areas in the process address space. The electronic device may build a tree using ….nodes corresponding to the ….virtual areas onto which the data is mapped’.
As shown, the process address space and virtual areas are associated with virtual memory. The spec does not recite how the virtual areas are mapped to physical memory. Therefore ‘mapped target data in the physical memory’ is an unverified hypothesis. The spec does not recite that the ‘deep learning application’ is executed using the mapped target data in the physical memory. Hence claim 2 recites new matter and is rejected for the same reason.
7. Claims 4 and 19 are rejected for reciting a limitation that is unsupported by the spec.
Claims 4 and 19 excluded the limitation, ‘determining whether the unused node exists in the tree’. Nowhere does the spec recite the limitations in amended claims 4 and 19.
Claims 4 and 19, as submitted with the original disclosure, recited the sequence of steps recited in Paras-0008 and 0023 of the spec. Paras-0008 and 0023 recite a specific algorithm to ‘mark the unused node as use node’.
But in response to the previous Office action, claims 4 and 19 were amended to exclude the limitation and thereby recite a different sequence of steps, unverified by the spec.
The spec does not provide any written description support showing that any of the limitations in Paras-0008 and 0023 are optional, conditional or can be recited in any order. Ending the marking method with the search step recites a different and unsupported marking method. Accordingly, claim 4 recites new matter and is rejected for reciting a limitation that is unsupported by the spec. Claim 19 has the same issue.
8. Claim 13 is rejected for reciting a limitation that is unsupported by the spec.
Claim 13 excluded the limitation, ‘searching for the use node corresponding to the target virtual area in the tree’. Nowhere does the spec recite the limitations in amended claim 13.
Claim 13, as submitted with the original disclosure, recited the sequence of steps recited in Para-0017 of the spec. Para-0017 recites a specific algorithm to ‘mark the use node as an unused node’.
But in response to the previous Office action, claim 13 was amended to exclude the limitation and thereby recite a different sequence of steps, unverified by the spec.
The spec does not provide any written description support showing that any of the limitations in Para-0017 are optional, conditional or can be recited in any order. Accordingly, claim 13 recites new matter and is rejected for reciting a limitation that is unsupported by the spec.
9. Amended claim 10 is rejected for reciting a limitation that is unsupported by the spec.
Claim 10 recites, ‘determining,….., whether a use node in a tree…. by searching for the use node in the tree’.
The spec does not recite the limitation. Spec, Para-0014 recites, ‘a processor-implemented method includes: receiving an unmapping instruction to cancel mapping of data for a target virtual area in a process address space, in response…., marking a use node in a tree ….as an unused node, and unmapping data for the target virtual area, wherein the tree manages the unused node….’.
Based on Para-0014, the limitation adds information not found in the original description, drawings, or claims. Therefore claim 10 recites new matter.
Claim(s) 6, 15 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention.
1. Claims 6 and 15 are rejected for reciting a limitation where the spec fails to disclose how to make and use the disclosure without undue experimentation.
Claim 6 recites, ‘wherein the data comprises one or more tensors’.
And amended claim 1 recites, ‘receiving a mapping instruction to map target data onto a process address space allocated for a deep learning application’.
The data recited in claim 1 also includes tensors, yet the spec does not define ‘a tensor’. It is well-known in the prior art that deep learning relies heavily on tensors. A tensor is a mathematical object: a multi-dimensional array used as the basic data structure for storing, processing, and transforming data, generalizing scalars, vectors, and matrices.
The spec does not define ‘a mapping instruction’ or how it operates. Though ‘mapping’ refers to establishing a relationship/binding between two entities, such as sets of data or memory locations, neither the spec nor the figures disclose how the ‘mapping instruction’ maps multi-dimensional arrays/tensors onto the process/virtual address space. In fact, the mapping instruction provides no mapping definition or mechanism for placing any type of data, let alone multi-dimensional arrays, within the process address space.
Though spec Fig. 11, Para-0146 recites, ‘even if 1064 tensors are mapped onto respective virtual areas’, there is no written description of the actual ‘mapping’/binding between the multi-dimensional arrays and the virtual areas. How the multi-dimensional arrays are mapped onto a virtual area of a ‘use node’ in the tree lacks written description support. Though searching the tree is a key feature of the disclosure, neither the claims nor the spec recite how to search the tree for nodes mapped to multi-dimensional arrays.
Amended claim 10 recites, ‘receiving an unmapping instruction to cancel mapping of data for a target virtual area in a process address space allocated for a deep learning application’. Just as the ‘mapping instruction’ is deficient in mapping tensors to virtual areas, the ‘unmapping instruction’ is equally deficient because it does not provide any written description evidence to prove that it can unmap multi-dimensional arrays from virtual areas and help manage the tree.
Though spec, Para-0053 recites, ‘a tensor processing unit (TPU)’, there is no disclosure of the role the TPU plays in mapping or unmapping the tensors onto process memory space.
In essence, since the spec does not disclose how the ‘mapping instruction’ maps/binds the multi-dimensional arrays to the virtual areas/nodes, or how the ‘unmapping instruction’ unmaps/unbinds the multi-dimensional arrays from the virtual areas, searching the tree to find an unused node or a use node would require excessive trial and error to make and use the disclosure. Hence the mapping instruction and unmapping instruction do not provide adequate support for the deep learning application to use the tree as recited in claims 1, 10 and 17.
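For context only, a user-level ‘mapping’ of a tensor onto an anonymous memory region, the kind of mechanism the claims leave undefined, might be sketched as follows (all names are illustrative; this is not the applicant's disclosed method):

```python
import mmap
import array

# A rank-2 tensor (a matrix), the multi-dimensional array at issue:
tensor = [[1.0, 2.0, 3.0],
          [4.0, 5.0, 6.0]]

# Flatten row-major and copy the bytes into an anonymous mmap region,
# which stands in for one "virtual area" of the process address space.
flat = array.array("d", (x for row in tensor for x in row))
region = mmap.mmap(-1, len(flat) * flat.itemsize)
region.write(flat.tobytes())

# Reading the region back recovers the flattened tensor.
region.seek(0)
back = array.array("d")
back.frombytes(region.read(len(flat) * flat.itemsize))
```

Even this trivial sketch must commit to a layout (row-major flattening, element size, region length), which is exactly the kind of binding the rejection finds missing from the disclosure.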
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefore, subject to the conditions and requirements of this title.
Note: With regard to the § 101 rejection, the amendments do not overcome the rejection because they do not add ‘significantly more’ to the exception, i.e., they do not recite an inventive concept and/or a technical benefit. The amendments recite well-understood, routine information found in college textbooks. Based on the arguments, the rejection has been clarified and maintained.
Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claims 1, 10 are directed to a ‘processor-implemented method’, and claim 17 is directed to ‘an electronic device comprising one or more processors’. Hence they are directed to statutory categories, i.e., a process and a machine (Step 1: Yes).
Under revised Step 2A, Prong 1 of the eligibility analysis, it is necessary to evaluate whether the claim recites a judicial exception by referring to subject matter groupings articulated in 2106.04(a) of the MPEP. In consideration of the analysis, the claims recite an abstract idea. Claims 1, 10, 17 recite the abstract idea of a pre-populated, static tree, arranged in a hierarchical structure, with used and unused nodes. A tree is based on graph theory, a subfield of mathematics.
Independent claim 1 recites the abstract idea of: receiving a mapping instruction to map data onto a process address space allocated for a deep learning application; marking an unused node in the tree that manages the process/virtual address space, mapping the target data onto a virtual area in the process address space. Independent claims 10 and 17 have similar recitations of the abstract idea.
The nodes contain data, including tensor data. Dependent claim 3 recites that the tree is a self-balancing BST/binary search tree. Dependent claims 4-5, 13, 19-20 recite how to mark an unused node into a use node by searching the tree and adding data to it. Claim 8 recites marking a use node to an unused node by searching the tree and deleting its data.
Claims 1-20 recite a pre-populated, static BST comprising used and unused nodes and perform operations on the tree, such as searching the tree to locate the used and unused nodes. Since these operations involve well-known rules to identify a parent node or a child node, the operations performed on the BST are mental processes done easily with pencil and paper. In fact, the static BST invites an effortless pencil-and-paper approach to readily locate nodes, rather than a realistic assessment of the technical requirements for understanding how the BST was built in the first place. What further validates the abstract idea is that the underlying rules and design of the tree/BST remain the same regardless of whether the tree is used in a deep learning application or a simple sorting algorithm. Even if the claims require a computer, they may still be considered a mental process since the computer is used merely as a tool to perform the mental steps. See MPEP 2106.04(a)(2). Thus claims 1-20 recite an abstract idea.
Under revised Step 2A, Prong 2 of the eligibility analysis, if it is determined that the claims recite a judicial exception, it is then necessary to evaluate whether the claims recite additional elements that integrate the judicial exception into a practical application of that exception. In this case, claims 1-20 recite additional elements such as ‘process address space’ and ‘virtual areas’. For example, claims 2, 11, 18 recite, ‘the tree manages the process address space using a plurality of use nodes corresponding to a plurality of virtual areas in which data is mapped onto the process address space and one or more unused nodes that do not correspond to the plurality of virtual areas’. Here, the tree managing the process/virtual address space with virtual areas does not provide significantly more than the judicial exception because the step is well-understood, routine, and conventional activity previously known to the industry of application data management.
Furthermore, the claims recite the additional elements as virtual areas, without supporting underlying hardware. The claims do not recite any virtual to physical memory mapping or address translation. Though the data includes tensors, no mechanism is specified to map tensors to virtual memory, leaving the mapping incomplete. Hence the additional elements amount to mere software as they do not represent any structural components of the electronic device. They merely comprise the software for performing the BST operations. Based on Recentive Analytics v. Fox (2025), training a deep learning model without specific improvements to the model architecture or hardware utilization, is deemed patent-ineligible. Claims 1-20 are drawn to software per se. See MPEP 2106.01 (I).
Though the spec recites, ‘an electronic device and method with efficient memory management’, claims 2, 11, 18 merely recite software instructions unsupported by hardware limitations. They do not recite any mechanism that shows how virtual memory is mapped to physical memory, tracked and/or managed.
In essence, the additional elements, individually and in combination, do not integrate the exception into the deep learning application or any application. Due to the lack of virtual-to-physical memory mapping, the additional elements do not integrate the BST into the deep learning application. They do not include specific, non-generic improvements to the tree structure itself that optimize the deep learning application. This is because they are merely used to apply the abstract idea using a generic processor, as defined in MPEP 2106.04(d).
Claims 1, 10 and 17 recite the step, ‘wherein the unused/used node remains in the tree during the marking without being deleted from the tree and without adding a new node to the tree, thereby a tree rebalancing….is prevented’. Since the recitation is an inherent feature of every tree, including a BST, it is considered insignificant extra-solution activity. Extra-solution activities are incidental to the primary method and are merely a nominal or tangential addition to the claim. See MPEP 2106.05(g).
Under Step 2B of the eligibility analysis, if it is determined that the claims recite a judicial exception that is not integrated into a practical application of that exception, it is then necessary to evaluate the additional elements individually and in combination to determine whether they provide an inventive concept. In this case, claim 3 recites, ‘the tree is a self-balancing binary search tree’. The claimed BST is a binary tree, well-known in graph theory and easily found in a college textbook. The claim 13 recitation, ‘searching for the use node corresponding to the target virtual area in the tree……..and in response to the depth of the searched use node not exceeding the threshold depth, marking the use node as an unused node’, is a well-known concept in BST self-balancing, found in college textbooks on graph theory. The claims do not focus on improvements to the BST itself, such as reciting a specialized tree-balancing algorithm that reduces memory usage in the deep learning application or any application. Hence claims 3 and 13 do not provide significantly more than the judicial exception.
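The claim 13 depth-threshold condition quoted above can be sketched as follows (a minimal illustration with hypothetical names): a plain BST search reports the depth of the hit, and the use node is marked back to unused only when that depth does not exceed the threshold.

```python
def search_with_depth(node, addr, depth=0):
    """Plain BST search that also reports the depth of the hit."""
    if node is None:
        return None, depth
    if addr == node["addr"]:
        return node, depth
    child = node["left"] if addr < node["addr"] else node["right"]
    return search_with_depth(child, addr, depth + 1)

def unmap(root, addr, threshold_depth):
    node, depth = search_with_depth(root, addr)
    if node is not None and depth <= threshold_depth:
        node["used"] = False       # mark use node as unused: no deletion
        return True
    return False                   # deeper nodes would be handled otherwise

# A two-node tree: the target virtual area sits at depth 1.
leaf = {"addr": 0x3000, "used": True, "left": None, "right": None}
root = {"addr": 0x2000, "used": True, "left": None, "right": leaf}
```

As the sketch shows, the condition is an ordinary depth comparison on an ordinary BST search, consistent with the characterization above that it is well-known material.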
Since the claims do not recite how the tree and the additional elements, considered individually or in combination, improve the functioning of the electronic device or provide a clear technological improvement, the claims do not provide an inventive concept. The claims amount to no more than applying the abstract idea using a generic computer. For court cases, please see at least: Uniloc USA, Inc. v. Med. Info.Tech., Inc., No. 6:16-cv-00463 (E.D. Tex. Mar. 30, 2017), Visual Memory LLC v. NVIDIA Corp., No. 1:15-cv-00789, Op. at 7, 14 (D. Del. May 27, 2016).
Hence independent claims 1, 10 and 17 recite limitations of the abstract idea and are directed to ineligible subject matter. Dependent claims 2-9, 11-16 and 18-20, also being ineligible, do not aid in the eligibility of their respective parent claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6-12 and 15-18 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Kwon et al. (20150193354) in view of Edwards et al. (20240168830) and Hyland et al. (20030009474).
As per Claim 1, Kwon discloses a processor-implemented method (Kwon, [Fig. 1: Processor 200]; [0005 - A memory management method of an operating system for a nonvolatile main memory system, and provide a memory mapping method that enables an application program to quickly access a file through memory mapping]) comprising:
receiving (Kwon, [0056 – In Fig. 1, NVM controller 100 communicates with the processor 200 through a processor channel to receive a command and an address to receive data]) a mapping instruction (Kwon, [0097 - A system call such as mmap]) to map target data onto a process address space (Kwon, [Fig. 15]; [0115 - The user area 310 is a space of a memory accessed by the user by using an application and includes user buffer 315 and library buffer 314]; [0151 – In Fig. 11, step S100, a virtual area of a process is allocated due to a command or a system call of a processor, and the processor applies a mapping command for mapping a virtual address of the virtual area to a physical address of a file page]) allocated for an application (Kwon, [0005 - Provide a memory mapping method that enables an application program to quickly access a file/virtual area through memory mapping; Since Fig. 1, Para-0068 recites a multi-core processor which can be used for deep learning, mainly for data preprocessing, data loading, and small-scale training, the ‘application’ is considered equivalent to a deep learning application]);
the process address space comprising a plurality of virtual areas mappable (Kwon, [0066 - The page map table is formed in units of pages/virtual areas, and converts a logical address number/LAN into a physical page number/PPN]; [0067 - When data is stored in units of pages, each page may be referred to as a file page]) to physical memory (Kwon, [0021 - A nonvolatile memory/physical memory mapping method includes: performing a system call in order to access a file page that is used to operate a process stored in a kernel area of a nonvolatile main memory/NVM, wherein both the file page and process are stored in the kernel area of the NVM, mapping a physical address of the file page to a virtual address of a user area of the NVM, receiving, from a nonvolatile memory system, mapping table information that is updated by a processor and updating a mapping table in a TLB based on the updated mapping table information]),
determining (Kwon, [Fig. 13]), in response to the mapping instruction (Kwon, [0097 - A system call such as mmap]), whether an unused node exists in a tree that manages the plurality of virtual areas in the process address space (Kwon, [Fig. 15]; [0066 - The page map table is formed in units of pages/virtual areas]; [0057 - A memory-based file system/process address space resides in a memory space of the kernel area]) by searching ([See 112(a)]) for the unused node in the tree (Kwon, [0167 – In Fig. 13, step S370, a virtual area corresponding to an allocated virtual address of a process is searched. In step S380, characteristics of the searched virtual area, are detected. Next, in step S390, it is determined whether the virtual area may be re-used]),
marking the unused node in the tree as a use node to reuse (Kwon, [0167 – In Fig. 13, when the characteristics are identical, it is determined in step S390 that the virtual area may be re-used, thereby marking the unused node as a use node]),
thereby a tree rebalancing ([See 112(a)]) due to added or deleted node is prevented (Kwon, [0168 - In Fig. 13, since an overhead of deleting, allocating, and re-arranging a virtual area in the tree is reduced, a response speed of a memory system is increased, thereby implying that a rebalancing is prevented]);
mapping the target data onto a virtual area corresponding to the use node (Kwon, [0014 - The mapping includes: before allocating a virtual area to the nonvolatile main memory, detecting characteristics of the virtual area, determining whether the virtual area is to be re-used based on the characteristics, and if the virtual area may be re-used, storing a memory virtual address of a file system or the process in the virtual area]) in the process address space (Kwon, [0071 - When a file or data is stored/written in nonvolatile main memory 300, the file system/process address space organizes the file or the data]; [0065 - The controller includes a mapping manager including a page map table and the FTL, and a local memory to drive the mapping manager. The FTL functions to convert a logical address provided by the processor 200 into a physical address that is used by the flash memory/NVM]),
Kwon discloses a multi-core processor executing an application. It is well-known that a multi-core processor can be used for deep learning, mainly for data preprocessing, data loading, and small-scale training. Since GPUs are preferred for heavy model training due to their massive parallelization,
Edwards further clarifies,
receiving a mapping instruction (Edwards, [0400 - A WD fetch unit 3591 in accelerator integration slice 3590 fetches next WD 3584 which includes an indication of work to be done by one or more graphics processing engines of graphics acceleration module 3546]; [0464 – In Fig. 42, global thread dispatcher is configured to provide an instruction to a graphics core within a graphics processor]) to map target data (Edwards, [0675 - The API is to receive as input a tensor data type]; [0098 – In Fig. 1, mapping is to be used to store data of first tensor to be stored according to mapping. API receives as input information indicating where to store mapping. API is to receive as input a plurality of characteristics of first tensor. e.g., a shape of tensor, location in memory, size, data type, etc.]; [0491 - Tensor cores are configured to perform deep learning matrix arithmetic, such as convolution operations for neural network training and inferencing]) onto a process address space allocated (Edwards, [Fig. 35: application effective address space 3582+OS virtual address space 3585]; [0397 - An application effective address space 3582 within system memory 3514 stores process elements 3583]; [0095 – In Fig. 1, an API to cause information to be stored in a plurality of storage locations allocated to a first GPU]) for a deep learning application (Edwards, [0519 - In Fig. 50, application 5000 is an AI/ML application implemented using a deep learning framework such as MXNet, PyTorch, or TensorFlow]; [Figs. 43-54C]),
the process address space (Edwards, [0400 - MMU 3539 includes segment/page walk circuitry for accessing segment/page tables 3586 within OS virtual address space 3585]) comprising a plurality of virtual areas (Edwards, [0432 - Each processing cluster 3894 includes an MMU 3845 that is configured to map virtual addresses/virtual areas into physical addresses. MMU 3845 includes a set of page table entries/PTEs used to map a virtual address to a physical address of a tile and a cache line index]) mappable ([See 112(a)]) to physical memory (Edwards, [0407 – In Fig. 36A, graphics processor 3610 includes one or more MMUs 3620A-3620B, caches 3625A-3625B, and circuit interconnects 3630A-3630B. One or more MMU(s) 3620A-3620B provide for virtual to physical address mapping for graphics processor 3610]),
Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the deep learning application of Edwards into the memory mapping method of Kwon, for the benefit of including one or more tensor cores in processing cores. The tensor cores are configured to perform deep learning matrix arithmetic, such as convolution operations for neural network training and inferencing (Edwards, 0491).
It is well-known in the prior art that a BST does not require rebalancing if no node is added to or removed from the BST.
Accordingly Hyland clarifies,
wherein the unused node remains in the tree during the marking without being deleted from the tree and without adding a new node ([See 112(b)]) to the tree (Hyland, [0043 – In Fig. 7, tree 71 has a predetermined structure such that for each level, each node has two child nodes, thereby implying that the tree is balanced from the beginning]; [0045 - If the tree is structured/balanced as shown in Fig. 7 then Fig. 8 is used when searching for an element]; [Fig. 8: step 82, Element found, thereby implying that the unused node which is the search element is found in the tree; Since the spec does not recite the steps to find ‘unused node’, the citation is a valid interpretation]), thereby a tree rebalancing due to added or deleted node is prevented (Hyland, [0035 - Fig. 2 represents a balanced BST which is structured in software]; [0037 - It is desirable to achieve a balanced tree in order to minimize the number of operations required to achieve a match between the keys and the address data in the entry, thereby implying that rebalancing is prevented]);
Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the balanced BST of Hyland into the memory mapping method of Kwon, Edwards for the benefit of the BST staying balanced with the same depth of tree both to the left and to the right of the root node (Hyland, Para-0039).
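For context on the mechanism these mappings address, the mark-in-place reuse of tree nodes can be sketched as follows. This is a minimal illustrative sketch, not the claimed or cited implementation; the `Node`/`VirtualAreaTree` names, the plain (unbalanced) BST, and the in-order search for an unused node are all assumptions made for illustration. The point is that mapping and unmapping only flip a flag on an existing node, so the tree's shape never changes and no rebalancing is triggered.

```python
class Node:
    """One virtual area in the process address space."""
    def __init__(self, start_addr):
        self.start_addr = start_addr  # key: virtual-area start address
        self.in_use = False           # distinguishes a 'use node' from an 'unused node'
        self.left = None
        self.right = None

class VirtualAreaTree:
    """Minimal BST over virtual areas; nodes are flipped in place, never deleted."""
    def __init__(self):
        self.root = None

    def insert(self, start_addr):
        # Used only when the address space is first populated.
        def _ins(node, addr):
            if node is None:
                return Node(addr)
            if addr < node.start_addr:
                node.left = _ins(node.left, addr)
            else:
                node.right = _ins(node.right, addr)
            return node
        self.root = _ins(self.root, start_addr)

    def find_unused(self):
        # In-order walk for the first node whose in_use flag is clear.
        stack, node = [], self.root
        while stack or node:
            while node:
                stack.append(node)
                node = node.left
            node = stack.pop()
            if not node.in_use:
                return node
            node = node.right
        return None

    def size(self):
        def _cnt(n):
            return 0 if n is None else 1 + _cnt(n.left) + _cnt(n.right)
        return _cnt(self.root)

def map_target_data(tree):
    """Map path: flip an unused node to 'use' in place; nothing is inserted."""
    node = tree.find_unused()
    if node is not None:
        node.in_use = True
    return node

def unmap_target_data(node):
    """Unmap path: flip the node back to 'unused'; it stays in the tree for reuse."""
    node.in_use = False
```

Because both paths only toggle a flag, the node count (and hence the tree shape) is invariant, which is the sense in which rebalancing "due to added or deleted node" is avoided.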
As per Claim 2, the rejection of claim 1 is incorporated, and Kwon discloses,
wherein the tree (Kwon, [0166 - A virtual area corresponding to an allocated virtual address of a process is searched by using a binary data structure such as a red-black tree]) manages the process address space using a plurality of use nodes corresponding to a plurality of virtual areas (Kwon, [0163 - Fig. 13 shows a method of re-using a virtual area]; [Fig. 13: step S350-insert virtual area/use node, step S360-map virtual area, thereby implying a plurality of use nodes correspond to a plurality of virtual areas]) in which data is mapped onto the process address space (Kwon, [0071 – In Fig. 1, when a file or data is stored in a storage unit/NVM 300, the file system organizes the file or the data. The file system is used according to the OS executed in system 10]; [0057 - A memory-based file system resides in a memory space of the kernel area]) and one or more unused nodes that do not correspond (Kwon, [Fig. 13: step S330-Delete virtual area/node, implying unused node]) to the plurality of virtual areas (Kwon, [0014 - The mapping includes: before allocating a virtual area to the nonvolatile main memory, detecting characteristics of the virtual area, determining whether the virtual area that is already allocated is to be re-used based on the detected characteristics, and if determined that the virtual area may be re-used, storing a memory virtual address of a file system or the process in the virtual area]),
executing the application (Kwon, [0058 - A file page, a file system, etc., used by the processor 200 to execute a program/application loaded in the kernel area]; [0061 - Nonvolatile main memory 300 may drive/execute an operating system/OS, an application, a file system, a memory manager, and I/O drivers]) using the mapped target data in the physical memory (Kwon, [0059 - Data that is stored in the nonvolatile main memory 300 has a physical address and is mapped to a virtual address of a virtual area of a process]).
Edwards clarifies,
executing the application (Edwards, [0508 - Fig. 47 shows a CUDA implementation of software stack 4600 of Fig. 46. A CUDA software stack 4700, on which an application 4701 may be launched/executed, includes CUDA libraries 4703, a CUDA runtime 4705, a CUDA driver 4707, and a device kernel driver 4708]; [0510 - CUDA libraries 4703 include mathematical libraries, deep learning libraries, parallel algorithm libraries, and/or signal/image/video processing libraries, which parallel computing applications such as application 4701 utilize]) using the mapped target data (Edwards, [0136 - Fig. 18 shows a technique of generating a tensor map data structure]) in the physical memory (Edwards, [0486 - MMU 4418 provides an interface between GPC 4400 and a memory partition unit and MMU 4418 provides translation of virtual addresses into physical addresses, memory protection, and arbitration of memory requests]; [0432 - Each processing cluster 3894 includes an MMU 3845 that is configured to map virtual addresses into physical addresses]; [0064 - Fig. 57 shows how threads of a CUDA grid are mapped to different compute units of Fig. 56]);
Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the deep learning application of Edwards into the memory mapping method of Kwon, Hyland for the benefit of the application and software stack running on hardware, wherein the hardware includes one or more GPUs, CPUs, FPGAs, AI engines, and other types of compute devices that support a programming platform (Edwards, 0502).
As per Claim 3, the rejection of claim 1 is incorporated, and Kwon discloses,
wherein the tree comprises a self-balancing binary search tree (Kwon, [0166 - In Fig. 13, a virtual area corresponding to an allocated virtual address of a process is searched by using a binary data structure such as a red-black tree; It is well-known that a red-black tree is a self-balancing BST]).
As per Claim 6, the rejection of claim 1 is incorporated, and Kwon, Edwards, Hyland disclose,
wherein the data comprises one or more tensors (Edwards, [0100 - In Fig. 1, API 108 receives as input a layout of one or more tensors to be used to perform one or more image-to-column transformations]; [0070 - API 108 includes functions to generate more than one type of tensor descriptor, e.g., a first function to generate a tensor descriptor to be used with a tiled tensor mapping, and a second function to generate a tensor descriptor to be used with an image-to-column tensor mapping]; [0079 – In Fig. 2, inputs to tensor map API 240 include a location to store a generated tensor map data structure, a tensor data type, a tensor rank, a global address, global tensor dimensions, global strides, box dimensions, element strides, an interleave data structure, a swizzle data structure, an L2 promotion data structure, an out of bounds fill data structure, and/or other suitable inputs]).
Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the tensors of Edwards into the memory mapping method of Kwon, for the benefit of including one or more tensor cores in processing cores. The tensor cores are configured to perform deep learning matrix arithmetic, such as convolution operations for neural network training and inferencing (Edwards, 0491).
As per Claim 7, the rejection of claim 1 is incorporated, and Kwon discloses,
wherein virtual areas (Kwon, [Claim 10 - Mapping the newly allocated file page to a virtual area]) comprised in the process address space are managed by one or more groups (Kwon, [0172 - In Fig. 14, step S410, when an area to be mapped to operate a process exceeds an overall file offset, the number of file pages that are to be first allocated is calculated. In step S420, the file pages/virtual areas are allocated. And in step S430, the newly allocated file pages are newly appended to a file/group by being connected through a data structure of a file system/process address space]) in response to a grouping instruction (Kwon, [Fig. 14: MAP APPEND]) to group the virtual areas as the one or more groups (Kwon, [0172 – In Fig. 14, step S440, the newly allocated file pages are recognized as the file, and the existing mapping process of mapping a physical address and a virtual address of each allocated file page is performed]),
virtual areas comprised in one of the one or more groups (Kwon, [0175 - In Fig. 15, referring to the first picture, a file page request/arbitrary instruction may be transmitted four times in total to the library buffer 510 when a size of a file page is 1 KB]) are concurrently processed with respect to an arbitrary instruction (Kwon, [0175 - Library buffer 510 does not access a kernel area of the nonvolatile main memory during a system call whenever the file page request command is received, but may perform only one system call for the file page request command four times, for example, a 4 KB-file page, thereby implying concurrent processing]).
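The single-system-call batching cited from Kwon (Para-0175) can be illustrated with an ordinary memory mapping: one 4 KB mapping services four 1 KB requests, rather than entering the kernel once per request. This is a hedged sketch using Python's standard `mmap` module; the file contents, sizes, and slicing are illustrative only and do not reproduce Kwon's library buffer.

```python
import mmap
import tempfile

# Illustrative only: a single 4 KB mapping satisfies four 1 KB requests,
# instead of one system call per 1 KB file-page request.
with tempfile.TemporaryFile() as f:
    f.write(bytes(range(256)) * 16)  # 4 KB backing file (pattern repeats every 256 B)
    f.flush()
    with mmap.mmap(f.fileno(), 4096) as mm:  # the one "system call" for the group
        # Each 1 KB request is then served from the existing mapping.
        chunks = [mm[i * 1024:(i + 1) * 1024] for i in range(4)]
```

After the mapping is established, the four requests are plain memory reads; no further kernel entry is needed, which is the concurrency/amortization benefit the citation relies on.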
As per Claim 8, the rejection of claim 1 is incorporated, and Kwon discloses,
receiving an unmapping instruction (Kwon, [Fig. 13: MAP_REPLACE; In Linux mmap can also be used with MAP_FIXED to replace/unmap the mapping]) to cancel mapping of data for a target virtual area in the process address space (Kwon, [0167 - When a special flag such as MAP_REPLACE is detected during the checking of the flag]);
in response to reception of the unmapping instruction (Kwon, [Fig. 13: MAP_REPLACE; In Linux mmap can also be used with MAP_FIXED to replace the mapping]), marking another use node in the tree as another unused node (Kwon, [Fig. 13: step S390, No, thereby implying marking another use node as an unused node]);
unmapping data for the target virtual area (Kwon, [Fig. 13: step S330, Delete virtual area. In step S340, a virtual area is re-allocated, implying old data is unmapped and the newly allocated virtual area is clean. In step S350, the virtual area is re-inserted into the red-black tree to be managed, thereby implying that the virtual area corresponds to the unused node in the process address space]),
wherein the tree manages the other unused node to reuse in future (Kwon, [0166 - In Fig. 13, step S360, a virtual address of the virtual area may be mapped to a physical address of a file page which the process desires to access, thereby implying that the unused node is ready to reuse for future use]).
As per Claim 9, the rejection of claim 1 is incorporated, and Kwon discloses,
when executed by one or more processors (Kwon, [0068 – In Fig. 1, processor 200 may be a single core processor, or a multi-core processor. For example, the processor 200 may be a dual core processor, a quad-core processor, or a hexa-core processor]),
As per Claim 10, Kwon discloses a processor-implemented method (Kwon, [Fig. 1: Processor 200]; [0005 - A memory management method of an operating system for a nonvolatile main memory system, and provide a memory mapping method that enables an application program to quickly access a file through memory mapping]) comprising:
receiving (Kwon, [0056 – In Fig. 1, NVM controller 100 communicates with the processor 200 through a processor channel to receive a command]) an unmapping instruction (Kwon, [Fig. 13: MAP_REPLACE; In Linux mmap can also be used with MAP_FIXED to replace/unmap the mapping]) to cancel mapping of data for a target virtual area in a process address space (Kwon, [Fig. 15]; [0115 - The user area 310 is a space of a memory accessed by the user by using an application and includes user buffer 315 and library buffer 314; Here the virtual area associated with user buffer 315 can be unmapped]; [0167 - When a special flag such as MAP_REPLACE is detected during the checking of the flag]) allocated for a deep learning application (Kwon, [0005 - Provide a memory mapping method that enables an application program to quickly access a file/virtual area through memory mapping; Since Fig. 1, Para-0068 recites a multi-core processor which can be used for deep learning, mainly for data preprocessing, data loading, and small-scale training, the ‘application’ is equivalent to a deep learning application]),
the process address space comprising a plurality of virtual areas mappable (Kwon, [0066 - The page map table is formed in units of pages/virtual areas, and converts a logical address number/LAN into a physical page number/PPN]; [0067 - When data is stored in units of pages, each page may be referred to as a file page]) to physical memory (Kwon, [0021 - A nonvolatile memory/physical memory mapping method includes: performing a system call in order to access a file page that is used to operate a process stored in a kernel area of a nonvolatile main memory/NVM, wherein both the file page and process are stored in the kernel area of the NVM, mapping a physical address of the file page to a virtual address of a user area of the NVM, receiving, from a nonvolatile memory system, mapping table information that is updated by a processor and updating a mapping table in a TLB based on the updated mapping table information]),
determining, in response to the unmapping instruction (Kwon, [Fig. 13: MAP_REPLACE; In Linux mmap can also be used with MAP_FIXED to replace the mapping]), whether a use node in a tree corresponding to a target virtual area in the process address space (Kwon, [Fig. 15]; [0057 - A memory-based file system/process address space resides in a memory space of the kernel area]) by searching ([See 112(a),112(b)]) for the use node (Kwon, [0167 – In Fig. 13, step S370, a virtual area corresponding to the allocated virtual address of the process is searched in the red-black tree]) in the tree (Kwon, [0167 – In Fig. 13, after steps S370, S380, which determine that the virtual area characteristics is valid, in step S390, it is determined whether the virtual area may be re-used? No]),
the tree manages the plurality of virtual areas in the process address space (Kwon, [0169 – In Fig. 13, characteristics of a virtual area, a flag of a file, and a virtual address are managed by the tree]);
marking the use node in the tree as an unused node (Kwon, [Fig. 13: step S390, No, thereby implying marking the use node as an unused node]),
thereby a tree rebalancing ([See 112(a)]) due to added or deleted node is prevented (Kwon, [0168 - In Fig. 13, since an overhead of deleting, allocating, and re-arranging a virtual area in the tree is reduced, a response speed of a memory system is increased, thereby implying that a rebalancing is prevented]);
unmapping data for the target virtual area corresponding to the unused node in the process address space, (Kwon, [Fig. 13: step S330, Delete virtual area. In step S340, a virtual area is re-allocated, implying old data is unmapped and the newly allocated virtual area is clean. In step S350, the virtual area is re-inserted into the red-black tree to be managed, thereby implying that the virtual area corresponds to the unused node in the process address space]),
wherein the tree manages the unused node to reuse in future (Kwon, [0166 - In Fig. 13, step S360, a virtual address of the virtual area may be mapped to a physical address of a file page which the process desires to access, thereby implying that the unused node is ready to reuse for future use]).
Kwon discloses a multi-core processor executing an application. It is well-known that a multi-core processor can be used for deep learning, mainly for data preprocessing, data loading, and small-scale training. Since GPUs are preferred for heavy model training due to their massive parallelization,
Edwards further clarifies,
receiving an unmapping instruction (Edwards, [0594 - cudaFree(d_A); CUDA calls to free memory for vector A are migrated to corresponding DPC++ calls]) to cancel mapping of data for a target virtual area (Edwards, [0260 - The CUDA Runtime API function cudaFree() releases or frees the memory space on the GPU device that was previously allocated by functions like cudaMalloc()]) in a process address space allocated (Edwards, [Fig. 35: application effective address space 3582+OS virtual address space 3585]; [0397 - An application effective address space 3582 within system memory 3514 stores process elements 3583]; [0095 – In Fig. 1, an API to cause information to be stored in a plurality of storage locations allocated to a first GPU]) for a deep learning application (Edwards, [0519 - In Fig. 50, application 5000 is an AI/ML application implemented using a deep learning framework such as MXNet, PyTorch, or TensorFlow]; [Figs. 43-54C]),
the process address space (Edwards, [0400 - MMU 3539 includes segment/page walk circuitry for accessing segment/page tables 3586 within OS virtual address space 3585]) comprising a plurality of virtual areas (Edwards, [0432 - Each processing cluster 3894 includes an MMU 3845 that is configured to map virtual addresses/virtual areas into physical addresses. MMU 3845 includes a set of page table entries/PTEs used to map a virtual address to a physical address of a tile and a cache line index]) mappable ([See 112(a)]) to physical memory (Edwards, [0407 – In Fig. 36A, graphics processor 3610 includes one or more MMUs 3620A-3620B, caches 3625A-3625B, and circuit interconnects 3630A-3630B. One or more MMU(s) 3620A-3620B provide for virtual to physical address mapping for graphics processor 3610]),
Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the deep learning application of Edwards into the memory mapping method of Kwon, for the benefit of including one or more tensor cores in processing cores. The tensor cores are configured to perform deep learning matrix arithmetic, such as convolution operations for neural network training and inferencing (Edwards, 0491).
It is well-known in the prior art that a BST does not require rebalancing if no node is added to or removed from the BST.
Accordingly Hyland clarifies,
wherein the use node remains in the tree during the marking without being deleted from the tree and without adding a new node ([See 112(b)]) to the tree (Hyland, [0043 – In Fig. 7, tree 71 has a predetermined structure such that for each level, each node has two child nodes, thereby implying that the tree is balanced from the beginning]; [0045 - If the tree is structured/balanced as shown in Fig. 7 then Fig. 8 is used when searching for an element]; [Fig. 8: step 82, Element found, thereby implying that the use node which is the search element is found in the tree]), thereby a tree rebalancing due to added or deleted node is prevented (Hyland, [0035 - Fig. 2 represents a balanced BST which is structured in software]; [0037 - It is desirable to achieve a balanced tree in order to minimize the number of operations required to achieve a match between the keys and the address data in the entry, thereby implying that rebalancing is prevented]);
Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the balanced BST of Hyland into the memory mapping method of Kwon, Edwards for the benefit of the BST staying balanced with the same depth of tree both to the left and to the right of the root node (Hyland, Para-0039).
The remaining limitations are similar to claims 1, 8 and therefore the same mappings are incorporated.
As per Claim 11, it is similar to claim 2 and therefore the same mappings are incorporated.
As per Claim 12, it is similar to claim 3 and therefore the same mappings are incorporated.
As per Claim 15, it is similar to claim 6 and therefore the same mappings are incorporated.
As per Claim 16, it is similar to claim 7 and therefore the same mappings are incorporated.
As per Claim 17, Kwon discloses an electronic device (Kwon, [0054 – As per Fig. 1, system 10 includes processor 200, memory system 11, and secondary storage apparatus 400. System 10 may be included in a terminal such as a desktop or laptop. Also, the system 10 may be a mobile system]) comprising:
one or more processors (Kwon, [0056 – In Fig. 1, NVM controller 100 communicates with processor 200]);
a memory comprising one or more non-transitory storage media (Kwon, [0055 – In Fig. 1, memory system 11 includes a nonvolatile memory controller 100 and at least one nonvolatile main memory 300. The nonvolatile main memory 300 may be a semiconductor flash main memory such as a NAND memory chip or a NOR memory chip]) that store instructions that, when executed by the one or more processors (Kwon, [0056 – In Fig. 1, the nonvolatile memory controller 100 communicates with the processor 200 through a processor channel to receive a command and an address and to transmit/receive data]), configure the electronic device to:
The remaining limitations are similar to claim 1 and therefore the same mappings are incorporated.
As per Claim 18, it is similar to claim 2 and therefore the same mappings are incorporated.
Claims 4-5, 13-14, 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kwon et al. (US 2015/0193354) in view of Edwards et al. (US 2024/0168830), Hyland et al. (US 2003/0009474), and Majnemer et al. (US 2014/0074841).
As per Claim 4, the rejection of claim 1 is incorporated, and Kwon discloses,
wherein the marking of the unused node as the use node ([See 112(a)]) to reuse (Kwon, [Fig. 13]) comprises:
searching for a space to be mapped with the target data in the process address space (Kwon, [0166 - In Fig. 13, if in step S300, a flag check determines a general flag, then in step S310, a virtual area corresponding to an allocated virtual address of a process is searched by using a binary data structure such as a red-black tree]);
Majnemer discloses,
setting a lock enabling a read operation to the tree (Majnemer, [0028 - Before reading data, an operation upon a B-tree can acquire a read lock]; [0022 - At a basic level, a B-tree operates like a binary-search tree/BST]);
Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the read lock of Majnemer into the memory mapping method of Kwon, Edwards, Hyland for the benefit of managing a complex hierarchical file system with the B-tree data structure, wherein operations such as acquiring an exclusive lock are performed to implement the file system (Majnemer, 0004-0005).
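The "lock enabling a read operation" limitation mapped to Majnemer can be sketched with a conventional readers-writer lock around tree lookups. The sketch below is a generic textbook construction under assumed names (`RWLock`, `search_tree`), not Majnemer's B-tree locking scheme: many readers may search the tree concurrently, while the gate they jointly hold excludes writers.

```python
import threading

class RWLock:
    """Textbook readers-writer lock: many concurrent readers, one writer."""
    def __init__(self):
        self._readers = 0
        self._mutex = threading.Lock()       # guards the reader count
        self._write_gate = threading.Lock()  # held while any reader is active

    def acquire_read(self):
        with self._mutex:
            self._readers += 1
            if self._readers == 1:
                self._write_gate.acquire()   # first reader blocks writers

    def release_read(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:
                self._write_gate.release()   # last reader readmits writers

def search_tree(lock, lookup):
    """Perform a read-only tree search under the read lock."""
    lock.acquire_read()
    try:
        return lookup()
    finally:
        lock.release_read()
```

Acquiring the read lock before searching is what permits the lookup in claim 4 to proceed safely while structural writes to the tree are deferred.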
As per Claim 5, the rejection of claim 4 is incorporated, and Kwon, Edwards, Hyland, Majnemer disclose,
wherein the marking of the unused node as the use node to reuse comprises (Hyland, [Figs. 6-7]):
searching for an initial node (Hyland, [0032 - Fig. 3: The index node 120 is a root node]; [0040 - In Fig. 4, the data structure is implemented as a tree structure and includes a Level 0 or root level index page 132, a Level 1 index page 134, and a Level 2 data page 136]; [0041 - A traversal/search begins at the Level 0/L0 or root level index page 132; Since the claim does not define ‘initial node’, it is valid to interpret it as the root node]) using the tree (Hyland, [0044 – In Fig. 6, data structure 144 is based on a tree structure such as binary search tree]);
searching for the unused node from the initial node (Hyland, [0059 - Fig. 10 shows the search for new unoccupied nodes when inserting a new element]; [0061 - Fig. 11 is an algorithm for the insertion of new elements. It commences with stage 111, where the ‘current’ node, that is to say the node in respect of which operations are being performed, is set to the root/initial node]) using a list indicating an address order of a plurality of virtual areas comprised in the process address space (Hyland, [0013 – lookup memory available/process address space]; [0041 - Fig. 6 shows an array 60 of hardware memory locations, each defined by a multiple binary word. The array 60/list of memory locations is organized as a binary tree 61 from a root node 62. The tree 61 has nodes corresponding to the addresses in array 60 and except for the leaf nodes at the lowest level, each node has two child nodes of which the addresses can be computed from their parent node, thereby implying a list indicating an address order of virtual areas/nodes]; [0043 - In Fig. 7, tree 71 has a predetermined structure such that for each level/depth, each node has two child nodes of which the right node has an address greater than the address of the left node and each is computable from the address of the parent node]).
Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the search of Hyland into the memory mapping method of Kwon, Edwards, Majnemer for the benefit of using the binary search tree, as it offers a convenient and deterministic minimal search latency because every memory address is associated with a single MAC address and the search algorithm has a worst-case value fixed by the size of the look-up memory available (Hyland, 0013).
As per Claim 13, the rejection of claim 10 is incorporated, and Kwon, Edwards, Hyland disclose searching the BST,
wherein the marking of the use node as an unused node ([See 112(a)]) comprises:
setting a lock enabling a read operation to the tree (Majnemer, [0028 - Before reading data, an operation upon a B-tree can acquire a read lock]; [0022 - At a basic level, a B-tree operates like a binary-search tree/BST]);
Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the read lock of Majnemer into the memory mapping method of Kwon, Edwards, Hyland for the benefit of managing a complex hierarchical file system with the B-tree data structure, wherein operations such as acquiring an exclusive lock are performed to implement the file system (Majnemer, 0004-0005).
Hyland further discloses,
determining whether a depth of the use node (Hyland, [Fig. 11: after step 111, at step 112-currentElement==0? No, step 113-currentNode==lastNodeAtLevel? No, step 114-currentLevel==0? Yes, thereby determining depth of searched use node]; [0004 - If a BST has L levels the root node is the only node to be a member of level L, the uppermost level. A full binary tree with L levels has (2^L)−1 nodes; Note: The depth of a node is the number of edges present in the path from the root node of a tree to that node]) searched (Hyland, [0059 - The search commences at the root node 101, then proceeds along the next level (L−1) for nodes 102 and 103, then proceeds along the next level (L−2) to nodes 104 to 105 and so on to the next level of which the first node is 106 and the last node of the level is 107]) in the tree exceeds a threshold depth (Hyland, [0009 - For a given number of nodes there is an associated minimum tree depth/threshold that yields maximally efficient searches for any given element of that tree; Note: the depth of a node in a binary tree is also its level. Both terms describe the number of edges on the path from the root node to the given node]);
in response to the depth of the searched use node not exceeding the threshold depth (Hyland, [Fig. 11: step 114-currentLevel==0? Yes, step 115-noFreeSpace]), marking the use node as an unused node (Hyland, [0064 – In Fig. 11, if the current level/depth is zero as determined by step 114, there is no free space, as indicated by step 115. The algorithm has reached node 109 as shown in Fig. 10, thereby marking the use node as an unused node]).
Therefore, it would have been obvious to a person of ordinary skill in the art at the time of filing to incorporate the balanced BST of Hyland into the memory mapping method of Kwon, Edwards, and Majnemer, for the benefit of using a balanced BST in which every new element is inserted at the highest available node in the hierarchy of the tree. Thus, for a full tree of L levels (or depth L), there is a worst case of L possible comparisons before the search element can be located (Hyland, 0012).
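For illustration of the depth-threshold concept mapped above, a minimal sketch follows. All names are hypothetical; they come neither from Hyland nor from the applicant's specification, and the sketch is offered only as an aid to understanding, not as the claimed method:

```python
# Illustrative sketch only: marking a searched node "unused" instead of
# deleting it, so the tree's shape (and balance) is left untouched.
# All names here are hypothetical, not from the applicant's specification.

class Node:
    def __init__(self, key):
        self.key = key
        self.in_use = True       # node state flag; deletion is never performed
        self.left = None
        self.right = None

def search_with_depth(root, key):
    """Standard BST search; also returns the depth (edges from root)."""
    depth, node = 0, root
    while node is not None:
        if key == node.key:
            return node, depth
        node = node.left if key < node.key else node.right
        depth += 1
    return None, depth

def release(root, key, threshold_depth):
    """Mark the node unused only if its depth does not exceed the threshold."""
    node, depth = search_with_depth(root, key)
    if node is not None and depth <= threshold_depth:
        node.in_use = False      # node remains in the tree; no rebalancing
        return True
    return False
```

The sketch shows only that a depth comparison followed by a flag change leaves the tree structure untouched; no removal or insertion occurs, so no rebalancing is triggered.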
As per Claim 14, the rejection of claim 13 is incorporated, and Kwon, Edwards, Hyland, and Majnemer further disclose,
as the threshold depth (Hyland, [0009 - For a given number of nodes there is an associated minimum tree depth/threshold that yields maximally efficient searches for any given element of that tree; Since the claim does not define ‘threshold depth’, the citation is a valid interpretation]) increases, a number of unused nodes comprised in the tree increases (Hyland, [0039 - Unbalance occurs owing to the fact that the entries have to be compiled in an uncontrolled order. This is shown in Fig. 3. The root node is established first and contains the element 90. Thus it is seen that the number of nodes and the depth of the tree is greater on the left-hand side of Fig. 3 than the right-hand side, thereby implying that as the threshold depth increases, the number of unused nodes increases]),
and as the threshold depth decreases, a number of unused nodes comprised in the tree decreases (Hyland, [0040 – In Fig. 5, a shuffling operation is performed, wherein node 41 becomes the root node and node 40 a child of the root node. The new element ‘2’ is put into node 43 and the tree is balanced, thereby implying that a decreased threshold depth decreases the number of unused nodes]).
Therefore, it would have been obvious to a person of ordinary skill in the art at the time of filing to incorporate the balanced BST of Hyland into the memory mapping method of Kwon, Edwards, and Majnemer, for the benefit of using a balanced BST in which every new element is inserted at the highest available node in the hierarchy of the tree. Thus, for a full tree of L levels (or depth L), there is a worst case of L possible comparisons before the search element can be located (Hyland, 0012).
As per Claim 19, it is similar to claim 4 and therefore the same mappings are incorporated.
As per Claim 20, it is similar to claim 5 and therefore the same mappings are incorporated.
Response to Arguments
The Applicant's arguments filed on January 20, 2026 have been fully considered, but they are not persuasive.
The Applicant submitted improper amendments after the Final rejection, which resulted in rejections under 35 U.S.C. 112(a) and 112(b). The current amendments submitted with the RCE fail to resolve the issues introduced by the previous amendments, resulting in claims that lack the clarity, conciseness, and consistency required under § 112.
Applicant argues, ‘Accordingly, Kwon does not discuss, inter alia, "determining, in response to a mapping instruction to map target data onto a process address space allocated for a deep learning application, the process address space comprising….virtual areas mappable to…. and mapping the target data… in the process address space," as recited by claim 1’. (Rem, Pg. 10)
Response: This argument is not persuasive, for the following reasons:
Amended Claim 1 recites limitations that are unsupported by the specification, or indefinite, leading to rejections under 112(a) and 112(b).
For example, ‘the process address space comprising…. virtual areas mappable to physical memory’ is not recited in the specification. The specification does not recite any virtual-area-to-physical-memory mapping, no MMU or page table entries, and no address translation. The broadening of scope by this limitation fails the written description requirement. Please see the 112(a) and 112(b) rejections.
Further, ‘determining,…. whether an unused node exists in a tree ….by searching for the unused node in the tree’ is also not recited in the specification. It recites new matter because, in response to the Final rejection, the applicant improperly split the claim 4 algorithm, via deletion/cancellation, to ‘mark the unused node’ and thereby created the ‘new matter’. See the 112(a) and 112(b) rejections.
Further, ‘wherein the unused node remains in the tree….without being deleted….and without adding a new node to the tree, thereby a tree rebalancing….is prevented’ lacks novelty. More importantly, it is not recited in the specification. Fig. 6 of the specification recites a mapping method where rebalancing is prevented because no new node is added. Hence the limitation is an unverified extrapolation and recites new matter. See the 112(a) and 112(b) rejections.
Further, claim 1 excluded the original limitation, ‘wherein the tree manages the virtual area….as the use node’. The exclusion is unsupported by the specification because the newly found use node is unmanaged by the tree. See the 112(a) rejection. Excluding key limitations and reinstating them later creates an unclear prosecution record, and also incurs additional rejections under 112(a) and 112(b).
Similar issues can be found in amended independent claims 10 and 17 and dependent claims 2, 4, 11, 13, and 18.
In essence, relying on improper amendments to mischaracterize the prior art renders the argument(s) unpersuasive. Please see the Office Action.
Applicant further argues, ‘The claims do not recite a mental process or mathematical concept….. Instead, the claims are directed to…. computer-implemented method for efficient memory management…., using a tree data structure….. By searching for and reusing unused nodes…..without adding/deleting nodes avoids tree rebalancing, which is a computational overhead to computer data structures, and not a human-performable mental step’. (Rem, Pg. 13)
Response: The claim language dictates the scope, not the argument. The claims recite marking unused and used nodes and avoiding rebalancing of a ‘prepopulated, static tree/BST’, which is a mathematical concept. Simply implementing node manipulation (used node to unused node, unused node to used node) on a static tree does not improve the computer's functionality itself, but rather uses a computer to perform mathematical manipulation faster.
As an aside, avoiding rebalancing when no node is added or deleted is a well-known feature of any balanced tree. Hence ‘avoiding rebalancing’ does not add significantly more to the judicial exception.
Reciting a prepopulated, static balanced tree to search for nodes indicates no technical improvement in processing, but rather a well-known data organization method.
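To make the point above concrete, a minimal sketch of ‘mapping’ on a prepopulated, static tree is given below. The names (`VNode`, `map_area`, `find_unused`) are hypothetical and do not come from the claims or the specification; the sketch is offered only to show that the recited node manipulation amounts to flipping a flag on an existing node, leaving the tree's structure untouched:

```python
# Illustrative sketch only: "mapping" on a prepopulated, static tree is
# just a state flip on an existing node. No node is added or deleted,
# so the tree shape (and hence its balance) never changes.

class VNode:
    def __init__(self, key, in_use=False):
        self.key = key
        self.in_use = in_use     # used/unused marker; the only mutable state
        self.payload = None
        self.left = self.right = None

def find_unused(node):
    """Return the first unused node found by an in-order walk, if any."""
    if node is None:
        return None
    found = find_unused(node.left)
    if found is not None:
        return found
    if not node.in_use:
        return node
    return find_unused(node.right)

def map_area(root, payload):
    """Reuse an unused node; never insert, so rebalancing cannot occur."""
    node = find_unused(root)
    if node is not None:
        node.in_use = True
        node.payload = payload
    return node
```

As the sketch illustrates, the operation is a marking step on a fixed structure; the same bookkeeping could be performed on a hand-drawn tree.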
Applicant further argues: ‘The claims do not recite a "pencil-and-paper" analogy, as rebalancing involves algorithmic rotations and color changes in trees (e.g., red-black trees) that require machine execution for efficiency at scale, particularly in multi-threaded environments sharing a process address space’. (Rem, Pg. 14)
Response: The tree operations, e.g., ‘identifying nodes’ and ‘rebalancing the tree’, are purely mathematical or logical, requiring no specific technical implementation. A human could map out the prepopulated tree on paper, follow the claimed instructions to mark, move, delete, or modify nodes, and arrive at the same result. Because the process can be done manually, the claims are directed to an abstract idea.
Merely digitizing a manual tree operation (e.g., using a process address space to store a tree structure instead of paper) does not change the abstract nature of the underlying logic. The claims do not improve the function of the computer itself, but only use the computer to do a conceptual task faster. The underlying rules and design remain the same regardless of whether the tree/BST is used in a deep learning application or a simple sorting algorithm. The claims do not focus on improvements to the BST itself, such as reciting a specialized tree-balancing algorithm that reduces memory usage in the deep learning application, or in any other application.
That said, ‘rebalancing’, ‘self-balancing’, and ‘red-black trees’ are well-known BST features and can be found in any computer science textbook.
Applicant further argues: ‘Again, as supported by the specification, the claims improve computer memory management technology by enabling efficient, low-overhead mapping of data (e.g., tensors) onto virtual areas in a process address space,….and optimizing….in deep learning frameworks, reducing computational overhead, memory fragmentation, and OS invocations, leading to predictable performance and system efficiency (spec., FIG. 4; paras. on deep learning usage)’. (Rem, Pg. 14)
Response: The computer memory management is implemented using generic computing parts, such as ‘a processor’ and ‘a memory’, without describing any specific, unconventional interaction between hardware and software. Though the data includes tensors, which are multi-dimensional arrays, the specification does not recite how the mapping instruction maps the tensors to the virtual areas in the process address space. The same is true of the unmapping instruction. The instructions fail to define the virtual-to-physical translation needed to map any type of data, tensors included. The specification does not recite an MMU and/or page table entries. This is one of the key drawbacks of the deep learning application and the disclosure.
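For context, the conventional virtual-to-physical translation detail that the specification omits is typically expressed via page tables. The toy, single-level sketch below is purely illustrative; none of its names (`PAGE_SIZE`, `translate`) come from the applicant's disclosure:

```python
# Toy single-level page table: the conventional address-translation
# detail (MMU / page-table entries) referenced above. Illustrative only;
# real translation is multi-level and performed in hardware.

PAGE_SIZE = 4096  # common 4 KiB page size

def translate(page_table, vaddr):
    """Split a virtual address into page number and offset, then look up
    the physical frame number in the page table (a dict: vpn -> pfn)."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:
        raise LookupError("page fault: virtual page %d is unmapped" % vpn)
    return page_table[vpn] * PAGE_SIZE + offset
```

It is this level of detail, how a virtual area actually resolves to physical memory, that the specification does not provide for the claimed mapping and unmapping instructions.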
The specification merely recites ‘high level’ concepts of memory management (optimization, reducing computational overhead, memory fragmentation, performance, efficiency, etc.) without providing the underlying technical details required to implement these concepts. Merely reciting a couple of words in the specification (e.g., reducing overhead, spec, Para-0074) does not provide sufficient detail to show that the inventor possessed the entire scope of the technology as it applies to the disclosure. That said, the specification does not provide details of any of the above-mentioned technical benefits in relation to the disclosure. Reciting two or three words, or making a fleeting mention of a technical feature in the specification, does not equate to describing the entire technology.
Though the specification recites ‘deep learning’, it does not recite a concrete underlying hardware implementation. Based on Recentive Analytics v. Fox (2025), training a deep learning model without specific improvements to the model architecture or hardware utilization is deemed patent-ineligible.
The claims are directed to an abstract idea of ‘mapping’ data into virtual areas of a static tree, without providing a specific, unconventional, and detailed algorithm or hardware configuration for achieving said ‘mapping’. This suggests that the claims are directed to the abstract idea on a generic computer.
Examiner Notes
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Patel et al. US 2021/0342273
The apparatus has a processing device comprising a processor connected to a memory. The processing device generates a set of log records, where each log record represents a pointer from one of a set of leaf pages in a logical address space of a storage system to one of a set of virtual block addresses in the address space, and comprises a leaf page address of one of the leaf pages. The processor identifies a subset of the log records that represent pointers to a given virtual block address to determine a first reference count for the given virtual block address, and modifies pointers to the given virtual block address in the subset of the leaf pages with associated leaf page addresses in an identified subset of the log records (Patel, 0047).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARVIND TALUKDAR whose telephone number is (303)297-4475. The examiner can normally be reached M-F, 10 am-6pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hosain Alam can be reached at 571-272-3978. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Arvind Talukdar
Primary Examiner
Art Unit 2132
/ARVIND TALUKDAR/Primary Examiner, Art Unit 2132