DETAILED ACTION
This Office Action is in response to the Applicants' communication filed on February 3, 2026. Claims 1-13, 15-18 and 20 are amended. Claims 1-20 are currently pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments/remarks made in the amendment filed February 3, 2026, have been fully considered. In view of amended claims 1-13, 15-18 and 20, and upon further consideration, new grounds of rejection, necessitated by the amendments, are made in view of a different interpretation of the previously applied references, as presented in this Office action. Applicant’s arguments with respect to claims 1-20 are therefore moot.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 5978893 A (Bakshi et al.) (hereinafter Bakshi) in view of CN 103793267 A (HE), further in view of US 20080112423 A1 (Christenson et al.) (hereinafter Christenson), further in view of US 10061834 B1 (Kulesza et al.) (hereinafter Kulesza), and further in view of US 20210373982 A1 (Neilson et al.) (hereinafter Neilson).
In re claims 1, 9 and 15, Bakshi discloses a method (Fig. 2A) and a system (Fig. 1, Col 1, lines 1-6, “A method and a system for allocating and deallocating fixed size memory blocks in a system memory in a graphic imaging system for processing data”) comprising: one or more processors (Fig. 1, “processor 70”) comprising circuitry to, in response to an application programming interface (API) call, cause one or more lock-free statically-sized regions of linked storage locations to be deallocated (Col 4, lines 9-18, “A system memory manager 100 allocates memory blocks from the system memory 50 for the various purposes described above, including storing display list entries and frame buffer entries. The system memory 50 includes queues, each containing a linked list of fixed size blocks, and a page pool of memory, containing variable size blocks, which may be allocated for storing display list entries. The system memory manager 100 allocates blocks from the linked lists and the page pool for storing the display list entries in response to requests from the interpreter 40”. Col 5, lines 29-34, “If, at step 2400, the system memory manager determines that there is insufficient memory available in the page pool to create an extension, the process moves to step 2700, at which the display list entries are rendered from the allocated memory blocks, and the allocated memory blocks are deallocated”. Col 2, lines 65-67, Col 3, lines 1-20, “It is an object of the present invention to dynamically allocate and deallocate memory blocks in a graphic imaging system while minimizing processor time and avoiding fragmentation. A system memory includes at least one queue containing a linked list of fixed size memory blocks and a page pool of variable size memory blocks. 
Upon a request (through user input or API) for a memory block of a particular fixed size, the system memory manager allocates a memory block of the fixed size from a queue containing memory blocks of the fixed size if the queue has memory blocks available. If the queue does not have memory blocks available, the system memory manager creates an extension to the queue containing memory blocks of the fixed size. The extension is created from a page pool. The extension is linked to the queue, and a memory block of the fixed size is then allocated from the queue. If the page pool does not contain adequate memory to create an extension, the portion of the page which has been reduced to display list entries is rendered, and all of the allocated memory blocks are deallocated”), wherein the linked storage locations are organized as a lock-free data structure to store shared fifth generation new radio (5G-NR) information.
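Purely as a conceptual illustration of the scheme quoted above (no such code appears in Bakshi; all names, block counts, and sizes here are hypothetical), a fixed-size block manager that draws blocks from a free-list queue and, when the queue is empty, carves an extension of fixed-size blocks from a page pool might be sketched as:

```python
from collections import deque

class FixedBlockManager:
    """Illustrative sketch of a Bakshi-style fixed-size block manager."""

    def __init__(self, block_size, initial_blocks, page_pool_bytes):
        self.block_size = block_size
        self.queue = deque(range(initial_blocks))  # free-list queue of block ids
        self.page_pool = page_pool_bytes           # variable-size reserve
        self.next_id = initial_blocks

    def allocate(self):
        """Return a block id, extending the queue from the page pool if needed."""
        if not self.queue:
            # Create an extension of fixed-size blocks from the page pool.
            ext_blocks = 4
            needed = ext_blocks * self.block_size
            if self.page_pool < needed:
                return None  # insufficient memory: caller must render and free
            self.page_pool -= needed
            self.queue.extend(range(self.next_id, self.next_id + ext_blocks))
            self.next_id += ext_blocks
        return self.queue.popleft()

    def deallocate(self, block_id):
        """Return a freed block to the queue for reuse."""
        self.queue.append(block_id)

mgr = FixedBlockManager(block_size=64, initial_blocks=2, page_pool_bytes=512)
a = mgr.allocate()
b = mgr.allocate()
c = mgr.allocate()  # queue empty: an extension is carved from the page pool
mgr.deallocate(a)   # freed block becomes available again
```

The sketch mirrors only the quoted control flow: allocate from the queue if possible, otherwise extend from the page pool, and signal the render-and-deallocate path when the pool is exhausted.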
Bakshi does not explicitly disclose lock-free statically-sized regions of linked storage locations and in response to an application programming interface (API) call, cause one or more lock-free statically-sized regions of linked storage locations to be deallocated, wherein the linked storage locations are organized as a lock-free data structure to store shared fifth generation new radio (5G-NR) information.
HE discloses statically-sized regions of linked storage locations ([0057], “queue on the shared memory, shared memory space in advance into forming blocks of the same size as the memory block according to the node queue, each memory block for one node in the queue is stored”. [0058], “wherein, the queue can be used for processing the management through the linked list structure, which is the location of the next one node by one node stores, besides including information of each node itself, and a node head is used for pointing at one node, the node head is after the content of the node. For information of the storage node head, the space in the head part of the shared memory a reserved, reserved, it is divided the memory block” (HE discloses a data queue accessing method in which the queue may be managed through a linked list structure. The queue is built on shared memory that is divided in advance into memory chunks of equal size, according to the size of the nodes in the queue, each memory chunk holding one node of the queue (corresponding to information to be stored in one or more statically-sized regions of linked storage locations))).
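Purely as an illustration of the memory layout HE's translated disclosure describes (a reserved header region followed by equal-size chunks, each holding one linked node; no such code appears in HE, and all names, offsets, and sizes here are hypothetical), the layout might be sketched over a flat buffer as:

```python
import struct

CHUNK_SIZE = 16          # bytes per node slot (hypothetical)
NUM_CHUNKS = 4
HEADER = 4               # reserved head region storing index of the first node

def make_shared_region():
    """Carve a flat buffer into a reserved header plus equal-size chunks."""
    return bytearray(HEADER + CHUNK_SIZE * NUM_CHUNKS)

def write_node(mem, idx, payload, next_idx):
    """Store a 12-byte payload plus a 4-byte 'next' link in chunk idx."""
    off = HEADER + idx * CHUNK_SIZE
    mem[off:off + 12] = payload.ljust(12, b"\0")
    struct.pack_into("<i", mem, off + 12, next_idx)

def read_node(mem, idx):
    """Return (payload, next index) stored in chunk idx."""
    off = HEADER + idx * CHUNK_SIZE
    payload = bytes(mem[off:off + 12]).rstrip(b"\0")
    (next_idx,) = struct.unpack_from("<i", mem, off + 12)
    return payload, next_idx

mem = make_shared_region()
struct.pack_into("<i", mem, 0, 0)      # header points at node 0
write_node(mem, 0, b"hello", 1)        # node 0 links to node 1
write_node(mem, 1, b"world", -1)       # -1 marks end of the list
```

Each node carries its own content plus the location of the next node, so the queue can be traversed by following the stored links, as the translated paragraphs suggest.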
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bakshi with HE to provide a memory management method and device for dynamically allocating and deallocating storage space in a memory unit to store the 5G data according to the demand. The advantage of doing so is to utilize the memory in an efficient manner using a direct memory access (DMA) mechanism, thus eliminating processor intervention while allocating the memory unit, and reducing the load on the processor and power utilized by the processor.
Bakshi and HE do not explicitly disclose lock-free statically-sized regions of linked storage locations.
Christenson discloses lock-free statically-sized regions of linked storage locations ([0002], “More particularly, the present invention relates to concurrent, non-blocking, lock-free, first-in first-out (FIFO) queues employing processor synchronization primitives, such as load-linked/store conditional (LL/SC)”. [0036], “According to the preferred embodiments of the invention, a dummy node is enqueued to a concurrent, non-blocking, lock-free FIFO queue only when necessary to prevent the queue from becoming empty. The dummy node is only enqueued during a dequeue operation and only when the queue contains a single user node during the dequeue operation. This reduces overhead relative to conventional mechanisms that always keep a dummy node in the queue. User nodes are enqueued directly to the queue and can be immediately dequeued on-demand by any thread”. [0082], “According to the preferred embodiments of the present invention, the queue must atomically update three unrelated pointers: the queue's head pointer; the queue's tail pointer; and the node's next pointer. All three pointers cannot be atomically updated at the same time, making this a difficult problem to solve. 
The load-linked/store conditional synchronization primitives will atomically update one pointer and allow testing (loads) of unrelated shared memory between the load-linked and store conditional” (discloses an apparatus for implementing a lock-free concurrent FIFO queue comprising at least one user node, each user node having a next pointer to a next node, a dummy node having a next pointer to a next node, wherein the dummy node is enqueued during a dequeue operation and only when the lock-free concurrent FIFO queue contains a single one of the user nodes during the dequeue operation, a head pointer identifying a head node, a tail pointer identifying a tail node, a dummy pointer identifying the dummy node, wherein the enqueueing step, the dequeueing step, the conditional dequeueing step, and the enqueueing step each includes the use of load-linked/store conditional primitives. The enqueueing/dequeueing mechanism 120 may be pre-programmed, manually programmed, transferred from a recording medium or downloaded over the internet. Lock-free algorithms have also been proposed for shared data structures, including concurrent FIFO queues (corresponding to causing information to be stored in one or more regions of linked storage locations allocated to store shared information)).
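As a conceptual aid only (none of this code appears in Christenson, Python cannot express LL/SC or true lock-freedom, and this single-threaded sketch keeps a permanent dummy node rather than Christenson's on-demand dummy enqueueing), the dummy-node FIFO structure described above can be sketched with comments marking the pointer updates that a lock-free algorithm would perform atomically:

```python
class Node:
    __slots__ = ("value", "next")
    def __init__(self, value=None):
        self.value = value
        self.next = None

class LinkedFifo:
    """Single-threaded sketch of a dummy-node linked FIFO.

    A lock-free version performs each commented pointer update with an atomic
    primitive (CAS, or the LL/SC primitives Christenson describes); here plain
    assignments stand in for those atomic steps.
    """
    def __init__(self):
        dummy = Node()          # dummy node keeps head and tail always valid
        self.head = dummy
        self.tail = dummy

    def enqueue(self, value):
        node = Node(value)
        self.tail.next = node   # atomic step 1: link node after current tail
        self.tail = node        # atomic step 2: swing tail to the new node

    def dequeue(self):
        first = self.head.next  # head is the dummy; head.next is the oldest entry
        if first is None:
            return None         # queue empty
        self.head = first       # atomic step: swing head; 'first' becomes dummy
        return first.value
```

The three pointers named in the quoted passage (head, tail, and a node's next pointer) are exactly the ones updated here, which is why the quoted text calls their atomic coordination the difficult part of the problem.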
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bakshi and HE with Christenson to provide a memory management method and device for dynamically allocating and deallocating storage space in a memory unit to store the 5G data according to the demand. The advantage of doing so is to enable updating the tail pointer variable value by parallel access so as to execute node operation of the queue, thus improving accessing efficiency of the queue without locking.
Bakshi, HE and Christenson do not explicitly disclose in response to an application programming interface (API) call, cause one or more lock-free statically-sized regions of linked storage locations to be deallocated.
Kulesza discloses in response to an application programming interface (API) call, cause one or more lock-free statically-sized regions of linked storage locations to be deallocated (Col 5, lines 53-67; Col 6, lines 1-5, “A client, such as clients 250a through 250n, may communicate with a data warehouse cluster 225 or 235 via a desktop computer, laptop computer, tablet computer, personal digital assistant, mobile device, server, or any other computing system or other device, such as computer system 1000 described below with regard to FIG. 8, configured to send requests to the distributed data warehouse clusters 225 and 235, and/or receive responses from the distributed data warehouse clusters 225 and 235. Requests, for example may be formatted as a message that includes parameters and/or data associated with a particular function or service offered by a data warehouse cluster. Such a message may be formatted according to a particular markup language such as Extensible Markup Language (XML), and/or may be encapsulated using a protocol such as Simple Object Access Protocol (SOAP). Application programmer interfaces (APIs) may be implemented to provide standardized message formats for clients, such as for when clients are communicating with distributed data warehouse service manager 202”. Col 14, lines 23-31, “As indicated at 650, the respective storage locations for the data chunk may then be reclaimed to store other data. For example, the storage locations may be de-allocated or returned to an operating system for other system persistent storage needs. In some embodiments, whether the block-based persistent storage device is implemented as part of a multi-tenant storage system, the reclaimed storage locations may be utilized to store data for another client, system, or device” (in response to an API request, lock-free statically-sized linked storage locations are caused to be deallocated)).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bakshi, HE and Christenson with Kulesza to provide a memory management method and device for dynamically allocating and deallocating storage space in a memory unit to store the 5G data according to the demand. The advantage of doing so is to provide clients with standardized API message formats for requesting deallocation, so that reclaimed storage locations can be returned to the operating system or reused to store other data, thereby improving storage utilization.
Bakshi, HE, Christenson and Kulesza do not explicitly disclose wherein the linked storage locations are organized as a lock-free data structure to store shared fifth generation new radio (5G-NR) information.
Neilson discloses wherein the linked storage locations are organized as a lock-free data structure to store shared fifth generation new radio (5G-NR) information (Fig. 19, [0028], “In accordance with some embodiments of the disclosure, a multi-level hierarchy data structure represents shared states of a system via a shared memory architecture”. [0037], “The writer manager receives a request to create the new entry for a child object in a hierarchical state data structure in the shared memory”. [0039], “The storing of the notification in the notification queue causes the one or more readers to be alerted of a modification, addition to, or deletion of the entry. The writer removes the object from the multi-level hierarchical system data structure and deallocates the allocated shared memory buffer in response to deletion of the entry from the multi-level hierarchical state data structure”. [0058], “Writer agent 110 is a producer to shared memory 108 and each of reader agents 1-N are state consumers of shared memory 108. In some embodiments, writer agent 110 may operate on shared memory 108 concurrently while one or more of the reader agents 1-N consumes data from or simply monitors to shared memory 108...Reader agents 1-N are notified of new entries or updates to existing entries—modifications—and even entry deletions by notifications in notification queue 128 and perform read operations in a lock-free and wait-free fashion”).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bakshi, HE, Christenson and Kulesza with Neilson to provide a memory management method and device for dynamically allocating and deallocating storage space in a memory unit to store the 5G data according to the demand. The advantage of doing so is to allow multiple readers to consume shared state from the shared memory in a lock-free and wait-free fashion while the writer concurrently creates, modifies and deletes entries, thereby improving concurrent access to the shared information without blocking.
In re claims 2, 10 and 16, the combination discloses the one or more processors of claim 1, the system of claim 9 and the method of claim 15, wherein Bakshi discloses wherein the one or more statically-sized regions of linked storage locations are to be deallocated by causing one or more regions of memory reserved to the one or more statically-sized regions of linked storage locations to no longer store the shared 5G-NR information (Col 7, lines 9-10, “The deallocation is performed repeatedly until all the blocks have been freed” (no longer stored). Col 5, lines 54-57, “The present invention takes advantage of processes occurring normally in a laser printer to dynamically free memory blocks without requiring a halt in the processing to return unused memory blocks to the page pool”. Col 6, lines 6-8, “The fewer the extensions, the less time will be consumed upon deallocating in determining which extension a block of freed memory belongs to” (deallocate by no longer storing the information and freeing the memory)).
In re claims 3 and 11, the combination discloses the one or more processors of claim 1 and the system of claim 9, wherein Christenson discloses wherein the one or more lock-free statically-sized regions of linked storage locations are to be organized in memory as a lock-free queue ([0002], “More particularly, the present invention relates to concurrent, non-blocking, lock-free, first-in first-out (FIFO) queues employing processor synchronization primitives, such as load-linked/store conditional (LL/SC)”. [0036], “According to the preferred embodiments of the invention, a dummy node is enqueued to a concurrent, non-blocking, lock-free FIFO queue only when necessary to prevent the queue from becoming empty. The dummy node is only enqueued during a dequeue operation and only when the queue contains a single user node during the dequeue operation. This reduces overhead relative to conventional mechanisms that always keep a dummy node in the queue. User nodes are enqueued directly to the queue and can be immediately dequeued on-demand by any thread”).
In re claims 4, 12 and 20, the combination discloses the one or more processors of claim 1, the system of claim 9 and the method of claim 15, wherein Bakshi discloses wherein the one or more lock-free statically-sized regions of linked storage locations are to be deallocated by causing one or more regions of memory reserved to the one or more lock-free statically-sized regions of linked storage locations to no longer be reserved (Col 4, lines 9-23, “The system memory 50 includes queues, each containing a linked list of fixed size blocks, and a page pool of memory, containing variable size blocks, which may be allocated for storing display list entries. The system memory manager 100 allocates blocks from the linked lists and the page pool for storing the display list entries in response to requests from the interpreter 40. The system memory also contains memory blocks reserved for other purposes, such as storing frame buffer representations, spooling and processing” (Bakshi discloses that allocating reserves memory for one or more blocks for various purposes; deallocating is therefore interpreted as causing the memory to no longer be reserved)).
In re claims 5, 13 and 18, the combination discloses the one or more processors of claim 1, the system of claim 9 and the method of claim 15, wherein Bakshi discloses wherein the linked storage locations comprise one or more pointers to indicate one or more portions of the one or more lock-free statically-sized regions (Col 1, lines 66-67, Col 2, lines 1-4, “For applications using fixed size blocks, blocks are typically marked as allocated in a bit map. A running pointer or index identifies the next available block in the bit map. When a block is requested, the pointer or index is updated by executing a search algorithm to search for the next available block” (pointers are used to indicate blocks)) and one or more counters to indicate a count of operations performed on the one or more lock-free statically-sized regions (Fig. 2c, Col 6, lines 18-27, “FIG. 2c illustrates in detail the step of updating the usage count, i.e., step 2600. Referring to FIG. 2c, it is first determined at step 2620 whether there is an extension for which a usage count must be updated by determining whether the parameter "Extension (N)" is greater than 0. If the parameter "Extension (N)" is greater than 0, the parameter "usage (Extension (N))" is updated by adding one at step 2640. If the parameter "Extension (N)" is not greater than 0, this indicates that no extension has been created for queue N, and no updating occurs” (counters to indicate a count of operations performed on the blocks)).
In re claims 6, 14 and 19, the combination discloses the one or more processors of claim 1, the system of claim 9 and the method of claim 15, wherein Bakshi discloses wherein the 5G-NR information comprises integer data and the integer data is to be used to indicate at least one of the linked storage locations (Fig. 2d, Col 6, lines 28-46, “At step 2720, it is determined whether the parameter "Extension (N)" (an integer) is greater than zero. If the parameter "Extension (N)" is greater than zero, this indicates that the freed block was allocated from an extension, since extension blocks are allocated after queue blocks. The flow then moves to block 2722, where the extension from which the block was allocated is located. If the blocks have been allocated and freed sequentially, no searching is required to locate the extension that the freed block belongs to. The extension from which a freed block of size n was allocated is simply determined to be the most recently created extension that is still linked to the queue N containing blocks of size n” (N indicates the linked storage locations)).
In re claim 7, the combination discloses the one or more processors of claim 1, wherein Kulesza discloses wherein the API call is to receive one or more parameters to indicate the one or more lock-free statically-sized regions of linked storage locations to be deallocated (Col 19, lines 14-17, “In various embodiments, a network-based service may be requested or invoked through the use of a message that includes parameters and/or data associated with the network-based services request”. Col 5, lines 61-64, “Requests, for example may be formatted as a message that includes parameters and/or data associated with a particular function or service offered by a data warehouse cluster”).
In re claims 8 and 17, the combination discloses the one or more processors of claim 1 and the method of claim 15, wherein Kulesza discloses wherein the one or more lock-free statically-sized regions of linked storage locations are to be organized in memory as a lock-free array queue (Fig. 6: 620, Col 9, lines 1-6, “In at least some embodiments this block metadata may be aggregated together into a superblock data structure, which is a data structure (e.g., an array of data) whose entries store information (e.g., metadata about each of the data blocks stored on that node (i.e., one entry per data block))”).
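As an illustration only of the "array queue" organization recited above (this sketch is not Kulesza's superblock structure; all names are hypothetical, and a genuine lock-free single-producer/single-consumer design would update the head and tail counters with atomic operations):

```python
class RingQueue:
    """Illustrative bounded array queue indexed by head/tail counters.

    The backing store is a fixed-size array; the head counter marks the next
    slot to read and the tail counter the next slot to write. In a lock-free
    SPSC design each counter is an atomic variable owned by exactly one side;
    Python's plain ints here merely illustrate the layout.
    """
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0  # next slot to read (consumer-owned)
        self.tail = 0  # next slot to write (producer-owned)

    def put(self, item):
        if self.tail - self.head == self.capacity:
            return False                       # queue full
        self.buf[self.tail % self.capacity] = item
        self.tail += 1                         # published atomically in a lock-free design
        return True

    def get(self):
        if self.head == self.tail:
            return None                        # queue empty
        item = self.buf[self.head % self.capacity]
        self.head += 1
        return item
```

Because the producer only advances the tail and the consumer only advances the head, the two sides never contend on the same counter, which is what lets the array-queue organization avoid locks.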
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SWATI JAIN whose telephone number is (571)270-0699. The examiner can normally be reached Monday - Friday, 8:30 am - 5:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pan Yuwen, can be reached at 571-272-7855. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SWATI JAIN/
Examiner, Art Unit 2649

/YUWEN PAN/
Supervisory Patent Examiner, Art Unit 2649