Prosecution Insights
Last updated: April 19, 2026
Application No. 18/065,241

HARDWARE ASSISTED EFFICIENT MEMORY MANAGEMENT FOR DISTRIBUTED APPLICATIONS WITH REMOTE MEMORY ACCESSES

Non-Final OA (§101, §103)
Filed: Dec 13, 2022
Examiner: HEADLY, MELISSA A
Art Unit: 2197
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intel Corporation
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 3y 6m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% (306 granted / 408 resolved; +20.0% vs TC avg, above average)
Interview Lift: +40.4% (resolved cases with vs. without interview; a strong lift)
Typical Timeline: 3y 6m avg prosecution; 24 applications currently pending
Career History: 432 total applications across all art units

Statute-Specific Performance

§101: 11.5% (-28.5% vs TC avg)
§103: 58.1% (+18.1% vs TC avg)
§102: 6.7% (-33.3% vs TC avg)
§112: 15.2% (-24.8% vs TC avg)

Tech Center averages are estimates; based on career data from 408 resolved cases.
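As a sanity check, the headline allowance rate above can be reproduced from the raw counts. The sketch below is illustrative only; the 55% Tech Center average is inferred from the stated +20.0% delta against the 75% career rate, not reported directly.

```python
# Reproduce the headline examiner statistics from the counts above.
# The Tech Center average (0.55) is an inference from the stated delta.
granted, resolved = 306, 408
tc_avg = 0.55

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")            # 75.0%
print(f"Delta vs TC avg:   {career_allow_rate - tc_avg:+.1%}")  # +20.0%
```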

Office Action

§101 §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Examiner Notes

Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

The examiner encourages Applicant to submit an authorization to communicate with the examiner via the Internet by making the following statement (from MPEP 502.03): “Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with the undersigned and practitioners in accordance with 37 CFR 1.33 and 37 CFR 1.34 concerning any subject matter of this application by video conferencing, instant messaging, or electronic mail. I understand that a copy of these communications will be made of record in the application file.” Please note that the above statement can only be submitted via Central Fax, regular postal mail, or EFS-Web (PTO/SB/439).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 9-12 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter. During examination, the claims must be interpreted as broadly as their terms reasonably allow. In re American Academy of Science Tech Center, 367 F.3d 1359, 1369, 70 U.S.P.Q.2d 1827, 1834 (Fed. Cir. 2004). Independent claim 9 recites a “semiconductor apparatus,” which is not comprehensively defined by the specification. The broadest reasonable interpretation of a claim drawn to a “semiconductor apparatus” covers software per se in view of the ordinary and customary meaning of “semiconductor apparatus”, particularly when the specification is silent. Software per se is not a “process,” a “machine,” a “manufacture,” or a “composition of matter” as defined in 35 U.S.C. § 101. Examiner suggests adding a recitation of a “processor” or “circuitry.”

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 4-5, 7-10, 12-13, 15-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kaminski et al. (US 2010/0211756) and Wang et al. (US 2020/0192715).

As per claim 1, Kaminski teaches the invention substantially as claimed including a computing system ([0077], NUMA computer system 700) comprising: plurality of processor cores ([0077], A NUMA computer system 700 may include two or more processors (e.g., 710 and 720), each of which may include multiple cores, any of which may be single or multi-threaded); a system bus coupled to the plurality of processor cores ([0077], Each processor (e.g., 710) may be coupled to at least one other processor (e.g., 720) with an interconnect 730, which may comprise a direct link, such as a HyperTransport.TM. bus...
Each processor 710 and 720 may directly access its own memory (740 and 790 respectively) and/or access the memory of the other processors (i.e., remote memory) indirectly over interconnect 730); and a memory management subsystem ([0034], a NUMA-aware heap memory manager may expose an API that software programs may invoke in order to perform various memory management functions), wherein the memory management subsystem includes logic ([0079], Program instructions 750 may comprise software components and/or mechanisms configured to provide functions, operations and/or other processes for implementing program instructions executable to implement a NUMA-aware heap memory manager 770), the logic to: detect a local allocation request associated with a local thread ([0012], a NUMA-aware heap memory manager may attempt to first find a memory block that resides on the same node as the requesting thread (i.e., select the thread's execution node as the allocation node); and [0042], NUMA-aware heap memory management architecture 300 may associate each thread with a unique local thread cache, such as 302, 304, or 306. Each such thread cache may identify one or more free memory blocks on the thread's execution node. These identified blocks may be allocated from the heap manager to the associated thread... if a thread requests that a memory block of 32 KB or smaller be allocated to it, then the heap manager may attempt to locate an appropriate block of memory in the local thread cache corresponding to the thread), detect a remote allocation request associated with a remote thread ([0036], the heap manager may determine that the memory block cannot or should not be allocated from the execution node under certain circumstances... 
If such a determination is made, the heap manager may choose an allocation node that is relatively "close" to the execution node; and [0061], if the heap manager determines that it should not ask the operating system to allocate more memory, for example because a memory usage limit has been exceeded by the application (as indicated by the affirmative exit from 445), then it may attempt to allocate a memory block to the thread from a node other than the execution node (i.e., a remote node), as in 450. The heap manager may accomplish this, for example, by searching the central cache for an appropriate block that is on a node other than the execution node. Doing so may comprise checking a free list associated with a node in the central cache that is not the execution node), and process the local allocation request ([0013], Once the heap manager selects an allocation node, it may locate a memory block of the given size on the allocation node and allocate the block to the requesting thread; and [0049], the heap manager may determine if a block of the requested size is available in the local thread cache corresponding to the requesting thread, as in 410. 
If a properly sized block is available, as indicated by the affirmative exit from 410, the heap manager may allocate the block to the thread, as in 415) and the remote allocation request ([0013], Once the heap manager selects an allocation node, it may locate a memory block of the given size on the allocation node and allocate the block to the requesting thread; and [0061], if the heap manager determines that it should not ask the operating system to allocate more memory, for example because a memory usage limit has been exceeded by the application (as indicated by the affirmative exit from 445), then it may attempt to allocate a memory block to the thread from a node other than the execution node (i.e., a remote node)) with respect to a central heap ([0014], each node may be associated with one or more free block listings (e.g., a central cache). In such embodiments, if a satisfactory block cannot be found in a local thread cache, then the heap manager may attempt to locate a satisfactory free memory block in one or more listings in a central cache; [0046], in central cache 310, the heap manager may keep a separate list of free page spans for each node (e.g., free page spans lists 344-348). In such embodiments, each such list may track free page spans on the associated node; and [0061], searching the central cache for an appropriate block that is on a node other than the execution node. Doing so may comprise checking a free list associated with a node in the central cache that is not the execution node; Examiner Note: Under the broadest reasonable interpretation, the claimed “central heap” can be interpreted to include Kaminski’s “central cache” because Applicant’s specification describes its “central heap” as a data structure and a cache. 
Applicant’s specification also manages its “central heap” equivalent to Kaminski’s management of its “central cache”: Applicant’s Specification: [0021], Objects are moved from a central heap 24 (e.g., shared data structure) into the local thread caches 22 as needed; [0022], When the local thread cache cannot satisfy the request, the allocator transitions to a central heap such as, for example, the central heap 24 (FIG. 1), wherein the central heap is shared by multiple threads; [0023], the central heap includes the set of free lists 40; and [0025], transfer cache (e.g., central heap) in hardware), wherein the central heap is shared by the local thread and the remote thread ([0014], each node may be associated with one or more free block listings (e.g., a central cache); and [0061], searching the central cache for an appropriate block that is on a node other than the execution node. Doing so may comprise checking a free list associated with a node in the central cache that is not the execution node).

Kaminski fails to specifically teach, a memory management subsystem coupled to the system bus, wherein the memory management subsystem includes logic coupled to one or more substrates; and wherein the remote allocation request bypasses a remote operating system. However, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the step of “wherein the remote allocation request bypasses a remote operating system” because Kaminski teaches direct communication with a remote memory rather than memory allocation from a local operating system ([0050], if the requested allocation is not small, the heap manager may go directly to the page heap to locate an appropriate block, bypassing the thread's local cache and the central cache; [0061], if the heap manager determines that it should not ask the operating system to allocate more memory... then it may attempt to allocate a memory block to the thread from a node other than the execution node (i.e., a remote node)).

Furthermore, Wang teaches, a memory management subsystem ([0105], FIG. 11 depicts a system. The system can use embodiments described herein to allocate memory and/or to caching of relevant content prior to a processing operation. For example, an accelerator 1142 can include a work scheduler or queue management device that manages memory allocation and/or caches relevant content prior to processing, in accordance with embodiments described herein) coupled to the system bus ([0109], system 1100 can include one or more buses or bus systems between devices, such as a memory bus, a graphics bus, interface buses, or others. Buses or other signal lines can communicatively or electrically couple components together, or both communicatively and electrically couple the components), wherein the memory management subsystem includes logic coupled to one or more substrates ([0110], system 1100 includes interface 1114, which can be coupled to interface 1112. In one example, interface 1114 represents an interface circuit, which can include standalone components and integrated circuitry); and wherein the remote allocation request bypasses a remote operating system ([0048], A core can request work scheduler 600 for a memory allocation. In some examples, where a core executes an OS and user application, the user application can directly message work scheduler 600 instead of requesting the OS to provide a memory management command to work scheduler 600 via a system call. In some cases, for memory allocation or deallocation, the user application calls a memory allocation library for execution in user space, which in turn calls work scheduler 600 for a memory allocation; and [0128], network interface and other embodiments described herein can be used in connection with ...hybrid data centers (e.g., data center that use virtualization, cloud and software-defined networking to deliver application workloads across physical data centers and distributed multi-cloud environments)).

Wang also teaches, process the local allocation request and the remote allocation request with respect to a central heap ([0019], memory segments in central heap 204 may need to be accessed if cache 202-0 or 202-1 do not have a memory segment to allocate; and [0025], work scheduler can use command queues to receive commands from various cores or software for memory allocation; Examiner Note: Wang uses its central cache as its central heap: [0021], if the linked list of the local thread cache is empty, a central cache can be accessed to obtain new memory objects... The central cache can distribute objects to a local thread cache if requested. A central cache can be shared by multiple threads; [0024], CPU can offload to a work scheduler management of allocation or deallocation of a central cache; [0025], a work scheduler can use a volatile memory (e.g., static random access memory (SRAM) or other memory) to allocate multiple logical queues to represent available memory objects of a central cache and page heap; and [0046], Central cache 530 can be implemented as a set of queues that store memory objects), wherein the central heap is shared by the local thread and the remote thread ([0019], memory segments in central heap 204 may need to be accessed if cache 202-0 or 202-1 do not have a memory segment to allocate; and [0021], A central cache can be shared by multiple threads).

Kaminski and Wang are analogous because they are each related to memory management. Kaminski teaches a method of memory management including local and remote memory allocation for distributed threads. (Abstract, system and method for allocating memory to multi-threaded programs on a Non-Uniform Memory Access (NUMA) computer system using a NUMA-aware memory heap manager is disclosed. In embodiments, a NUMA-aware memory heap manager may attempt to maximize the locality of memory allocations in a NUMA system by allocating memory blocks that are near, or on the same node, as the thread that requested the memory allocation. A heap manager may keep track of each memory block's location and satisfy allocation requests by determining an allocation node dependent, at least in part, on its locality to that of the requesting thread. When possible, a heap manager may attempt to allocate memory on the same node as the requesting thread). Wang teaches a method of local and remote memory allocation including utilizing a central heap/cache and page heap. ([0019], A software thread can obtain a memory object from local cache 202-0 or 202-1 without accessing and locking any centralized data structure. However, memory segments in central heap 204 may need to be accessed if cache 202-0 or 202-1 do not have a memory segment to allocate; [0022], If there are no free objects in a central cache, or if there is a request for a large object (e.g., larger than objects in the central cache), a page heap can be accessed; [0044], when processor-executed software cannot allocate memory objects from the local thread cache due to empty object lists, the software sends the allocation request to QMD 500. For example, when the processor-executed software deallocates a memory object but its local thread cache is full, the software sends the deallocation request to the QMD 500 to provide the objects back to the central cache 530).
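The allocation cascade that these mappings describe (local thread cache first, then the central heap's per-node free lists, then a remote node's free list consulted directly rather than through a remote OS system call) can be sketched as follows. All class and variable names are hypothetical, and the logic is a deliberate simplification of what Kaminski and Wang actually disclose.

```python
# Hypothetical sketch of the NUMA-aware allocation cascade discussed above:
# local thread cache -> central heap (per-node free lists) -> remote node.
from collections import defaultdict

class HeapManager:
    """Toy NUMA-aware heap manager (size classes and locking omitted)."""
    def __init__(self):
        self.thread_caches = defaultdict(list)  # thread id -> free block sizes
        self.central_heap = defaultdict(list)   # node id -> shared free-block sizes

    def add_free_block(self, node, size):
        self.central_heap[node].append(size)

    def allocate(self, thread, node, size):
        # 1. Local thread cache first: no shared structure is touched.
        cache = self.thread_caches[thread]
        for i, blk in enumerate(cache):
            if blk >= size:
                return cache.pop(i)
        # 2. Central heap: the free list for the thread's execution node.
        for i, blk in enumerate(self.central_heap[node]):
            if blk >= size:
                return self.central_heap[node].pop(i)
        # 3. Remote fallback: consult another node's free list directly,
        #    i.e., without a system call into the remote node's OS.
        for other, blocks in self.central_heap.items():
            if other == node:
                continue
            for i, blk in enumerate(blocks):
                if blk >= size:
                    return blocks.pop(i)
        return None  # a page-heap or local-OS request would follow here

mgr = HeapManager()
mgr.add_free_block(node=1, size=32)
blk = mgr.allocate(thread="t0", node=0, size=32)  # satisfied from remote node 1
```

The point of contention in the rejection is step 3: the request to the remote node is serviced from a shared data structure rather than by asking the remote operating system for memory.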
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that based on the combination, the heap manager and central cache of Kaminski would be modified with the central heap/cache and page heap mechanisms taught by Wang resulting in a system that can allocate memory to various threads using a central heap/cache and a page heap in order to service memory allocation requests. Therefore, it would have been obvious to combine the teachings of Kaminski and Wang.

As per claim 2, Kaminski teaches, wherein the local allocation request and the remote allocation request ([0025], work scheduler can use command queues to receive commands from various cores or software for memory allocation) include one or more of a first request to allocate a memory block of a specified size ([0012], Once the heap manager selects an allocation node, it may locate a memory block of the given size on the allocation node and allocate the block to the requesting thread), a second request to allocate multiple memory blocks of a same size, a third request to resize a previously allocated memory block, or a fourth request to deallocate the previously allocated memory block.

As per claim 4, Wang teaches, wherein the remote allocation request is to be received via a network interface card (NIC) ([0047], when a core-executed application requests a memory allocation via work request queue 620, work scheduler 600 can request an operating system (OS) kernel to enqueue objects by providing a work request to work request queue 620; [0110], Network interface 1150 can transmit data to a device that is in the same data center or rack or a remote device, which can include sending data stored in memory. Network interface 1150 can receive data from a remote device, which can include storing received data into memory.
Various embodiments can be used in connection with network interface 1150; and [0128], network interface and other embodiments described herein can be used in connection with ...hybrid data centers (e.g., data center that use virtualization, cloud and software-defined networking to deliver application workloads across physical data centers and distributed multi-cloud environments)), and wherein the logic is to: update memory bin information ([0078], work scheduler 910 can re-arrange workloads in a work queue or move a workload to another work queue and update the position in a work queue), and send a memory buffer pointer to the NIC ([0020], when software requests a memory allocation for memory of a certain size, one of the linked lists that matches the requested size will provide an object from the linked list, and provide a pointer of the object back to the requester; and [0110], Network interface 1150 can transmit data to a device that is in the same data center or rack or a remote device, which can include sending data stored in memory. Network interface 1150 can receive data from a remote device, which can include storing received data into memory. Various embodiments can be used in connection with network interface 1150).

As per claim 5, Wang teaches, wherein the logic is to: process the local allocation request with respect to a page heap if the central heap cannot satisfy the local allocation request ([0022], If there are no free objects in a central cache, or if there is a request for a large object (e.g., larger than objects in the central cache), a page heap can be accessed. A page heap can contain a linked list of different sizes of memory spans), process the remote allocation request with respect to the page heap if the central heap cannot satisfy the remote allocation request ([0022], If there are no free objects in a central cache, or if there is a request for a large object (e.g., larger than objects in the central cache), a page heap can be accessed; and [0025], work scheduler can use command queues to receive commands from various cores or software for memory allocation...work scheduler can use a response queue to provide responses to received commands).

As per claim 7, Kaminski teaches, wherein the logic is to prioritize the remote allocation request over the local allocation request ([0037], the heap manager may determine that the memory block cannot or should not be allocated from the execution node under certain circumstances. Such a decision may be made if the heap manager determines that allocation on the execution node is undesirable or impossible due to one or more exceptional factors, such as the level of free memory on the execution node dropping below a given level. In such embodiments, the decision may be based on various criteria, such as a determination that the application may be allocating too much memory from the operating system, which can result in other components running out of memory. If such a determination is made, the heap manager may choose an allocation node that is relatively "close" to the execution node).

As per claim 8, Wang teaches, wherein the logic is to: generate a first profile for the local thread, generate a second profile for the remote thread ([0035], An incoming request has a priority that is either pre-assigned by the requesting core/thread or assigned by QMD 500 upon receipt by QMD 500.
A request is then stored in a buffer that corresponds to the request's priority (1-n) and/or type (enqueue or dequeue)), and proactively allocate one or more memory bins based on the first profile and the second profile ([0036], Scheduler 516 chooses a buffer among buffers 514 and selects one or more requests from the head of buffer. The buffer is chosen according to a scheduling policy. Various scheduling policies, such as Round Robin, Weighted Round Robin, preemptive priority, and a combination of these and other policies may be implemented... In a Weighted Round Robin policy, scheduler 516 chooses and serves each buffer sequentially based on their associated priority. The ability to control the order in which to serve the buffers is called request-level flow control. After choosing a buffer and selecting one or more requests from the head of the chosen buffer, the scheduler 516 schedules each selected requests for execution by either the enqueue circuitry 518 or dequeue circuitry 520 according to the request type).

As per claim 9, this is the “semiconductor apparatus claim” corresponding to claim 1 and is rejected for the same reasons. The same motivation used in the rejection of claim 1 is applicable to the instant claim.

As per claim 10, this claim is similar to claim 2 and is rejected for the same reasons.
As per claim 12, this claim is similar to claim 4 and is rejected for the same reasons.
As per claim 13, this claim is similar to claim 5 and is rejected for the same reasons.
As per claim 15, this claim is similar to claim 7 and is rejected for the same reasons.
As per claim 16, this claim is similar to claim 8 and is rejected for the same reasons.

As per claim 17, this is the “method claim” corresponding to claim 1 and is rejected for the same reasons. The same motivation used in the rejection of claim 1 is applicable to the instant claim.
As per claim 18, this claim is similar to claim 2 and is rejected for the same reasons.
As per claim 20, this claim is similar to claim 4 and is rejected for the same reasons.

Claims 3, 11, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Kaminski-Wang as applied to independent claims 1, 9, and 17 and in further view of Tsirkin et al. (US 2013/0138760).

As per claim 3, Wang teaches, wherein the local allocation request is to be received via an allocator library ([0048], the user application calls a memory allocation library for execution in user space, which in turn calls work scheduler 600 for a memory allocation), and wherein the logic is to: write a memory pointer to a completion record ([0020], when software requests a memory allocation for memory of a certain size, one of the linked lists that matches the requested size will provide an object from the linked list, and provide a pointer of the object back to the requester; [0025], work scheduler can use command queues to receive commands from various cores or software for memory allocation ... The work scheduler can use a response queue to provide responses to received commands; and [0038], A queue entry can include metadata and opaque data... Information contained in each queue entry's metadata is used by QMD 500 to perform enqueue and dequeue related operations on that entry. The opaque data portion contains... pointers to actual data, to be shared with consumer core, thread, device, etc. via a dequeue request) that is accessible by the allocator library ([0020], when software requests a memory allocation for memory of a certain size, one of the linked lists that matches the requested size will provide an object from the linked list, and provide a pointer of the object back to the requester; and [0048], the user application calls a memory allocation library for execution in user space, which in turn calls work scheduler 600 for a memory allocation).
Wang fails to specifically teach, issue an interrupt to the allocator library if the allocator library is operating in a non-polling mode. However, Tsirkin teaches, issue an interrupt to the allocator library if the allocator library is operating in a non-polling mode ([0004], interrupt requests (IRQ) are triggered to notify the computer system that there are data packets pending in the device queues, whereby IRQs are transmitted to the processor of the computer system for handling; and [0049], interrupt manager 252 may disable (block 410) polling of those shared device queues 254 for which the request applies and re-enable (block 412) device interrupts for messages received in shared device queues 254 that are no longer being polled).

The combination of Kaminski-Wang and Tsirkin are analogous because they are each related to memory management. Kaminski teaches a method of memory management including local and remote memory allocation for distributed threads. Wang teaches a method of local and remote memory allocation including utilizing a central heap/cache and page heap. Kaminski also teaches an allocator library used for memory allocation. ([0033], embodiments of a NUMA-aware heap memory management library that is not application specific, may be general-purpose and used by arbitrary software applications without modification to the application's source code or to the heap manager; and [0034], each call of the software application to a memory management function may invoke the corresponding functions of the NUMA-aware heap memory manager rather than those of the NUMA-unaware standard libraries). Wang also teaches an allocator library used for memory allocation among work queues.
([0024], The work scheduler can queue operations for core-to-core communications and schedule work to various cores or other devices; and [0048], the user application calls a memory allocation library for execution in user space, which in turn calls work scheduler 600 for a memory allocation). Tsirkin teaches a method of memory management that includes dynamically enabling a polling mode or interrupt functionality for a queue scheduler. (Abstract, One or more applications running in non-virtualized or virtualized computing environments may be adapted to enable methods for polling shared device queues. Applications adapted to operate in a polling mode may transmit a request to initiate polling of shared device queues, wherein operating in the polling mode disables corresponding device interrupts; and [0049], interrupt manager 252 may disable (block 410) polling of those shared device queues 254 for which the request applies and re-enable (block 412) device interrupts for messages received in shared device queues 254 that are no longer being polled).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that based on the combination, the scheduling mechanism of the allocation library taught by the combination of Kaminski-Wang would be modified with the polling/interrupt mechanisms taught by Tsirkin resulting in a system that can dynamically use a polling mode or issue interrupts to a scheduling mechanism. Therefore, it would have been obvious to combine the teachings of the combination of Kaminski-Wang and Tsirkin.

As per claim 11, this claim is similar to claim 3 and is rejected for the same reasons. The same motivation used in the rejection of claim 3 is applicable to the instant claim.
As per claim 19, this claim is similar to claim 3 and is rejected for the same reasons. The same motivation used in the rejection of claim 3 is applicable to the instant claim.
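The notification scheme at issue in claims 3, 11, and 19 (write a pointer to a completion record that the allocator library can read, and raise an interrupt only when the library is not polling that record) can be sketched as follows. This is a hypothetical model, with the interrupt represented as a plain callback; none of the names come from the references or the application.

```python
# Hypothetical sketch of completion-record notification: the pointer is
# always written to the record; an "interrupt" (modeled as a callback) is
# raised only in non-polling mode, mirroring Tsirkin's re-enabled device
# interrupts for queues that are no longer being polled.

class CompletionRecord:
    """Holds the result pointer that an allocator library can poll."""
    def __init__(self):
        self.pointer = None

class MemoryManager:
    def __init__(self, interrupt_handler):
        self.interrupt_handler = interrupt_handler  # used only in non-polling mode

    def complete(self, record, pointer, polling):
        record.pointer = pointer  # always write the result to the completion record
        if not polling:
            # Non-polling mode: the library is not watching the record,
            # so it must be notified via an interrupt.
            self.interrupt_handler(record)

interrupts = []
mm = MemoryManager(interrupt_handler=lambda rec: interrupts.append(rec.pointer))

rec = CompletionRecord()
mm.complete(rec, pointer=0x1000, polling=True)   # library polls; no interrupt raised
mm.complete(rec, pointer=0x2000, polling=False)  # non-polling mode; interrupt raised
```

In polling mode the library simply re-reads `record.pointer` until it is set, which is why the interrupt can be suppressed without losing the result.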
Claims 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Kaminski-Wang as applied to independent claims 1 and 9 and in further view of Harris et al. (US 2016/0070593).

As per claim 6, the combination of Kaminski-Wang fails to specifically teach, wherein the logic is to: monitor the page heap for an exhaustion condition, and send an out of band message to a local operating system in response to the exhaustion condition. However, Harris teaches, wherein the logic is to: monitor the page heap for an exhaustion condition ([0124], a garbage collection coordinator process may receive notifications of heap usage), and send an out of band message to a local operating system in response to the exhaustion condition ([0123], a “stop the world everywhere” policy may be implemented within coordinated garbage collection by using a broadcast message at the start of collection on any node; and [0124], a “stop the world everywhere” policy may be implemented in a variety of different ways. For example, in some embodiments, a garbage collection coordinator process may receive notifications of heap usage, and may broadcast a request to stop to all of the machines to perform garbage collection when any one of them gets to the point at which it needs to collect).

The combination of Kaminski-Wang and Harris are analogous because they are each related to memory management. Kaminski teaches a method of memory management including local and remote memory allocation for distributed threads. Harris teaches a method of memory management that includes heap management.
([0128], a node that reaches a predetermined maximum heap occupancy threshold may, itself, broadcast a message to all of the virtual machine instances on which the distributed application is executing (e.g., in embodiments that do not include a centralized or designated garbage collection coordinator) in order to trigger a collection on all of the virtual machine instances at substantially the same time).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that, based on the combination, the heap management of the allocation library taught by the combination of Kaminski-Wang would be modified with the heap-monitoring and broadcast-notification mechanisms taught by Harris, resulting in a system that can detect an exhaustion condition on the page heap and send an out-of-band message in response. Therefore, it would have been obvious to combine the teachings of the combination of Kaminski-Wang and Harris.

As per claim 14, this claim is similar to claim 6 and is rejected for the same reasons. The same motivation used in the rejection of claim 6 is applicable to the instant claim.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MELISSA A HEADLY, whose telephone number is (571) 272-1972. The examiner can normally be reached Monday through Friday, 9:00 am to 5:30 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bradley Teets, can be reached at 571-272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MELISSA A HEADLY/Examiner, Art Unit 2197
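Returning to the Harris citations above ([0123]-[0128]): the coordinated scheme they describe, in which any node that reaches a maximum heap occupancy threshold broadcasts a "stop the world everywhere" request so that every instance collects at substantially the same time, can be sketched as follows. This is an editorial illustration only; the `Node` and `Cluster` names and the page-count heap model are hypothetical, taken from neither Harris nor the application.

```python
class Node:
    """One VM instance of a distributed application, with a bounded heap
    measured in pages (a simplification for illustration)."""

    def __init__(self, name, heap_limit):
        self.name = name
        self.heap_limit = heap_limit
        self.heap_used = 0

    def allocate(self, pages, cluster):
        """Allocate pages; on reaching the occupancy threshold, notify
        every node out of band, as in Harris [0128] (no central
        coordinator: the exhausted node broadcasts itself)."""
        self.heap_used += pages
        if self.heap_used >= self.heap_limit:
            cluster.broadcast_stop_the_world()


class Cluster:
    """The set of VM instances running the distributed application."""

    def __init__(self, nodes):
        self.nodes = nodes

    def broadcast_stop_the_world(self):
        """'Stop the world everywhere' (Harris [0123]-[0124]): trigger a
        collection on all instances at substantially the same time.
        Collection is modeled as simply emptying each heap."""
        for node in self.nodes:
            node.heap_used = 0
```

The design point the reference makes is that the exhaustion condition on any one node drives a cluster-wide message, so no instance is left running against another instance's paused heap.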

Prosecution Timeline

Dec 13, 2022: Application Filed
Jan 17, 2023: Response after Non-Final Action
Feb 20, 2026: Non-Final Rejection, §101 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602242: OPTIMIZED SYSTEM DESIGN FOR DEPLOYING AND MANAGING CONTAINERIZED WORKLOADS AT SCALE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12591447: HARDWARE APPARATUS FOR ISOLATED VIRTUAL ENVIRONMENTS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585554: SERVER GROUP SELECTION SYSTEM, SERVER GROUP SELECTION METHOD, AND PROGRAM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12578989: EFFICIENT INITIATION OF AUTOMATED PROCESSES (granted Mar 17, 2026; 2y 5m to grant)
Patent 12578984: Virtual Machine Register in a Computer Processor (granted Mar 17, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
75%
Grant Probability
99%
With Interview (+40.4%)
3y 6m
Median Time to Grant
Low
PTA Risk
Based on 408 resolved cases by this examiner. Grant probability derived from career allow rate.
