Prosecution Insights
Last updated: April 19, 2026
Application No. 17/546,643

ASYNCHRONOUS MEMORY ALLOCATION

Non-Final OA (§102, §103)

Filed: Dec 09, 2021
Examiner: VINCENT, ROSS MICHAEL
Art Unit: 2196
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation
OA Round: 3 (Non-Final)

Grant Probability: 54% (Moderate)
OA Rounds: 3-4
To Grant: 3y 5m
With Interview: 90%

Examiner Intelligence

Grants 54% of resolved cases.

Career Allow Rate: 54% (12 granted / 22 resolved; -0.5% vs TC avg)
Interview Lift: +35.9% (strong), comparing allow rates for resolved cases with vs. without an interview
Typical Timeline: 3y 5m avg prosecution; 32 applications currently pending
Career History: 54 total applications across all art units

Statute-Specific Performance

§101: 22.7% (-17.3% vs TC avg)
§102: 8.2% (-31.8% vs TC avg)
§103: 57.4% (+17.4% vs TC avg)
§112: 11.4% (-28.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 22 resolved cases.

Office Action

Grounds: §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-12, 14-17, 20, 22, 24, 25, 29, 30, and 32 have been amended. No new claims have been added. No claims have been canceled. Claims 1-32 are currently pending for examination.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 7/25/2025 has been entered.

Response to Arguments

Applicant's arguments, pgs. 7-9, with respect to claims 1, 9, 17, and 25 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant's arguments, pgs. 7-9, with respect to claims 2-8, 10-16, 18-24, and 26-32 have been considered; however, in light of the new grounds of rejection of the independent claims, the dependent claims are still rejected under 35 USC 103.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 9, 17, and 25 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Bernhard (US 20120179882 A1).

As per claim 1, Bernhard discloses: one or more processors, comprising: one or more circuits to, in response to a call to asynchronously allocate memory in an order to one or more execution streams, perform an application programming interface ("API") to cause one or more memory locations to be asynchronously allocated in the order to the one or more execution streams ("Thus processes taught by the discussion above may be performed with program code such as machine-executable instructions that cause a machine that executes these instructions to perform certain functions. In this context, a "machine" may be a machine that converts intermediate form (or "abstract") instructions into processor specific instructions (e.g., an abstract execution environment such as a "virtual machine" (e.g., a Java Virtual Machine), an interpreter, a Common Language Runtime, a high-level language virtual machine, etc.), and/or, electronic circuitry disposed on a semiconductor chip (e.g., "logic circuitry" implemented with transistors) designed to execute instructions such as a general-purpose processor and/or a special-purpose processor.", 0066; "In one embodiment, a malloc manager can manage memory allocated via memory management libraries 303. Thus, API calls, e.g. for allocating/freeing memory pages, via common memory management libraries from each application (such as application 301 via memory management libraries 303) may be forwarded to the malloc manager which implements specific memory management capabilities for multiple applications. The malloc manager may communicate asynchronously with kernel 113 to request memory pages or release memory pages.", 0049; "In one embodiment, applications (e.g. currently active or running) may be ordered in a special queue associated with a memory management library commonly linked in these applications to allocate and free memory. Particularly, the memory management library may implement legacy API calls such as malloc( ) and free( ) in the applications. In one embodiment, memory space allocated and freed via the memory management library or the legacy API calls may accumulate or be cached within the memory management library without being returned back to the kernel for other queues. The special queue may be based on amounts of memory allocated or accumulated via API calls to the memory management library from the applications.", 0034; "In an alternative embodiment, one or more queues representing ordered relationships among separate groups of running applications may be maintained in a data processing system having a level of memory usage", 0012; Examiner Note: as an execution stream is interpreted to be a FIFO queue of commands that run asynchronously on a device, the ordered special queue equates to an execution stream).

As to claim 9, it is a method claim whose limitations are substantially the same as those of claim 1. Accordingly, it is rejected for substantially the same reasons.

As to claim 17, it is a system claim whose limitations are substantially the same as those of claim 1. Accordingly, it is rejected for substantially the same reasons.

As to claim 25, it is a machine-readable medium claim (see Bernhard: "An article of manufacture may be used to store program code. An article of manufacture that stores program code may be embodied as, but is not limited to, one or more memories (e.g., one or more flash memories, random access memories (static, dynamic or other)), optical disks, CD-ROMs, DVD ROMs, EPROMs, EEPROMs, magnetic or optical cards or other type of machine-readable media suitable for storing electronic instructions.", 0067) whose limitations are substantially the same as those of claim 1. Accordingly, it is rejected for substantially the same reasons.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-8, 10-16, 18-24, and 26-32 are rejected under 35 U.S.C. 103 as being unpatentable over Bernhard (US 20120179882 A1) in view of Wolfe, "Implementing the OpenACC Data Model".

As to claim 2, Bernhard fully discloses the limitations of claim 1, but does not disclose a processor comprising a GPU. However, Wolfe discloses: the one or more processors of claim 1, wherein a graphics processing unit ("GPU") comprises the one or more execution streams (e.g., Wolfe, Section B,1: The OpenARC compiler supports OpenCL on NVIDIA/AMD GPUs, Intel Xeon Phi Coprocessors, and Altera FPGAs; Examiner Note: running CUDA on NVIDIA/AMD GPUs provides the capability to manage one or more execution streams). It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Bernhard with those of Wolfe, in order to provide the improved memory management performance associated with using a second memory pool (Wolfe, Section IV.F.1).

As to claim 3, Bernhard fully discloses the limitations of claim 1, but does not disclose a virtual memory address. However, Wolfe discloses: the one or more memory locations are to be asynchronously allocated using a virtual memory address provided in response to the API (e.g., Wolfe, Section B,1: The current OpenARC runtime implements the virtual device address space using the CPU malloc() calls [note that the malloc subsystem is a memory management API]).

As to claim 4, Bernhard fully discloses the limitations of claim 1, but does not disclose the use of backing memory. However, Wolfe discloses: the one or more memory locations are to be asynchronously allocated using backing memory allocated from a memory pool (e.g., Wolfe, Section D,3: The PGI implementation allows for asynchronous free operations. As described earlier, the runtime uses asynchronous data transfers to internal pinned buffers. The runtime saves a descriptor, so that at some later point, such as at a synchronization or when the buffer is needed, the runtime can copy the data from the buffer to the user memory).

As to claim 5, Bernhard fully discloses the limitations of claim 1, but does not disclose the use of backing memory. However, Wolfe discloses: the one or more memory locations are to be asynchronously allocated using backing memory allocated when a process executes on the one or more execution streams (e.g., Wolfe, Section A,2: The region stack can guarantee that the data list created at the entry of a region (be it data or compute region) will be freed at the exit of the same region… The device memory is allocated for these data and copied to the device as necessary (for copy and copyin clauses) [note that data which is copied as necessary equates to backing memory]). Furthermore, Bernhard discloses: memory allocated when a process executes on the one or more execution streams ("Usually the kernel may determine which process is executed at what time in the system.", 0028; "In an alternative embodiment, one or more queues representing ordered relationships among separate groups of running applications may be maintained in a data processing system having a level of memory usage.", 0012).

As to claim 6, Bernhard fully discloses the limitations of claim 1, but does not disclose the specified parameters being indicated by the API. However, Wolfe discloses: the API at least indicates a location used to return an asynchronously allocated memory address, a size of memory requested, and the order to be used by the one or more execution streams (e.g., Wolfe, Section B,3: OpenUH compiler generates OpenCL runtime function calls to allocate and free data memory; Section B,3: In the OpenUH implementation, the structure of the data not only includes the host address, device address and data size…; Section A,2: The OpenUH runtime maintains a region stack to track the region chain and the new data created within each region. The region stack can guarantee that the data list created at the entry of a region (be it data or compute region) will be freed at the exit of the same region [note that the region stack equates to a stream order]).

As to claim 7, Bernhard fully discloses the limitations of claim 1, but does not disclose GPU memory. However, Wolfe discloses: the one or more memory locations are memory locations in GPU memory (e.g., Wolfe, Section II, Par. 7: Current NVIDIA GPUs support a shared address space between the CPU and GPU, called CUDA Unified Memory).

As to claim 8, Bernhard fully discloses the limitations of claim 1, but does not disclose the one or more memory locations use backing memory returned to a memory pool after a previous allocation. However, Wolfe discloses: the one or more memory locations use backing memory returned to a memory pool after a previous allocation (e.g., Wolfe, Section B, Par. 1: memory deallocation is done by passing the allocation address to a free routine; Section A,2: The region stack can guarantee that the data list created at the entry of a region (be it data or compute region) will be freed at the exit of the same region... All the newly created data in the region level i are appended to a linked list and then inserted into the dynamic hash table. The device memory is allocated for these data and copied to the device as necessary (for copy and copyin clauses)).

As to claim 10, it is a method claim whose limitations are substantially the same as those of claim 2. Accordingly, it is rejected for substantially the same reasons.

As to claim 11, Bernhard fully discloses the limitations of claim 9, but does not disclose a PPU. However, Wolfe discloses: a parallel processing unit ("PPU") comprises the one or more execution streams (e.g., Wolfe, Section B,1: The OpenARC compiler supports OpenCL on NVIDIA/AMD GPUs, Intel Xeon Phi Coprocessors, and Altera FPGAs; Examiner Note: running CUDA on NVIDIA/AMD GPUs, which are necessarily PPUs, provides the capability to manage one or more execution streams).

As to claim 12, Bernhard fully discloses the limitations of claim 9, but does not disclose generating a virtual memory address in response to the API. However, Wolfe discloses: generating a virtual memory address in response to the API; and providing the virtual memory address to a process executing on the one or more execution streams (e.g., Wolfe, Section B,2: Because the fake device address cannot be used in any user-written OpenCL kernels, the runtime also provides an API routine that returns the OpenCL handle and offset corresponding to the fake device address. The current OpenARC runtime implements the virtual device address space using the CPU malloc() calls; when device memory is allocated, the OpenARC runtime also allocates dummy space in the CPU memory space, and uses the allocated address as a fake device address). Furthermore, Bernhard discloses: a process executing on the one or more execution streams ("Usually the kernel may determine which process is executed at what time in the system.", 0028; "In one embodiment, applications (e.g. currently active or running) may be ordered in a special queue associated with a memory management library commonly linked in these applications to allocate and free memory.", 0034).

As to claim 13, Bernhard fully discloses the limitations of claim 9, but does not disclose allocating backing memory from a memory pool. However, Wolfe discloses: creating a memory pool; allocating backing memory from the memory pool; and associating the backing memory with the one or more memory locations (e.g., Wolfe, Section A,2: The region stack can guarantee that the data list created at the entry of a region (be it data or compute region) will be freed at the exit of the same region. Figure 5a shows an example OpenACC code and Figure 5b shows the structure of the region stack. Whenever a new data region is encountered, a region pointer is pushed into the region stack. If the regions are not nested, then they are pushed into the stack in sequence. All the newly created data in the region level i are appended to a linked list and then inserted into the dynamic hash table. The device memory is allocated for these data and copied to the device as necessary).

As to claim 14, Bernhard fully discloses the limitations of claim 9, but does not disclose allocating backing memory asynchronously when a process begins execution on the one or more processors. However, Wolfe discloses: allocating backing memory asynchronously when a process begins execution on the one or more execution streams (e.g., Wolfe, Section A,2: The region stack can guarantee that the data list created at the entry of a region (be it data or compute region) will be freed at the exit of the same region. Figure 5a shows an example OpenACC code and Figure 5b shows the structure of the region stack. Whenever a new data region is encountered, a region pointer is pushed into the region stack. If the regions are not nested, then they are pushed into the stack in sequence. All the newly created data in the region level i are appended to a linked list and then inserted into the dynamic hash table. The device memory is allocated for these data and copied to the device as necessary (for copy and copyin clauses)). Wolfe does not explicitly disclose processes running on execution streams; however, Bernhard discloses: "In one embodiment, applications (e.g. currently active or running) may be ordered in a special queue associated with a memory management library commonly linked in these applications to allocate and free memory.", 0034.

As to claim 15, Bernhard fully discloses the limitations of claim 9, but does not disclose deallocating backing memory asynchronously when a process completes execution on the one or more processors. However, Wolfe discloses: deallocating backing memory asynchronously when a process completes execution on the one or more execution streams (e.g., Wolfe, Section A,2: The region stack can guarantee that the data list created at the entry of a region (be it data or compute region) will be freed at the exit of the same region). Wolfe does not explicitly disclose processes running on execution streams; however, Bernhard discloses: "In one embodiment, applications (e.g. currently active or running) may be ordered in a special queue associated with a memory management library commonly linked in these applications to allocate and free memory.", 0034.

As to claim 16, Bernhard fully discloses the limitations of claim 9, but does not disclose determining whether a set of allocated memory from a memory pool will be available when a process begins execution on the one or more processors. However, Wolfe discloses: determining, based at least in part on a stream order specified in the API, whether a set of allocated memory from a memory pool will be available when a process begins execution on the one or more execution streams; and allocating backing memory asynchronously from the set of allocated memory when the process begins execution based at least in part on the determination (e.g., Wolfe, Section D,1: Asynchronous free operations are implemented by putting the host address and the associated async queue in a postponed-free table. Synchronization operations check the postponed-free table and perform pending free operations associated with the synchronizations).

As to claim 18, it is a system claim whose limitations are substantially the same as those of claim 2. Accordingly, it is rejected for substantially the same reasons.

As to claim 19, it is a system claim whose limitations are substantially the same as those of claim 7. Accordingly, it is rejected for substantially the same reasons.

As to claim 20, Bernhard fully discloses the limitations of claim 17, but does not disclose that the API indicates the order in one or more of the one or more executable instructions. However, Wolfe discloses: the API indicates the order in one or more of the one or more executable instructions (e.g., Wolfe, Section A,2: The OpenUH runtime maintains a region stack to track the region chain and the new data created within each region. The region stack can guarantee that the data list created at the entry of a region (be it data or compute region) will be freed at the exit of the same region. Figure 5a shows an example OpenACC code and Figure 5b shows the structure of the region stack [note that the region stack equates to a stream order]).

As to claim 21, Bernhard fully discloses the limitations of claim 17, but does not disclose the API indicates a memory pool usable to asynchronously allocate the one or more memory locations. However, Wolfe discloses: the API indicates a memory pool usable to asynchronously allocate the one or more memory locations (e.g., Bardakoff, abstract: Hedgehog has asynchronicity built in; Section E: The API provides a mechanism to recycle memory back to the memory manager, which will add memory back into the pool and signal to a task that is waiting. This allows a waiting task to obtain the needed memory and resume execution. The recycle mechanism provides two steps: (1) complete the allocation-deallocation cycle to eliminate memory leaks and (2) update the state of the memory. The state is updated when memory is returned to a memory manager).

As to claim 22, Bernhard fully discloses the limitations of claim 17, but does not disclose the one or more executable instructions at least include executable instructions to execute a process on the one or more processors. However, Wolfe discloses: the one or more executable instructions at least include executable instructions to execute a process on the one or more execution streams (e.g., Wolfe, abstract: This paper describes how the data model is supported in current OpenACC implementations, ranging from research compilers (OpenUH and OpenARC) to a commercial compiler (the PGI OpenACC compiler)). Wolfe does not explicitly disclose processes running on execution streams; however, Bernhard discloses: "In one embodiment, applications (e.g. currently active or running) may be ordered in a special queue associated with a memory management library commonly linked in these applications to allocate and free memory.", 0034.

As to claim 23, Bernhard fully discloses the limitations of claim 17, but does not disclose a virtual memory address. However, Wolfe discloses: the one or more memory locations are to be asynchronously allocated using a virtual memory address (e.g., Wolfe, Section B,1: The OpenARC implementation uses a fake virtual device address space).

As to claim 24, Bernhard fully discloses the limitations of claim 17, but does not disclose the one or more memory locations are to be asynchronously allocated using backing memory allocated from a memory pool when a process executes on the one or more execution streams. However, Wolfe discloses: the one or more memory locations are to be asynchronously allocated using backing memory allocated from a memory pool when a process executes on the one or more execution streams (e.g., Wolfe, Section A,2: Whenever a new data region is encountered, a region pointer is pushed into the region stack. If the regions are not nested, then they are pushed into the stack in sequence. All the newly created data in the region level i are appended to a linked list and then inserted into the dynamic hash table. The device memory is allocated for these data and copied to the device as necessary (for copy and copyin clauses)). Wolfe does not explicitly disclose processes running on execution streams; however, Bernhard discloses: "In one embodiment, applications (e.g. currently active or running) may be ordered in a special queue associated with a memory management library commonly linked in these applications to allocate and free memory.", 0034.

As to claim 26, it is a machine-readable medium claim whose limitations are substantially the same as those of claim 2. Accordingly, it is rejected for substantially the same reasons.

As to claim 27, it is a machine-readable medium claim whose limitations are substantially the same as those of claim 7. Accordingly, it is rejected for substantially the same reasons.

As to claim 28, Bernhard fully discloses the limitations of claim 25, but does not disclose the one or more memory locations are to be asynchronously allocated using backing memory allocated from a memory pool. However, Wolfe discloses: the one or more memory locations are to be asynchronously allocated using a virtual memory address provided in response to the API; the one or more memory locations are to be asynchronously allocated using backing memory allocated from a memory pool; and the virtual memory address is associated with the backing memory (e.g., Wolfe, Section B,1: The OpenARC implementation uses a fake virtual device address space; Section D,3: The PGI implementation allows for asynchronous free operations. As described earlier, the runtime uses asynchronous data transfers to internal pinned buffers. The runtime saves a descriptor, so that at some later point, such as at a synchronization or when the buffer is needed, the runtime can copy the data from the buffer to the user memory; Section B,1: The current OpenARC runtime implements the virtual device address space using the CPU malloc() calls; when device memory is allocated, the OpenARC runtime also allocates dummy space in the CPU memory space, and uses the allocated address as a fake device address. The fake virtual device address is used only for the CPU-GPU address mapping and device address calculations, and no data are stored in the allocated dummy memory).

As to claim 29, it is a machine-readable medium claim whose limitations are substantially the same as those of claim 6. Accordingly, it is rejected for substantially the same reasons.

As to claim 30, Bernhard fully discloses the limitations of claim 25, but does not disclose the one or more memory locations are determined, based at least in part, on the order. However, Wolfe discloses: the one or more memory locations are determined, based at least in part, on the order (e.g., Wolfe, Section A,2: The OpenUH runtime maintains a region stack to track the region chain and the new data created within each region. The region stack can guarantee that the data list created at the entry of a region (be it data or compute region) will be freed at the exit of the same region. Figure 5a shows an example OpenACC code and Figure 5b shows the structure of the region stack. Whenever a new data region is encountered, a region pointer is pushed into the region stack).

As to claim 31, Bernhard fully discloses the limitations of claim 25, but does not disclose synchronization events between a plurality of processes executing on the one or more processors. However, Wolfe discloses: the one or more memory locations are determined, based at least in part, on one or more synchronization events between a plurality of processes executing on the one or more processors (e.g., Wolfe, Section D,3: As described earlier, the runtime uses asynchronous data transfers to internal pinned buffers. The runtime saves a descriptor, so that at some later point, such as at a synchronization or when the buffer is needed, the runtime can copy the data from the buffer to the user memory).

As to claim 32, Bernhard fully discloses the limitations of claim 25, but does not disclose the one or more memory locations are to be asynchronously allocated to the one or more execution streams before a process executes on the one or more execution streams; and the one or more memory locations are to be asynchronously deallocated from the one or more execution streams after a process executes on the one or more execution streams. However, Wolfe discloses: the one or more memory locations are to be asynchronously allocated to the one or more execution streams before a process executes on the one or more execution streams; and the one or more memory locations are to be asynchronously deallocated from the one or more execution streams after a process executes on the one or more execution streams (e.g., Wolfe, Section B, Par. 1: A memory allocation routine returns an address; the program can add offsets to this address; memory deallocation is done by passing the allocation address to a free routine). Wolfe does not explicitly disclose processes running on execution streams; however, Bernhard discloses: "In one embodiment, applications (e.g. currently active or running) may be ordered in a special queue associated with a memory management library commonly linked in these applications to allocate and free memory.", 0034.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Gurfinkel (US 20230005096 A1) discloses a method for generating graph nodes to allocate memory. Memory allocation can be asynchronous. Graph nodes are, in at least one embodiment, generated using CUDA.

Vishnuswaroop (US 20220334898 A1) discloses a method for using an API to facilitate parallel processing, including memory allocation. APIs are used to indicate storage locations for allocation within a GPU, and allocation may be asynchronous.

Weber (US 20210240526 A1) discloses a method for implementing an asynchronous execution queue for accelerator hardware; includes replacing a malloc operation in an execution queue to be sent to an accelerator with an asynchronous malloc operation that returns a unique reference pointer. Execution of the asynchronous malloc operation in the execution queue by the accelerator allocates a requested memory size and adds an entry to a look-up table accessible by the accelerator that maps the reference pointer to a corresponding memory address.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROSS MICHAEL VINCENT, whose telephone number is (703) 756-1408. The examiner can normally be reached Mon-Fri 8:30AM-5:30PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, April Blair, can be reached at (571) 270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/R.M.V./
Examiner, Art Unit 2196

/APRIL Y BLAIR/
Supervisory Patent Examiner, Art Unit 2196
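
For readers mapping the claim language to working code: the independent claims recite an API that, on "a call to asynchronously allocate memory in an order to one or more execution streams," returns memory locations allocated in stream order. That pattern matches CUDA's stream-ordered allocator (cudaMallocAsync / cudaFreeAsync, available since CUDA 11.2), whose signature also takes exactly the three inputs claim 6 recites: a location used to return the address, a requested size, and the stream that defines the order. The sketch below is illustrative only and is not drawn from the application's specification; the kernel and sizes are invented for the example.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Placeholder kernel standing in for work that consumes the allocation.
__global__ void scale(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Stream-ordered allocation: the call takes (address-return location,
    // size, stream) and returns immediately; the pointer becomes valid in
    // stream order, before any later work enqueued on the same stream.
    float* d_buf = nullptr;
    cudaMallocAsync(reinterpret_cast<void**>(&d_buf), n * sizeof(float), stream);

    scale<<<(n + 255) / 256, 256, 0, stream>>>(d_buf, n);

    // Stream-ordered free: also enqueued, so the memory is reclaimed only
    // after the preceding kernel on this stream has finished with it.
    cudaFreeAsync(d_buf, stream);

    cudaStreamSynchronize(stream);
    cudaStreamDestroy(stream);
    std::printf("status: %s\n", cudaGetErrorString(cudaGetLastError()));
    return 0;
}
```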
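Several dependent claims (e.g., 4, 8, 13, 21, 24, 28) add backing memory drawn from a memory pool and reuse of backing memory returned to the pool by an earlier free. A minimal sketch of that pattern with CUDA's explicit pool API follows; the 64 MiB release threshold and device 0 are assumptions chosen for illustration, not details from the application.

```cuda
#include <cuda_runtime.h>
#include <cstdint>
#include <cstdio>

int main() {
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Create an explicit memory pool on device 0; allocations below draw
    // their backing memory from this pool.
    cudaMemPoolProps props = {};
    props.allocType     = cudaMemAllocationTypePinned;
    props.handleTypes   = cudaMemHandleTypeNone;
    props.location.type = cudaMemLocationTypeDevice;
    props.location.id   = 0;                      // assumed device
    cudaMemPool_t pool;
    cudaMemPoolCreate(&pool, &props);

    // Let the pool cache up to 64 MiB of freed backing memory so later
    // allocations can reuse it instead of returning it to the driver.
    uint64_t threshold = 64ull << 20;
    cudaMemPoolSetAttribute(pool, cudaMemPoolAttrReleaseThreshold, &threshold);

    // First allocation/free pair: the free returns backing memory to the pool.
    void* d_a = nullptr;
    cudaMallocFromPoolAsync(&d_a, 1 << 20, pool, stream);
    cudaFreeAsync(d_a, stream);

    // A second allocation on the same stream may reuse that backing memory,
    // because the free precedes it in the stream's order.
    void* d_b = nullptr;
    cudaMallocFromPoolAsync(&d_b, 1 << 20, pool, stream);
    cudaFreeAsync(d_b, stream);

    cudaStreamSynchronize(stream);
    cudaMemPoolDestroy(pool);
    cudaStreamDestroy(stream);
    std::printf("status: %s\n", cudaGetErrorString(cudaGetLastError()));
    return 0;
}
```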

Prosecution Timeline

Dec 09, 2021: Application Filed
Jun 03, 2024: Non-Final Rejection — §102, §103
Aug 30, 2024: Interview Requested
Sep 10, 2024: Applicant Interview (Telephonic)
Sep 10, 2024: Examiner Interview Summary
Nov 07, 2024: Response Filed
Feb 03, 2025: Final Rejection — §102, §103
Apr 03, 2025: Interview Requested
Apr 30, 2025: Examiner Interview Summary
Jul 14, 2025: Request for Continued Examination
Jul 18, 2025: Response after Non-Final Action
Jul 25, 2025: Response Filed
Feb 17, 2026: Non-Final Rejection — §102, §103
Apr 13, 2026: Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530219: TIME-BOUND LIVE MIGRATION WITH MINIMAL STOP-AND-COPY
Granted Jan 20, 2026 (2y 5m to grant)

Patent 12511158: TASK ALLOCATION METHOD, APPARATUS, ELECTRONIC DEVICE AND COMPUTER-READABLE STORAGE MEDIUM
Granted Dec 30, 2025 (2y 5m to grant)

Patent 12493493: METHOD AND SYSTEM FOR ALLOCATING GRAPHICS PROCESSING UNIT PARTITIONS FOR A COMPUTER VISION ENVIRONMENT
Granted Dec 09, 2025 (2y 5m to grant)

Patent 12481529: CONTROLLER FOR COMPUTING ENVIRONMENT FRAMEWORKS
Granted Nov 25, 2025 (2y 5m to grant)

Patent 12430170: QUANTUM COMPUTING SERVICE WITH QUALITY OF SERVICE (QoS) ENFORCEMENT VIA OUT-OF-BAND PRIORITIZATION OF QUANTUM TASKS
Granted Sep 30, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 54%
With Interview: 90% (+35.9% lift)
Median Time to Grant: 3y 5m
PTA Risk: High

Based on 22 resolved cases by this examiner. Grant probability is derived from the examiner's career allow rate; the with-interview figure adds the observed interview lift to that baseline (54% + 35.9 points ≈ 90%).
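
A worked version of that arithmetic, as a sketch (the figures come from the dashboard above; the simple additive combination and the rounding are assumptions about how the tool derives the headline numbers):

```cuda
#include <cstdio>

// Illustrative reconstruction of the projection arithmetic: the 54%
// baseline is the examiner's career allow rate (12 granted / 22 resolved,
// reported as 54%), and the with-interview figure adds the +35.9-point
// observed interview lift. Additive combination is an assumption here.
int main() {
    const double career_allow_rate = 0.54;   // reported baseline
    const double interview_lift    = 0.359;  // +35.9 percentage points
    const double with_interview    = career_allow_rate + interview_lift;
    std::printf("baseline: %.0f%%, with interview: %.0f%%\n",
                career_allow_rate * 100.0, with_interview * 100.0);
    return 0;
}
```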
