Prosecution Insights
Last updated: April 19, 2026
Application No. 16/746,714

TECHNIQUE FOR COMPUTATIONAL NESTED PARALLELISM

Final Rejection — §103
Filed: Jan 17, 2020
Examiner: TANG, KENNETH
Art Unit: 2197
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation
OA Round: 9 (Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 10-11
Time to Grant: 3y 5m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 88% — above average (682 granted / 771 resolved; +33.5% vs TC avg)
Interview Lift: +19.0% on resolved cases with interview
Typical Timeline: 3y 5m avg prosecution; 18 applications currently pending
Career History: 789 total applications across all art units
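The headline figures in this panel can be reproduced from the raw counts the dashboard reports. A minimal check in Python (the Tech Center baseline is not stated directly, so the figure derived here from the "+33.5% vs TC avg" delta is an inference, not a reported number):

```python
# Recompute the examiner's career allow rate from the reported raw
# counts: 682 granted out of 771 resolved applications.
granted, resolved = 682, 771
allow_rate = 100 * granted / resolved
print(f"career allow rate: {allow_rate:.1f}%")  # ~88.5%, displayed as 88%

# The "+33.5% vs TC avg" delta implies a Tech Center 2100 baseline of
# roughly 55% -- derived here, not reported by the dashboard.
implied_tc_avg = allow_rate - 33.5
print(f"implied TC 2100 average: {implied_tc_avg:.1f}%")
```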

Statute-Specific Performance

§101: 11.7% (-28.3% vs TC avg)
§103: 52.8% (+12.8% vs TC avg)
§102: 8.8% (-31.2% vs TC avg)
§112: 13.7% (-26.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 771 resolved cases

Office Action

§103
DETAILED ACTION

The present application is being examined under the pre-AIA first to invent provisions. This action is in response to the Claims/Remarks on 12/15/25. Applicant’s arguments have been fully considered but are moot in view of the new grounds of rejection. Claims 1-23 are presented for examination.

Claim Rejections - 35 USC § 103

The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action: (a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.

Claims 1-9 and 23 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Schuster (US 2013/0125133 A1) in view of Nickolls et al. (hereinafter Nickolls) (“Scalable Parallel Programming”, ACM Queue, March/April 2008, pgs 40-53), and further in view of Steffen et al. (hereinafter Steffen) (“Improving SIMT Efficiency of Global Rendering Algorithms with Architecture Support for Dynamic Micro-Kernels”, 2010). Schuster and Nickolls were cited in a previous PTO-892.

As to claim 1, Schuster teaches a computer-readable storage medium (Memory 510) having stored thereon instructions (program instructions 515, library functions 505, application 520, etc.), which if performed by one or more processors (GPU(s) 540), cause the one or more processors to at least ([0024]; [0013]; [0023]; [0119]; Fig.
5): execute a parent thread within a multiprocessor (executing a given/parent thread, wherein each number of threads, including the given/parent thread, may execute concurrently with the same number of processors or cores in a multi-core processor) (Abstract; [0023]-[0024]; Figs. 1 (start 105, spawn 110), 3, and 5, items 105 and 305; [0040]; [0123]); launch a grid (set/collection of spawned child threads in deque 230 of Fig. 2 satisfies broadest reasonable interpretation of claimed “grid”) of child threads within the multiprocessor (nested parallelism by parent/given thread spawning one or more children with concurrent/parallel execution of one or more GPUs, wherein a GPU processor is a multi-core or multi-threaded processor) (Abstract; [0013]; [0023]; [0075]; [0119]; Figs. 1, 2, 3, and 5, items 105 and 305; [0040]; [0123]); and in response to a synchronization function call (sync function call), block execution of the parent thread while waiting for the child thread to complete (in response to a sync function call, the parent thread is blocked by suspending and waits until all of its child threads are completed before continuing/resuming execution) (Fig. 1, items 120 and 130; claim 1; [0023]; [0031]-[0033]; [0095]); and de-allocate resources used by the parent thread (parent thread abandons its call/return stack, making it an orphan stack, and switches to a newly acquired empty call/return stack, which is equivalent to the claimed de-allocating the resources of the parent thread) ([0035]; steps 370 and 375 of Fig. 3). However, Schuster does not teach to launch a kernel to execute a parent thread and to launch its grid of child threads and does not teach to de-allocate resources used by the parent thread. However, Nickolls teaches nested parallelism involving multithreaded streaming multiprocessors that includes launching a kernel (CUDA kernel, etc.)
to execute a parent thread, launching one or more dependent child grids (dependent grids) or kernel grids, and barrier synchronization (via calling the __syncthreads() function) (pgs. 40, 42-43, 45-48; Figs. A, 2, and 3). Nickolls further reinforces the teaching of Schuster by disclosing the managing of memory space visible to kernels through calls to the CUDA runtime such as cudaFree() to de-allocate resources used by the parent thread in order to free up resources (pg. 46). Schuster and Nickolls are analogous art with the claimed invention because they are all in the same field of endeavor of parallel thread processing and all solving the same problem of synchronization. It would have been obvious to one of ordinary skill in the art at the time the invention was made to modify Schuster’s thread processing such that it would launch a kernel to execute its parent thread and launch a grid of child threads, as taught and suggested in Nickolls. The suggestion/motivation for doing so would have been to provide the predicted result of achieving very efficient and fine-grained parallelism from having fast barrier synchronization together with lightweight thread creation and zero-overhead thread scheduling (pg. 42). Schuster in view of Nickolls does not explicitly teach its grid of child threads is launched from within the kernel. However, Steffen teaches a SIMT architecture that allows for threads to be created dynamically at runtime. Large application kernels are broken down into smaller code blocks called micro-kernels that dynamically created threads can execute. These runtime micro-kernels allow for the removal of branching statements that would cause divergence within a thread group, and result in new threads being created and grouped with threads beginning execution of the same micro-kernel (Abstract; pgs. 241-242; Figures 4 and 5).
Under the broadest reasonable interpretation (BRI), dynamically creating new threads during execution of a micro-kernel and grouping them for execution corresponds to launching a grid/collection of child threads from within the kernel on the multiprocessor. It would have been obvious to one of ordinary skill in the art before the effective date of the application to modify Schuster in view of Nickolls’s parallel thread processing system and method such that its grid of child threads is launched from within the kernel, as taught and suggested in Steffen and according to the BRI of the claim phrase. The suggestion/motivation for doing so would have been to provide the predicted result of improving processor utilization from having micro-kernels within the context of a single original application kernel, which allow for the removal of branching statements that would cause divergence within a thread group. This would yield the predicted result of improved processor efficiency and performance by an average of 1.4x (Steffen – Abstract; pg. 239, last paragraph before III. Global Rendering Algorithm). Therefore, Schuster, in view of Nickolls, and further in view of Steffen teaches the BRI of claim 1.

As to claim 2, Schuster teaches wherein the one or more processors comprise a graphics processing unit (GPU) (Computer System 500 includes a plurality of GPU(s) 540 and CPU(s) 530) (Fig. 5).

As to claim 3, Schuster teaches wherein the instructions, if performed by the one or more processors, cause the one or more processors to resume execution of the parent thread after completion of execution of the child thread (in response to a sync function call, the parent thread is blocked and waits until all of its child threads are completed before continuing/resuming execution) ([0031]).
As to claim 4, Schuster teaches wherein the instructions, if performed by the one or more processors, cause the one or more processors to store execution state of the parent thread in response to the synchronization function call ([0034]; [0031]).

As to claim 5, Schuster teaches wherein the blocking execution of the parent thread further comprises causing one or more processors to ensure memory coherence between the parent thread and the child thread ([0062]).

As to claim 6, Schuster teaches wherein the instructions, if performed by the one or more processors, cause the one or more processors to resume execution of the parent thread in response to notification that the child thread has completed execution (in response to a sync function call, the parent thread is blocked and waits until notified that all of its child threads are completed before continuing/resuming execution) ([0031]).

As to claim 7, Schuster teaches wherein: the one or more processors comprise a graphics processing unit (GPU) (GPU(s) 540); and the instructions, if performed by the one or more processors, cause the one or more processors to: store execution state of the parent thread in response to the synchronization function call ([0034]; [0031]); receive a notification that execution of the child thread completed (in response to a sync function call, the parent thread is blocked and waits until notified that all of its child threads are completed before continuing/resuming execution) ([0031]); and resume execution of the parent thread in response to notification that the child thread has completed execution (in response to a sync function call, the parent thread is blocked and waits until notified that all of its child threads are completed before continuing/resuming execution) ([0031]).

As to claim 8, Schuster teaches wherein the one or more processors comprise a graphics processing unit (GPU) and wherein the GPU comprises the multiprocessor (Fig. 5; [0013]; [0040]; [0118]).
As to claim 9, Schuster teaches wherein the parent thread comprises an instruction following the synchronization function call and wherein the instructions of the computer-readable storage medium, if performed by the one or more processors, cause the one or more processors to continue execution at the instruction following the synchronization function call (in response to a sync function call, the parent thread is blocked and waits until notified that all of its child threads are completed before continuing/resuming execution) ([0031]).

As to claim 23, Schuster (parent thread spawns child threads with a particular spawn depth; the system supporting multi-threaded execution, a parent function of an executing thread may spawn one or more child functions suitable for execution in parallel with the parent function on one or more processors) ([0007]; [0025]; [0029]-[0031]; [0081]; [0113]) and Nickolls (Fig. 3; page 45) teach wherein the grid of child threads is launched to perform a command stream generated by the parent thread.

Claim 10 is rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Schuster in view of Nickolls, Steffen, and further in view of Abdallah (US 2010/0161948 A1). Schuster, Nickolls, and Abdallah were cited in a previous PTO-892.

As to claim 10, Schuster teaches a system for executing fully strict thread-level parallel programs, wherein a parent thread may spawn one or more child threads and then encounter a sync. When the parent executes a sync, Schuster teaches that if any spawned children have not returned, the parent suspends and does not resume until all of its children have returned. When all children complete, the parent continues execution immediately following the sync function call (Abstract; [0024]-[0025]; [0031]; [0047]; Fig. 1). Thus, Schuster teaches a parent thread resuming execution upon completion of a child grid.
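The spawn/sync behavior the rejection attributes to Schuster (a parent launches child threads, a sync call suspends the parent until every child returns, and execution resumes immediately after the sync) can be sketched with ordinary threads. This is an illustrative model only: the names `launch_grid` and `sync` are ours, not from any cited reference, and real GPU child grids are launched device-side rather than as OS threads.

```python
import threading

results = []

def child(i):
    # Stand-in for the work a spawned child thread performs.
    results.append(i * i)

def launch_grid(n):
    # The parent "launches a grid" of n child threads.
    grid = [threading.Thread(target=child, args=(i,)) for i in range(n)]
    for t in grid:
        t.start()
    return grid

def sync(grid):
    # The sync call: the parent blocks until all children complete.
    for t in grid:
        t.join()

grid = launch_grid(4)
sync(grid)
# The parent resumes here, at the instruction following the sync,
# only after every child has returned.
print(sorted(results))  # [0, 1, 4, 9]
```

The join inside `sync` is what makes the model fully strict: the parent cannot run past the sync point while any of its own children is still outstanding.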
However, Schuster, Nickolls, and Steffen do not explicitly teach to restore a saved state of the parent thread from a continuation state buffer upon completion of the child grid. Abdallah teaches saving and restoring thread state during context switches, specifically by storing thread architectural state (registers, program counters, etc.) in a LIFO memory/circuit or register cache, which serves as a buffer for saving and restoring thread state ([0028]-[0030]; Figs. 1 and 5). Abdallah further teaches that “the state of the previous context has to be restored before resuming execution” ([0028]). In addition to the LIFO, Abdallah’s register file hierarchy allows quick save/restore and gradual swap of registers between local and global files, and could also be interpreted as a continuation state buffer, under the broadest reasonable interpretation (BRI). Therefore, Abdallah teaches the BRI of the claimed continuation state buffer that is used to restore a saved state. It would have been obvious to one of ordinary skill in the art before the effective date of the application to modify Schuster in view of Nickolls in view of Steffen such that it would restore a saved state of the parent thread from a continuation state buffer upon completion of the child grid, as taught and suggested in Abdallah. The suggestion/motivation for doing so would have been to provide the predicted result of reducing context switch penalties, supporting efficient save/restore of parent state, and enabling nested parallel execution (Abdallah: Abstract; [0028]-[0031]).

Claim 22 is rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Schuster in view of Nickolls, in view of Steffen, and further in view of Dickson (US 2004/0158833 A1). Schuster, Nickolls, and Dickson were cited in a previous PTO-892.

As to claim 22, Schuster in view of Nickolls in view of Steffen does not teach wherein the grid of child threads has a higher priority than the parent thread.
However, Dickson teaches a child task having a higher priority than the parent program ([0045]). It would have been obvious to one of ordinary skill in the art at the time the invention was made to include the teachings of wherein the grid of child threads has a higher priority than the parent thread to the existing invention of Schuster in view of Nickolls. The suggestion/motivation for doing so would have been to provide the predicted result of accomplishing a robust just-in-time response to multiple asynchronous data streams (Dickson - Abstract; [0045]-[0046]).

Claims 11-19 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Schuster in view of Aingaran (US 2006/0136915 A1), and further in view of Nickolls, and further in view of Steffen. Schuster, Aingaran, and Nickolls were cited in a previous PTO-892.

As to claim 11, Schuster teaches a processor, comprising: a plurality of cores (multi-core) ([0013]; [0040]; [0118]); an L1 cache ([0122]); an instruction cache ([0122]); a scheduler ([0003]); and the processor (GPU 540, CPU(s) 530, etc.) to execute instructions (program instructions 515, library functions 505, application 520, etc.): execute a parent thread within a multiprocessor (executing a given/parent thread, wherein each number of threads, including the given/parent thread, may execute concurrently with the same number of processors or cores in a multi-core processor) (Abstract; Figs. 1, 3, and 5, items 105 and 305; [0040]; [0123]); launch a grid (set/collection of spawned child threads in deque 230 of Fig. 2 satisfies broadest reasonable interpretation of claimed “grid”) of child threads within the multiprocessor (nested parallelism by parent/given thread spawning one or more children with concurrent/parallel execution of one or more CPUs and/or GPUs, wherein a CPU or GPU processor that is a multi-core or multi-threaded processor) (Abstract; [0013]; [0023]; [0119]; Figs.
1, 2, 3, and 5, items 105 and 305; [0040]; [0123]); and in response to a synchronization function call (sync function call), block execution of the parent thread while waiting for the child thread to complete (in response to a sync function call, the parent thread is suspended and waits until all of its child threads are completed before continuing/resuming execution) ([0031]; [0095]); and de-allocate resources used by the parent thread (parent thread abandons its call/return stack, making it an orphan stack, and switches to a newly acquired empty call/return stack, which is equivalent to the claimed de-allocating the resources of the parent thread) ([0035]; steps 370 and 375 of Fig. 3).

Schuster does not explicitly teach its processor to have a register file and a crossbar unit. However, Aingaran teaches a multiprocessor that includes items such as a plurality of cores 36a-h, a crossbar 34, L1 cache 42, L1 instruction cache 43, scheduler 216, register files 210, etc. (Figs. 3 and 8; [0024]; claim 1). Schuster and Aingaran are analogous art with the claimed invention because they are all in the same field of endeavor of thread processing. It would have been obvious to one of ordinary skill in the art at the time the invention was made to modify Schuster’s processor such that it would include a register file, crossbar unit, etc., as taught in Aingaran. The suggestion/motivation for doing so would have been to provide the predicted result of having the computer architectural structure needed for scheduling multiple threads for execution.

Furthermore, Schuster does not teach to launch a kernel to execute its parent thread and to launch a grid of child threads. However, Nickolls teaches nested parallelism with multithreaded streaming multiprocessors that includes launching a kernel to execute a parent thread, launching one or more dependent child grids, and barrier synchronization (pgs. 40, 42-43, 45-48; Figs. A, 2, and 3).
Schuster, Aingaran, and Nickolls are analogous art with the claimed invention because they are all in the same field of endeavor of thread processing. It would have been obvious to one of ordinary skill in the art to modify Schuster in view of Aingaran’s thread processing such that it would launch a kernel to execute its parent thread and launch a grid of child threads, as taught and suggested in Nickolls. The suggestion/motivation for doing so would have been to provide the predicted result of achieving very efficient and fine-grained parallelism from having fast barrier synchronization together with lightweight thread creation and zero-overhead thread scheduling (pg. 42).

Schuster, Aingaran, and Nickolls do not explicitly teach its grid of child threads is launched from within the kernel. However, Steffen teaches an SIMT architecture that allows for threads to be created dynamically at runtime. Large application kernels are broken down into smaller code blocks called micro-kernels that dynamically created threads can execute. These runtime micro-kernels allow for the removal of branching statements that would cause divergence within a thread group, and result in new threads being created and grouped with threads beginning execution of the same micro-kernel (Abstract; pgs. 241-242; Figures 4 and 5).

Under the broadest reasonable interpretation (BRI), dynamically creating new threads during execution of a micro-kernel and grouping them for execution corresponds to launching a grid/collection of child threads from within the kernel on the multiprocessor. It would have been obvious to one of ordinary skill in the art before the effective date of the application to modify Schuster in view of Nickolls’s parallel thread processing system and method such that its grid of child threads is launched from within the kernel, as taught and suggested in Steffen and according to the BRI of the claim phrase.
The suggestion/motivation for doing so would have been to provide the predicted result of improving processor utilization from having micro-kernels within the context of a single original application kernel, which allow for the removal of branching statements that would cause divergence within a thread group. This would yield the predicted result of improved processor efficiency and performance by an average of 1.4x (Steffen – Abstract; pg. 239, last paragraph before III. Global Rendering Algorithm). Therefore, Schuster, in view of Nickolls, and further in view of Steffen teaches the BRI of claim 11.

As to claim 12, Schuster teaches wherein the processor comprises a graphics processing unit (GPU) to execute the instructions (Computer System 500 includes a plurality of GPU(s) 540 and CPU(s) 530) (Fig. 5).

As to claim 13, Schuster teaches wherein the instructions, if performed by the processor, cause the processor to resume execution of the parent thread after completion of execution of the child thread (in response to a sync function call, the parent thread is blocked and waits until all of its child threads are completed before continuing/resuming execution) ([0031]).

As to claim 14, Schuster teaches wherein the instructions, if performed by the processor, cause the processor to store execution state of the parent thread in response to the synchronization function call ([0034]; [0031]).

As to claim 15, Schuster teaches wherein the instructions, if executed by the processor, cause the processor to ensure memory coherence between the parent thread and the child thread ([0062]).
As to claim 16, Schuster teaches wherein the instructions, if executed by the processor, cause the processor to resume execution of the parent thread in response to notification that the child thread has completed execution (in response to a sync function call, the parent thread is blocked and waits until notified that all of its child threads are completed before continuing/resuming execution) ([0031]).

As to claim 17, Schuster teaches wherein: the processor comprises a graphics processing unit (GPU) (GPU(s) 540); and the instructions, if performed by the processor, cause the processor to: store execution state of the parent thread in response to the synchronization function call ([0034]; [0031]); receive a notification that execution of the child thread completed (in response to a sync function call, the parent thread is blocked and waits until notified that all of its child threads are completed before continuing/resuming execution) ([0031]); and resume execution of the parent thread in response to notification that the child thread has completed execution (in response to a sync function call, the parent thread is blocked and waits until notified that all of its child threads are completed before continuing/resuming execution) ([0031]).

As to claim 18, Schuster teaches wherein the processor comprises a graphics processing unit (GPU) and wherein the GPU comprises the multiprocessor (Fig. 5; [0013]; [0040]; [0118]).

As to claim 19, Schuster teaches wherein the parent thread comprises an instruction following the synchronization function call and wherein the instructions, if performed by the processor, cause the processor to continue execution at the instruction following the synchronization function call (in response to a sync function call, the parent thread is blocked and waits until notified that all of its child threads are completed before continuing/resuming execution) ([0031]).

Claim 20 is rejected under pre-AIA 35 U.S.C.
103(a) as being unpatentable over Schuster in view of Aingaran, in view of Nickolls, in view of Steffen, and further in view of Dickson (US 2004/0158833 A1). Schuster, Aingaran, Nickolls, and Dickson were cited in a previous PTO-892.

As to claim 20, Schuster, Aingaran, Nickolls, and Steffen do not teach wherein the grid of child threads has a higher priority than the parent thread. However, Dickson teaches a child task having a higher priority than the parent program ([0045]). It would have been obvious to one of ordinary skill in the art at the time the invention was made to include the teachings of wherein the grid of child threads has a higher priority than the parent thread to the existing invention of Schuster, Aingaran, Nickolls, and Steffen. The suggestion/motivation for doing so would have been to provide the predicted result of accomplishing a robust just-in-time response to multiple asynchronous data streams (Dickson - Abstract; [0045]-[0046]).

Claim 21 is rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Schuster in view of Aingaran, in view of Nickolls, in view of Steffen, and further in view of Abdallah (US 2010/0161948 A1). Schuster, Aingaran, and Nickolls were cited in a previous PTO-892.

As to claim 21, Schuster teaches a system for executing fully strict thread-level parallel programs, wherein a parent thread may spawn one or more child threads and then encounter a sync. When the parent executes a sync, Schuster teaches that if any spawned children have not returned, the parent suspends and does not resume until all of its children have returned. When all children complete, the parent continues execution immediately following the sync function call (Abstract; [0024]-[0025]; [0031]; [0047]; Fig. 1). Thus, Schuster teaches a parent thread resuming execution upon completion of a child grid.
However, Schuster, Aingaran, Nickolls, and Steffen do not explicitly teach to restore a saved state of the parent thread from a continuation state buffer upon completion of the child grid. Abdallah teaches saving and restoring thread state during context switches, specifically by storing thread architectural state (registers, program counters, etc.) in a LIFO memory/circuit or register cache, which serves as a buffer for saving and restoring thread state ([0028]-[0030]; Figs. 1 and 5). Abdallah further teaches that “the state of the previous context has to be restored before resuming execution” ([0028]-[0031]). In addition to the LIFO, Abdallah’s register file hierarchy allows quick save/restore and gradual swap of registers between local and global files, and could also be interpreted as a continuation state buffer, under the broadest reasonable interpretation (BRI). Therefore, Abdallah teaches the BRI of the claimed continuation state buffer that is used to restore a saved state. It would have been obvious to one of ordinary skill in the art before the effective date of the application to modify Schuster in view of Aingaran, in view of Nickolls, and in view of Steffen such that it would restore a saved state of the parent thread from a continuation state buffer upon completion of the child grid, as taught and suggested in Abdallah. The suggestion/motivation for doing so would have been to provide the predicted result of reducing context switch penalties, supporting efficient save/restore of parent state, and enabling nested parallel execution (Abdallah: Abstract; [0028]-[0031]).

Response to Arguments

Applicant’s arguments have been fully considered but are moot in view of the new grounds of rejection.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
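The continuation-state-buffer reading applied to Abdallah for claims 10 and 21 (parent state saved when the parent suspends at the sync, then restored from a LIFO buffer once the child grid completes) reduces to a push/pop discipline. A toy sketch; the dict fields and the names `suspend` and `restore` are illustrative assumptions, not Abdallah's hardware:

```python
# LIFO continuation state buffer: the last thread suspended is the
# first whose state is restored, mirroring register save/restore order.
continuation_buffer = []

def suspend(state):
    # Save a snapshot of the parent's architectural state at the sync.
    continuation_buffer.append(dict(state))

def restore():
    # Pop the most recently saved state before resuming execution.
    return continuation_buffer.pop()

parent_state = {"pc": 0x42, "regs": [1, 2, 3]}
suspend(parent_state)
# ... the child grid runs to completion here ...
resumed = restore()
assert resumed == parent_state
assert not continuation_buffer  # buffer drained after the restore
```

A LIFO fits the nesting discipline of fully strict programs: a parent cannot resume before its own children complete, so restores always occur in reverse order of suspends.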
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KENNETH TANG whose telephone number is (571)272-3772. The examiner can normally be reached Monday-Friday 7AM-3PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets can be reached at 571-272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /KENNETH TANG/Primary Examiner, Art Unit 2197

Prosecution Timeline

Jan 17, 2020
Application Filed
Dec 04, 2020
Non-Final Rejection — §103
May 25, 2021
Response Filed
Jul 27, 2021
Final Rejection — §103
Nov 09, 2021
Interview Requested
Nov 17, 2021
Examiner Interview Summary
Nov 17, 2021
Applicant Interview (Telephonic)
Jan 21, 2022
Response after Non-Final Action
Jan 21, 2022
Request for Continued Examination
Jan 27, 2022
Non-Final Rejection — §103
Jul 19, 2022
Interview Requested
Jul 25, 2022
Applicant Interview (Telephonic)
Jul 29, 2022
Response Filed
Jul 30, 2022
Examiner Interview Summary
Nov 04, 2022
Final Rejection — §103
May 10, 2023
Request for Continued Examination
May 11, 2023
Response after Non-Final Action
May 26, 2023
Non-Final Rejection — §103
Aug 10, 2023
Examiner Interview Summary
Aug 10, 2023
Applicant Interview (Telephonic)
Dec 01, 2023
Notice of Allowance
Jul 01, 2024
Request for Continued Examination
Jul 02, 2024
Response after Non-Final Action
Jul 13, 2024
Non-Final Rejection — §103
Jan 21, 2025
Response Filed
Mar 04, 2025
Final Rejection — §103
Apr 04, 2025
Interview Requested
Apr 24, 2025
Examiner Interview Summary
Apr 24, 2025
Applicant Interview (Telephonic)
Sep 08, 2025
Request for Continued Examination
Sep 10, 2025
Response after Non-Final Action
Sep 11, 2025
Non-Final Rejection — §103
Nov 18, 2025
Interview Requested
Dec 03, 2025
Examiner Interview Summary
Dec 03, 2025
Applicant Interview (Telephonic)
Dec 15, 2025
Response Filed
Mar 19, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602240
REMOTE EDGE VIRTUALIZATION MANAGEMENT
2y 5m to grant Granted Apr 14, 2026
Patent 12602241
SECURE NETWORKING ENGINE FOR A TECHNICAL SUPPORT MANAGEMENT SYSTEM
2y 5m to grant Granted Apr 14, 2026
Patent 12591450
FRAMEWORK FOR HIGH PERFORMANCE BLOCKCHAINS
2y 5m to grant Granted Mar 31, 2026
Patent 12561168
SCHEDULING OF A PLURALITY OF GRAPHIC PROCESSING UNITS
2y 5m to grant Granted Feb 24, 2026
Patent 12542721
MANAGING A CLOUD SERVICE
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 10-11
Grant Probability: 88%
With Interview: 99% (+19.0%)
Median Time to Grant: 3y 5m
PTA Risk: High
Based on 771 resolved cases by this examiner. Grant probability derived from career allow rate.
