Prosecution Insights
Last updated: April 19, 2026
Application No. 17/937,270

ORDERED THREAD DISPATCH FOR THREAD TEAMS

Non-Final OA — §102, §103, §112
Filed: Sep 30, 2022
Examiner: EWALD, JOHN ROBERT DAKITA
Art Unit: 2199
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intel Corporation
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (16 granted / 21 resolved; +21.2% vs TC avg — above average)
Interview Lift: +55.6% (resolved cases with interview)
Avg Prosecution: 3y 5m (typical timeline)
Total Applications: 45 across all art units (24 currently pending)

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 56.6% (+16.6% vs TC avg)
§102: 13.1% (-26.9% vs TC avg)
§112: 13.9% (-26.1% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 21 resolved cases

Office Action

§102, §103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-25 are pending in this application.

Information Disclosure Statement

The IDSs filed on 1/03/2023, 5/03/2024, and 1/09/2026 have been considered.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2-5, 13-16, 18-20, and 22-25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claims 2, 13, 18, and 22 all recite the same limitation focused on how the local thread team ID is generated. However, the language used to describe how the ID is generated is confusing and ambiguous. Specifically, what is meant by “a walk order corresponding to a thread team dimension direction of a thread group range of the thread group”? Additionally, the relationship between thread groups and thread teams is unclear, as shown in the phrases “across the respective threads of the thread team based on a thread group of the thread team” and “corresponding to a thread team dimension direction of a thread group range of a thread group.” The entire claim appears to incorporate related concepts that somehow create thread IDs. 
Examiner requests clarification of claims 2, 13, 18, and 22 through amendment and/or explanation in filed remarks. For interpretation purposes, Examiner interprets the “thread team local IDs” as being sequential and multi-dimensional. Claims 3-5, 14-16, 19-20, and 23-25 are dependent claims of claims 2, 13, 18, and 22, respectively, and fail to solve the aforementioned deficiencies. Therefore, they are rejected for similar reasons.

Claims 12-16 and 21-25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claims 12 and 21 recite the limitation "allocating a thread team local identifier (ID) for respective threads of a thread team comprising a plurality of hardware threads that are to be executed solely by a processing resource of ." There is insufficient antecedent basis for the underlined limitation in the claim. Claims 13-16 and 22-25 are dependent claims of claims 12 and 21, respectively, so they are rejected for similar reasons.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-3, 5, 7-9, 12-14, 16-18, 20-23, and 25 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Munshi et al. (US Pub. No. 2009/0307704 A1 hereinafter Munshi).

As per claim 1, Munshi teaches an apparatus comprising: one or more processors including graphics processor (¶ [0030], “FIG. 
1 is a block diagram illustrating one embodiment of a system 100 to configure computing devices including CPUs and/or GPUs to perform data parallel computing for applications. System 100 may implement a parallel computing architecture. In one embodiment, system 100 may be a graphics system including one or more host processors coupled with one or more central processors 117 and one or more other processors such as media processors 115 through a data bus 113.”), the graphics processor including a plurality of processing resources (¶ [0027], “Processing resources of a data processing system may be based on a plurality of physical computing devices, such as CPUs or GPUs. A physical computing device may include one or more compute units. In one embodiment, data parallel processing tasks (or data parallel tasks) may be delegated to a plurality types of processors, for example, CPUs or GPUs capable of performing the tasks.”), and wherein the graphics processor is to: allocate a thread team local identifier (ID) for respective threads of a thread team (¶ [0072], “In one embodiment, a thread that executes an executable code may be identified by two distinct thread variables, a global thread ID (identifier) and a local thread ID. A global thread ID may be specified with a multi-dimensional value starting at (0, 0, . . . , 0) and goes to (G1-1, G2-1, . . . , GN-1) based on a global thread number (G1, G2, . . . , GN). Similarly, a local thread ID may be specified with a multi-dimensional value starting at (0, 0, . . . , 0) and goes to (L1-1, L2-1, . . . , LN-1) based on a local thread group number (L1, L2, . . . ,LN). A global thread ID and a local thread ID for a thread may be multi-dimensional values of the same dimension.”) comprising a plurality of hardware threads that are to be executed solely by a processing resource of the plurality of processing resources (¶ [0035], “FIG. 
2 is a block diagram illustrating an example of a computing device with multiple compute processors (e.g. compute units) operating in parallel to execute multiple threads concurrently. Each compute processor may execute a plurality of threads in parallel (or concurrently). Threads that can be executed in parallel in a compute processor or compute unit may be referred to as a thread group. A computing device could have multiple thread groups that can be executed in parallel. For example, M threads are shown to execute as a thread group in computing device 205. Multiple thread groups, e.g. thread 1 of compute processor_1 205 and thread N of compute processor_L 203, may execute in parallel across separate compute processors on one computing device or across multiple computing devices.” ¶ [0069], “At block 809, if a local thread group number has been specified, the processing logic of process 800 may designate a thread group size for a compute unit according the specified local thread group number.”); and dispatch the respective threads together into the processing resource, the respective threads having the thread team ID allocated (¶ [0071]-[0072], “At block 813, the processing logic of process 800 may partition the total number of threads according to thread group sizes to execute concurrently in multiple compute units for a data parallel task. An executable code may execute in parallel in a compute unit as a group of threads having a size equal to the corresponding thread group size. The size of a group of threads may be the number of threads in the group. The processing logic of process 800 may decompose a data parallel task into appropriate multi-dimensional blocks or thread groups that can be executed concurrently across one or more compute units. In one embodiment, a thread that executes an executable code may be identified by two distinct thread variables, a global thread ID (identifier) and a local thread ID. 
A global thread ID may be specified with a multi-dimensional value starting at (0, 0, . . . , 0) and goes to (G1-1, G2-1, . . . , GN-1) based on a global thread number (G1, G2, . . . , GN). Similarly, a local thread ID may be specified with a multi-dimensional value starting at (0, 0, . . . , 0) and goes to (L1-1, L2-1, . . . , LN-1) based on a local thread group number (L1, L2, . . . ,LN). A global thread ID and a local thread ID for a thread may be multi-dimensional values of the same dimension.”).

As per claim 2, Munshi teaches the apparatus of claim 1. Munshi also teaches wherein the thread team local ID is generated in an ordered manner across the respective threads of the thread team based on a thread group of the thread team and a walk order corresponding to a thread team dimension direction of a thread group range of the thread group (¶ [0072], “In one embodiment, a thread that executes an executable code may be identified by two distinct thread variables, a global thread ID (identifier) and a local thread ID. A global thread ID may be specified with a multi-dimensional value starting at (0, 0, . . . , 0) and goes to (G1-1, G2-1, . . . , GN-1) based on a global thread number (G1, G2, . . . , GN). Similarly, a local thread ID may be specified with a multi-dimensional value starting at (0, 0, . . . , 0) and goes to (L1-1, L2-1, . . . , LN-1) based on a local thread group number (L1, L2, . . . ,LN). A global thread ID and a local thread ID for a thread may be multi-dimensional values of the same dimension.” See also para. 0066-0068).

As per claim 3, Munshi teaches the apparatus of claim 2. Munshi also teaches wherein the thread team dimension direction comprises at least one of an X-dimension or a Y-dimension (¶ [0066]-[0068], “An optimal thread group size may be based on a dimension associated with the data parallel task for an executable code to perform. 
A dimension may be a number specifying an iterative computation along more than one measures for a data parallel task. For example, an image processing task may be associated with a dimension of 2 along an image width and an image height…An API request may include a global thread number having a multi-dimensional value as an array of N integers (G1, G2, . . . , GN). Integer N may be a dimension associated with the data parallel task. The processing logic of process 800 may count the number of integers in an array from a multi-dimensional value to determine a dimension…In one embodiment, the processing logic of process 800 may verify whether a local thread group number has been specified at block 807. A local thread group number may have a multi-dimensional value specified as an array of N integers (L1, L2, . . . , LN). In one embodiment, the dimension of a local thread group number may be equal to the dimension of a total thread number for performing a data parallel task.” Examiner Note: One of ordinary skill in the art would recognize that dimensions corresponding to an image width and an image height can correlate to X and Y dimensions.).

As per claim 5, Munshi teaches the apparatus of claim 2. Munshi also teaches wherein the thread group, the thread group range, the thread team, and the thread team dimension direction are specified via an application programming interface (API) (¶ [0053], “The number of thread groups and total number of threads may be specified in the API calls.” ¶ [0065], “When an identified dimension differs from a dimension specified, for example, using APIs, the processing logic of process 800 may select one of the identified dimension and the specified dimension according to, for example, a system setting.” ¶ [0067]-[0068], “The processing logic of process 800 may receive an API request from an application running in a host processor (or host processing unit), such as applications 103 in hosting systems 101 of FIG. 1. 
An API request may include a global thread number having a multi-dimensional value as an array of N integers (G1, G2, . . . , GN). Integer N may be a dimension associated with the data parallel task. The processing logic of process 800 may count the number of integers in an array from a multi-dimensional value to determine a dimension…In one embodiment, a local thread group number may be specified using APIs to compile a source code. If a local thread group number can be retrieved, the processing logic of process 800 may verify the local thread group number has already been specified. In some embodiments, a local thread group number may be specified using an API request from an application running in a hosting processor. An API request may include both a multi-dimensional global thread number and a multi-dimensional local thread group number.” See Table 2 on pgs. 5 & 6.).

As per claim 7, Munshi teaches the apparatus of claim 1. Munshi also teaches wherein the respective threads are dispatched in groups of a thread team size (¶ [0066], “At block 803, in some embodiment, the processing logic of process 800 may determine optimal thread group sizes for executing executable codes in parallel among multiple compute units according to the determined resource requirements. A group of threads may execute an executable code compiled concurrently in a target compute unit. An optimal thread group size may be the number of threads in a thread group to execute an executable code in a compute unit to maximize resource usage within the compute unit.”).

As per claim 8, Munshi teaches the apparatus of claim 7. Munshi also teaches wherein the thread team size comprises four threads (¶ [0051], “In one embodiment, a compute program executable may include description data associated with, for example, the type of target physical computing devices (e.g. 
a GPU or a CPU), versions, and/or compilation options or flags, such as a thread group sizes and/or thread group dimensions.” ¶ [0071], “At block 813, the processing logic of process 800 may partition the total number of threads according to thread group sizes to execute concurrently in multiple compute units for a data parallel task. An executable code may execute in parallel in a compute unit as a group of threads having a size equal to the corresponding thread group size. The size of a group of threads may be the number of threads in the group.” See also Table 2 on pgs. 5 & 6. Table 2 contains several execution parameters such as CL_DEVICE_MAX_THREAD_GROUP_SIZE and CL_DEVICE_SIMD_THREAD_GROUP_SIZE that one of ordinary skill in the art would recognize as being inclusive of four.).

As per claim 9, Munshi teaches the apparatus of claim 1. Munshi also teaches wherein the thread team is a sub-portion of a thread group, the thread group comprising a plurality of hardware threads to be executed by the plurality of processing resources (¶ [0053], “A compute kernel execution instance may also include an event object identifying a previous execution instance and/or expected total number of threads and number of thread groups to perform the execution. The number of thread groups and total number of threads may be specified in the API calls.” ¶ [0071], “At block 813, the processing logic of process 800 may partition the total number of threads according to thread group sizes to execute concurrently in multiple compute units for a data parallel task. An executable code may execute in parallel in a compute unit as a group of threads having a size equal to the corresponding thread group size. The size of a group of threads may be the number of threads in the group. The processing logic of process 800 may decompose a data parallel task into appropriate multi-dimensional blocks or thread groups that can be executed concurrently across one or more compute units.”). 
As per claim 12, it is a method claim comprising similar limitations to claim 1, so it is rejected for similar reasons.

As per claim 13, it is a method claim comprising similar limitations to claim 2, so it is rejected for similar reasons.

As per claim 14, it is a method claim comprising similar limitations to claim 3, so it is rejected for similar reasons.

As per claim 16, it is a method claim comprising similar limitations to claim 5, so it is rejected for similar reasons.

As per claim 17, it is a system claim comprising similar limitations to claim 1, so it is rejected for similar reasons. Munshi also teaches memory for storage of data including data for graphics processing (¶ [0048], “At block 501, the processing logic of process 500 may allocate one or more compute memory objects (e.g. streams) in a logical computing device to execute a compute executable. A compute memory object may include one or more data elements to represent, for example, an image memory object or an array memory object. An array memory object may be a one-dimensional collection of data element. An image memory object may be a collection to store two-dimensional, three-dimensional or other multi-dimensional data, such as a texture, a frame buffer or an image. A processing task may be performed by a compute program executable operating on compute memory objects or streams using compute memory APIs including reading from input compute memory objects and writing to output compute memory objects.”).

As per claim 18, it is a system claim comprising similar limitations to claim 2, so it is rejected for similar reasons.

As per claim 20, it is a system claim comprising similar limitations to claim 5, so it is rejected for similar reasons.

As per claim 21, it is a product claim comprising similar limitations to claim 1, so it is rejected for similar reasons. 
Munshi also teaches a non-transitory computer-readable medium (¶ [0087], “This apparatus may be specially constructed for the required purpose, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but is not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), RAMs, EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions…”).

As per claim 22, it is a product claim comprising similar limitations to claim 2, so it is rejected for similar reasons.

As per claim 23, it is a product claim comprising similar limitations to claim 3, so it is rejected for similar reasons.

As per claim 25, it is a product claim comprising similar limitations to claim 5, so it is rejected for similar reasons.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 4, 15, 19, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Munshi as applied to claims 2, 15, 18, and 22 above, and further in view of Bruestle et al. (US Pub. No. 2018/0107456 A1 hereinafter Bruestle).

As per claim 4, Munshi teaches the apparatus of claim 2. 
Munshi fails to teach translating a tile of the thread team. However, Bruestle teaches wherein the kernel of the graphics processor translates a one-dimensional tile of the thread team on the thread team dimension direction to at least one of a two-dimensional tile or a tile of another shape (¶ [0038], “Transforming the TILE representation to optimized platform-specific code such as OpenCL, CUDA, SPIR-V, or processor-specific machine code is challenging. TILE operations are compiled in two major stages. During the first stage, simplification, a number of mathematical transforms on the original operation are performed, resulting in a new version of the operation which meets certain criteria which simplify later analysis, but otherwise performs the same operation. Specifically, the original operation is “flattened” which removes the dimensionality of tensors, keeping only stride information. This simplified and flattened version of the operation is then passed to the second stage, code generation, during which it is further analyzed and turned into code for the platform in question. It is during the code generation stage when thread assignment, memory layout, tiling (for cache optimization) and other related steps are performed.”).

Munshi and Bruestle are considered to be analogous to the claimed invention because they are in the same field of thread allocation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Munshi with the tile transformation functionality of Bruestle to arrive at the claimed invention. The motivation to modify Munshi with the teachings of Bruestle is that transforming tiles can optimize thread assignment and subsequent execution because tile transformation can account for processor-specific specifications.

As per claim 15, it is a method claim comprising similar limitations to claim 4, so it is rejected for similar reasons. 
As per claim 19, it is a system claim comprising similar limitations to claim 4, so it is rejected for similar reasons.

As per claim 24, it is a product claim comprising similar limitations to claim 4, so it is rejected for similar reasons.

Claim(s) 6 is rejected under 35 U.S.C. 103 as being unpatentable over Munshi as applied to claim 1 above, and further in view of Elliot et al. (US Pub. No. 2017/0003972 A1 hereinafter Elliot *cited in IDS*).

As per claim 6, Munshi teaches the apparatus of claim 1. Munshi teaches wherein the thread team accesses a shared memory space of the processing resource, the shared memory space inaccessible to other threads that are outside of the thread team (¶ [0036]-[0037], “A local memory may be coupled with a compute processor. Local memory, shared among threads in a single thread group running in a compute processor, may be supported by the local memory coupled with the compute processor…In one embodiment, a local memory for a compute processor or compute unit may be used to allocate variables shared by all thread in a thread group or a thread group. A local memory may be implemented as a dedicated local storage, such as local shared memory 219 for Processor_1 and local shared memory 211 for Processor_L. In another embodiment, a local memory for a compute processor may be implemented as a read-write cache for a computing device memory for one or more compute processors of a computing device, such as data cache 215 for compute processors 205, 203 in the computing device 201. A dedicated local storage may not be shared by threads across different thread groups.”).

Munshi fails to teach the shared memory being register-based memory. Thus, it is necessary to bring in an additional reference that shows shared memory among threads being register-based. 
Accordingly, Elliot teaches wherein the threads access a shared local register (SLR) space of the processing resource, the SLR space inaccessible to other threads outside of the processing resource (¶ [0068], “In an embodiment the result is provided to one or more storage arrangements, e.g. memory or registers, that can be read by the active threads in the thread group. The storage arrangement may comprise a shared storage arrangement, e.g. a shared memory or register, that can be read by all the active threads in the thread group. This is particularly convenient because there may be other reasons to use a storage arrangement, e.g. a register, for the thread group as a whole, e.g. to store other data used with the execution of operations for the thread group. In another embodiment the storage arrangements may comprise a separate storage arrangement, e.g. a separate (e.g. private) memory or register, for each of the active threads in the thread group.”).

Munshi and Elliot are considered to be analogous to the claimed invention because they are in the same field of thread allocation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the shared local registers of Elliot with the shared memory space of Munshi to arrive at the claimed invention. This substitution would have been reasonable under MPEP § 2143 as both references allocate threads in groups to execute instructions.

Claim(s) 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Munshi as applied to claim 1 above, and further in view of Kang et al. (US Pub. No. 2013/0297919 A1 hereinafter Kang *cited in IDS*).

As per claim 10, Munshi teaches the apparatus of claim 1. Munshi teaches the respective threads of the thread team (¶ [0035], “FIG. 2 is a block diagram illustrating an example of a computing device with multiple compute processors (e.g. 
compute units) operating in parallel to execute multiple threads concurrently. Each compute processor may execute a plurality of threads in parallel (or concurrently). Threads that can be executed in parallel in a compute processor or compute unit may be referred to as a thread group. A computing device could have multiple thread groups that can be executed in parallel. For example, M threads are shown to execute as a thread group in computing device 205. Multiple thread groups, e.g. thread 1 of compute processor_1 205 and thread N of compute processor_L 203, may execute in parallel across separate compute processors on one computing device or across multiple computing devices.”).

Munshi fails to teach the threads being time-sliced on the processing resource. However, Kang teaches wherein the respective threads of the thread team are to wholly time-slice on the processing resource (¶ [0029], “Threads 325-n instantiated from the GPU Cm kernel 205 may operate on user-determined blocks 332-n of the data space 230 and may be dispatched by thread dispatcher 368 to run on one or more EUs 365-n in the GPU 120. There may be threads 325-n from multiple GPU Cm kernels 205 in a single GPU invocation. Only one thread 325-n can be executed on a single EU 365-n until its completion, but every EU 365-n can have multiple co-resident threads 325-n that are time-sliced to increase overall execution throughput.”).

Munshi and Kang are considered to be analogous to the claimed invention because they are in the same field of thread allocation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Munshi with the time-slicing functionality of Kang to arrive at the claimed invention. The motivation to modify Munshi with the teachings of Kang is that implementing time-sliced threads allows for optimal application execution and performance by increasing overall execution throughput. 
As per claim 11, Munshi teaches the apparatus of claim 1. Munshi fails to explicitly teach the threads of the thread team operating as a single virtual thread on the processing resource. However, Kang teaches wherein the threads of the thread team logically operate as a single virtual thread running on the processing resource (¶ [0029], “Threads 325-n instantiated from the GPU Cm kernel 205 may operate on user-determined blocks 332-n of the data space 230 and may be dispatched by thread dispatcher 368 to run on one or more EUs 365-n in the GPU 120. There may be threads 325-n from multiple GPU Cm kernels 205 in a single GPU invocation. Only one thread 325-n can be executed on a single EU 365-n until its completion, but every EU 365-n can have multiple co-resident threads 325-n that are time-sliced to increase overall execution throughput. Each EU 365-n may include multiple SIMD lanes 374 that may be used to execute its SIMD instructions that may be part of its ISA and generated by the Cm compiler 210 from the GPU Cm kernels 205. Every EU 365-n may also have access to a large general register file (GRF) 372 to reduce memory-access overhead.”). Refer to claim 10 for motivation to combine.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Munshi et al. (US Pub. No. 2009/0307699 A1) also teaches allocating thread groups to processing resources in response to an API call.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN ROBERT DAKITA EWALD whose telephone number is (703)756-1845. The examiner can normally be reached Monday-Friday: 9:00-5:30 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lewis Bullock, can be reached at (571)272-3759. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

J.D.E./Examiner, Art Unit 2199
/LEWIS A BULLOCK JR/Supervisory Patent Examiner, Art Unit 2199
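The §112 dispute over claims 2, 13, 18, and 22 turns on how a thread team local ID is "generated in an ordered manner ... based on ... a walk order corresponding to a thread team dimension direction." The sketch below is purely illustrative — the function, its parameters, and the reading of "walk order" as "which dimension of the thread group range varies fastest" are assumptions, not taken from the application or from Munshi — but it shows one concrete way the examiner's interpretation (local IDs that are sequential and multi-dimensional) could be realized:

```python
def thread_team_local_ids(range_x, range_y, walk_order="x"):
    """Assign sequential local IDs across a 2D thread group range.

    walk_order "x": the X dimension varies fastest (row-major walk);
    walk_order "y": the Y dimension varies fastest (column-major walk).
    Hypothetical illustration of a "walk order corresponding to a
    thread team dimension direction" -- not from the application.
    """
    if walk_order == "x":
        coords = ((x, y) for y in range(range_y) for x in range(range_x))
    else:
        coords = ((x, y) for x in range(range_x) for y in range(range_y))
    # Enumerate the walk: each (x, y) thread gets the next sequential ID.
    return {coord: i for i, coord in enumerate(coords)}

# A 4x2 thread group range walked in the X dimension direction:
ids = thread_team_local_ids(4, 2, walk_order="x")
print(ids[(0, 0)], ids[(3, 0)], ids[(0, 1)])  # 0 3 4
```

Under Munshi's cited scheme (¶ [0072]), the (x, y) pairs themselves would be the multi-dimensional local thread IDs, running from (0, 0) to (L1-1, L2-1); the sequential integer is one way to read the ordering the claims add on top of that.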

Prosecution Timeline

Sep 30, 2022 — Application Filed
Dec 01, 2022 — Response after Non-Final Action
Jan 20, 2026 — Non-Final Rejection: §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602267
DYNAMIC APPLICATION PROGRAMMING INTERFACE MODIFICATION TO ADDRESS HARDWARE DEPRECIATION
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12572377
TRANSMITTING INTERRUPTS FROM A VIRTUAL MACHINE (VM) TO A DESTINATION PROCESSING UNIT WITHOUT TRIGGERING A VM EXIT
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12547465
METHOD AND SYSTEM FOR VIRTUAL DESKTOP SERVICE MANAGER PLACEMENT BASED ON END-USER EXPERIENCE
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12536041
SYSTEM AND METHOD FOR DETERMINING MEMORY RESOURCE CONFIGURATION FOR NETWORK NODES TO OPERATE IN A DISTRIBUTED COMPUTING NETWORK
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12524281
C²MPI: A HARDWARE-AGNOSTIC MESSAGE PASSING INTERFACE FOR HETEROGENEOUS COMPUTING SYSTEMS
Granted Jan 13, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 99% (+55.6%)
Median Time to Grant: 3y 5m
PTA Risk: Low
Based on 21 resolved cases by this examiner. Grant probability derived from career allow rate.
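The headline figures follow from simple arithmetic on the stated career data. The sketch below reproduces them; the interview-lift formula is an assumption (lift read as a relative increase over the without-interview allow rate), since the page does not define it:

```python
# Career allow rate from the stated counts: 16 granted of 21 resolved.
granted, resolved = 16, 21
allow_rate = granted / resolved
print(round(allow_rate * 100))        # 76

# "+21.2% vs TC avg" implies a Tech Center average around:
tc_avg = allow_rate * 100 - 21.2
print(round(tc_avg, 1))               # 55.0

# Assumed reading of the "+55.6% interview lift": the with-interview
# rate (99%) is 1.556x the without-interview rate.
with_interview = 0.99
without_interview = with_interview / 1.556
print(round(without_interview * 100))  # 64
```

On that reading, cases without an interview would allow at roughly 64%, rising to 99% with one, which is consistent with the blended 76% career rate sitting between the two.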
