Prosecution Insights
Last updated: April 19, 2026
Application No. 17/884,755

CONCURRENT COMPUTE CONTEXT

Final Rejection under §103
Filed: Aug 10, 2022
Examiner: VU, KHOA
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Intel Corporation
OA Round: 2 (Final)

Predictions:
Grant Probability: 68% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 1m
Grant Probability with Interview: 84%

Examiner Intelligence

Career Allow Rate: 68% (above average) — 234 granted / 345 resolved, +5.8% vs Tech Center average
Interview Lift: +15.8% (strong) — allowance-rate difference between resolved cases with and without an interview
Typical Timeline: 3y 1m average prosecution; 27 applications currently pending
Career History: 372 total applications across all art units
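The headline figures above are internally consistent and easy to reproduce from the dashboard's own counts. A minimal check (all numbers are taken from the dashboard; the variable names are illustrative only):

```python
# Reproduce the examiner's career allow rate from the dashboard counts.
granted = 234    # career grants (from "234 granted / 345 resolved")
resolved = 345   # career resolved cases

allow_rate = granted / resolved
print(f"{allow_rate:.1%}")              # -> 67.8%, displayed rounded as 68%

# The 84% with-interview figure is consistent with adding the
# +15.8% interview lift to the 68% baseline.
interview_lift = 0.158
print(f"{allow_rate + interview_lift:.0%}")   # -> 84%
```

This is arithmetic on the report's stated numbers, not a model of how the underlying grant-probability prediction is actually computed.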

Statute-Specific Performance

§101:  8.2%  (-31.8% vs TC avg)
§103: 73.3%  (+33.3% vs TC avg)
§102:  8.1%  (-31.9% vs TC avg)
§112:  5.9%  (-34.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 345 resolved cases

Office Action — §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to amended claims 1 and 14, filed on 01/01/2026, have been considered but are not persuasive. However, the examiner finds that some amended limitations are taught by previously introduced references.

In the Remarks, page 9, third paragraph, applicant argued that the amended claim includes "a compute engine associated with a partition of compute circuitry," "a first command streamer configured to submit commands for first compute workloads to a first plurality of hardware command queues of the compute engine," "a second command streamer configured to submit commands for second compute workloads to a second plurality of hardware command queues of the compute engine" and "circuitry configured to schedule the commands for the first compute workloads and the second compute workloads to the compute engine for execution via the partition of compute circuitry," and that Vembu's VCS is not used for scheduling to a compute engine, thus failing to teach the elements of the amended claim.

The examiner respectfully disagrees with Applicant's argument. In fact, in paragraph [0026], Vembu discloses "compute engines within the device can readily acquire new work items for execution with minimal latency" and, in [0028], "The software can then submit work items directly to a tile and local hardware schedulers within the tile can schedule the workload to the appropriate engine within the tile. Each engine can execute the same command buffer.
When an engine is ready to execute a new work item, the engine can dynamically and atomically acquire the next chunk (e.g., partition) of work for execution"; and, in [0183], "one or more microchips or integrated circuits interconnected using a parent-board, hardwired logic". Vembu thus teaches a compute engine associated with a partition of compute circuitry (local hardware).

Furthermore, in paragraph [0157], Vembu discloses "Fig. 16C, engines of the engine block tile include compute command streamer (CCS 625)" and, in [0187], a graphics processor comprising a first tile of graphics processing engines, a second tile of graphics processing engines, and an interface between a host system and the graphics processor. The interface can be configured to receive a set of commands for a workload having a first partition and a second partition and submit the set of commands to the first tile of graphics processing engines. The first tile of graphics processing engines can read a first partition identifier from a first hardware context, where the first partition identifier is associated with the first partition. Vembu thus teaches a first command streamer (e.g., CCS 625 of the first engine of the engine block tile, Fig. 16C) to submit commands for first compute workloads to a first plurality of hardware command queues of the compute engine (a set of command queues for a workload of first/second partitions to the first tile of graphics processing engines).

Likewise, in paragraph [0157], Vembu discloses "Fig. 16C, engines of the engine block tile include compute command streamer (CCS 625)" and, in [0187], that the interface can be configured to receive a set of commands for a workload having a first partition and a second partition and submit the set of commands to the second tile of graphics processing engines. The second tile of graphics processing engines can read a second partition identifier from a second hardware context, where the second partition identifier is associated with the second partition. Vembu thus teaches a second command streamer (e.g., CCS 625 of the second engine of the engine block tile, Fig. 16C) to submit commands for second compute workloads to a second plurality of hardware command queues of the compute engine (a set of command queues for a workload of first/second partitions to the second tile of graphics processing engines).

Finally, in paragraph [0133], Vembu discloses "FIG. 12, integrated circuit 1200 includes at least one graphics processor 1210"; in [0028], "The software can then submit work items directly to a tile and local hardware schedulers within the tile can schedule the workload to the appropriate engine within the tile. Each engine can execute the same command buffer. When an engine is ready to execute a new work item, the engine can dynamically and atomically acquire the next chunk (e.g., partition) of work for execution"; and, in [0187], a graphics processor comprising a first tile of graphics processing engines, a second tile of graphics processing engines, and an interface between a host system and the graphics processor. The interface can be configured to receive a set of commands for a workload having a first partition and a second partition, submit the set of commands to the first tile of graphics processing engines, and submit the set of commands to the second tile of graphics processing engines. The first tile of graphics processing engines can read a first partition identifier from a first hardware context. The first tile can then conditionally execute commands of the first partition while bypassing commands of the second partition. The second tile of graphics processing engines can read a second partition identifier from a second hardware context, where the second partition identifier is associated with the second partition.
The second tile can then conditionally execute commands of the second partition". Vembu thus teaches circuitry (including a graphics processor) configured to schedule the commands for the first compute workloads (executing commands of the first partition) and the second compute workloads (executing commands of the second partition) to the compute engine for execution via the partition of compute circuitry.

Independent claim 14 has been amended similarly to claim 1 and is rejected per the explanation above. Dependent claims 2-8, 10-13, and 15-20 depend on independent claims 1, 9, and 14 and are rejected under the current rejection.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6 and 14-18 are rejected under 35 U.S.C. 103 as being unpatentable over Vembu et al. (U.S. 2020/0219223 A1) in view of Evans et al. (U.S. 2021/0294707 A1).

Regarding Claim 1, Vembu discloses an apparatus (Vembu, [0033] "FIG. 1, a processing system 100") comprising: a system interface (Vembu, Fig. 16A, [0153] "workloads can be submitted directly to an engine block tile 1605A-1605D via a doorbell 1603A-1603D within a system graphics interface 1602A-1602D"; Vembu teaches a system interface 1602A-1602D); and a general-purpose graphics processor coupled with the system interface (Vembu, Fig. 15, [0149] "The GPGPU 1520 can also include a set of resources that can be shared by the engine block tiles 1524A-1524N"; [0150] "Software that executes on a host processor can submit work items to the global scheduler 1522, which can distribute the various work items to one or more engine block tiles 1524A-1524N"; [0152] "in FIG. 16A, the graphics processing system 1600 includes an application and/or graphics driver (app/driver 1601) that can send workloads 1604A-1604D to one or more engine block tiles 1605A-1605D"; and Fig. 16A, [0153] "workloads can be submitted directly to an engine block tile 1605A-1605D via a doorbell 1603A-1603D within a system graphics interface 1602A-1602D". Vembu teaches that a GPGPU (1520) can use the global scheduler (1522, Fig. 15) to distribute workloads of work items to engine block tiles (1524A-1524N) via the system interface (1602A-1602D, Fig. 16A)); the general-purpose graphics processor comprising: a plurality of graphics processor hardware resources configured to be partitioned into a plurality of isolated partitions (Vembu, Fig. 4, [0056] "the graphics core array 414 include one or more blocks of graphics cores (e.g., graphics core(s) 415A, graphics core(s) 415B), each block including one or more graphics cores. Each graphics core includes a set of graphics execution resources" and [0139] "FIG. 14A. The graphics core 1400 can include multiple slices 1401A-1401N or partition for each core, and a graphics processor can include multiple instances of the graphics core 1400". Vembu teaches a plurality of graphics processor hardware resources (graphics cores), each graphics processor being partitioned into a plurality of partitions or slices), each of the plurality of isolated partitions including: a compute engine associated with a partition of compute circuitry (Vembu, [0026] "compute engines within the device can readily acquire new work items for execution with minimal latency"; [0028] "The software can then submit work items directly to a tile and local hardware schedulers within the tile can schedule the workload to the appropriate engine within the tile. Each engine can execute the same command buffer. When an engine is ready to execute a new work item, the engine can dynamically and atomically acquire the next chunk (e.g., partition) of work for execution"; and [0183] "one or more microchips or integrated circuits interconnected using a parent-board, hardwired logic". Vembu teaches a compute engine associated with a partition of compute circuitry (local hardware)); a first command streamer configured to submit commands for first compute workloads to a first plurality of hardware command queues of the compute engine (Vembu, [0157] "Fig. 16C, engines of the engine block tile include compute command streamer (CCS 625)" and [0187]: a graphics processor comprising a first tile of graphics processing engines, a second tile of graphics processing engines, and an interface between a host system and the graphics processor. The interface can be configured to receive a set of commands for a workload having a first partition and a second partition and submit the set of commands to the first tile of graphics processing engines. The first tile of graphics processing engines can read a first partition identifier from a first hardware context, where the first partition identifier is associated with the first partition. Vembu teaches a first command streamer (e.g., CCS 625 of the first engine of the engine block tile, Fig. 16C) to submit commands for first compute workloads to a first plurality of hardware command queues of the compute engine (a set of command queues for a workload of first/second partitions to the first tile of graphics processing engines)); a second command streamer configured to submit commands for second compute workloads to a second plurality of hardware command queues of the compute engine (Vembu, [0157] "Fig. 16C, engines of the engine block tile include compute command streamer (CCS 625)" and [0187]: the interface can be configured to receive a set of commands for a workload having a first partition and a second partition and submit the set of commands to the second tile of graphics processing engines. The second tile of graphics processing engines can read a second partition identifier from a second hardware context, where the second partition identifier is associated with the second partition. Vembu teaches a second command streamer (e.g., CCS 625 of the second engine of the engine block tile, Fig. 16C) to submit commands for second compute workloads to a second plurality of hardware command queues of the compute engine (a set of command queues for a workload of first/second partitions to the second tile of graphics processing engines)); and circuitry configured to schedule the commands for the first compute workloads and the second compute workloads to the compute engine for execution via the partition of compute circuitry (Vembu, [0133] "FIG. 12, integrated circuit 1200 includes at least one graphics processor 1210"; [0028] "The software can then submit work items directly to a tile and local hardware schedulers within the tile can schedule the workload to the appropriate engine within the tile. Each engine can execute the same command buffer. When an engine is ready to execute a new work item, the engine can dynamically and atomically acquire the next chunk (e.g., partition) of work for execution"; and [0187]: a graphics processor comprising a first tile of graphics processing engines, a second tile of graphics processing engines, and an interface between a host system and the graphics processor. The interface can be configured to receive a set of commands for a workload having a first partition and a second partition, submit the set of commands to the first tile of graphics processing engines, and submit the set of commands to the second tile of graphics processing engines. The first tile of graphics processing engines can read a first partition identifier from a first hardware context. The first tile can then conditionally execute commands of the first partition while bypassing commands of the second partition. The second tile of graphics processing engines can read a second partition identifier from a second hardware context, where the second partition identifier is associated with the second partition. The second tile can then conditionally execute commands of the second partition. Vembu teaches circuitry (including a graphics processor) configured to schedule the commands for the first compute workloads (executing commands of the first partition) and the second compute workloads (executing commands of the second partition) to the compute engine for execution via the partition of compute circuitry).

However, Vembu does not explicitly teach a plurality of graphics processor hardware resources configured to be partitioned into a plurality of isolated partitions.
Evans teaches a plurality of graphics processor hardware resources configured to be partitioned into a plurality of isolated partitions (Evans, [0047] "FIG. 1 is a block diagram illustrating a partitioned 112, 114, 116, 118 graphics processing unit 110, a partition 112, 114, 116, 118 is isolated by hardware and software features on a GPU 110, such that each partition 112, 114, 116, 118 appears as an independent GPU 110, a physically isolated computing slice on a GPU 110 is a set of processor resources" and [0108] "FIG. 10, one or more graphics processors 1008". Evans teaches a plurality of graphics processor hardware resources configured to be partitioned into a plurality of isolated partitions (partitions 112, 114, 116, 118)).

Vembu and Evans are combinable because they are from the same field of endeavor (systems and methods for image processing) and try to solve similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Vembu to incorporate partitioning graphics processors into a plurality of isolated partitions (as taught by Evans), because Evans provides a plurality of graphics processor hardware resources configured to be partitioned into a plurality of isolated partitions (partitions 112, 114, 116, 118) (Evans, [0047]). Doing so may, when executed, facilitate each partition directly addressing and accessing memory resources on a GPU using traditional and new direct memory access (DMA) techniques without using software on said partition (Evans, [0049]).

Regarding Claim 2, the apparatus as in claim 1: Vembu does not explicitly teach wherein each of the plurality of isolated partitions includes a cluster of graphics processor cores. However, Evans teaches each of the plurality of isolated partitions includes a cluster of graphics processor cores (Evans, Fig. 1, [0047] "a partition 112, 114, 116, 118 is isolated by hardware and software features on a GPU 110"; Fig. 19B, [0170] "GPGPU 1930 includes memory 1944A-1944B coupled with compute clusters 1936A-1936H via a set of memory controllers 1942A-1942B"; and [0171] "compute clusters 1936A-1936H each include a set of graphics cores, such as graphics core 1900 of FIG. 19A". Evans teaches each of the plurality of isolated partitions includes a cluster of graphics processor cores). Vembu and Evans are combinable; see the rationale in claim 1.

Regarding Claim 3, Vembu discloses the apparatus as in claim 1, wherein each of the plurality of isolated partitions includes a tile of graphics processor engines (Vembu, [0159] "FIG. 17 illustrates a tile work partitioning and scheduling system 1700, according to embodiments described herein. The tile work partitioning and scheduling system 1700 enables a workload to be distributed across multiple GPUs 1730A-1730D, where each of the multiple GPUs can be an instance of an engine block tile 1605A-1605D as in FIG. 16A". Vembu teaches a tile of graphics processor engines (engine block tiles 1605A-1605D) for the plurality of partitions). However, Vembu does not teach each of the plurality of isolated partitions includes a tile of graphics processor engines. Evans teaches each of the plurality of isolated partitions includes a tile of graphics processor engines (Evans, Fig. 1, [0047] "a partition 112, 114, 116, 118 is isolated by hardware and software features on a GPU 110" and [0166] "graphics processor 1840 includes a thread dispatcher to dispatch execution threads to one or more shader cores 1855A-1855N and a tiling unit 1858 to accelerate tiling operations for tile-based rendering". Evans teaches each of the plurality of isolated partitions includes a tile of graphics processor engines (tiling unit 1858)). The combination of Vembu and Evans can be used to teach each of the plurality of isolated partitions includes a tile of graphics processor engines.
Vembu and Evans are combinable; see the rationale in claim 1.

Regarding Claim 4, Vembu discloses the apparatus as in claim 3, wherein each of the plurality of isolated partitions resides on a separate semiconductor die (Vembu, [0129] "FIG. 11B, Each unit of logic 1172, 1174 can be implemented within a semiconductor die and coupled with the substrate 1180 via an interconnect structure 1173" and [0139] "FIG. 14A. The graphics core 1400 can include multiple slices 1401A-1401N or partition for each core". Vembu teaches each of the plurality of partitions resides on a separate semiconductor die). However, Vembu does not explicitly teach each of the plurality of isolated partitions resides on a separate semiconductor die. Evans teaches each of the plurality of isolated partitions resides on a separate semiconductor die (Evans, Fig. 1, [0047] "a partition 112, 114, 116, 118 is isolated by hardware and software features on a GPU 110" and [0255] "PPU is embodied on a single semiconductor substrate. PPU may be an integrated GPU ("iGPU") included in chipset of motherboard". Evans teaches each of the plurality of isolated partitions resides on a separate semiconductor die (semiconductor substrate)). Vembu and Evans are combinable; see the rationale in claim 1.

Regarding Claim 5, Vembu discloses the apparatus as in claim 1, wherein the plurality of isolated partitions include a first isolated partition and a second isolated partition, wherein the first isolated partition and the second isolated partition include separate functional units, separate cache memory, and separate paths to local memory of the general-purpose graphics processor (Vembu, Fig. 19, [0167] "As shown at block 1902. At block 1904 the operations partition the set of commands into a first partition and a second partition. At block 1912, the operations additionally execute the first partition via the first graphics processing engine tile and execute the second partition via the second graphics processing engine tile"; [0159] "FIG. 17, The tile work partitioning and scheduling system 1700 enables a workload to be distributed across multiple GPUs 1730A-1730D"; and [0160] "a separate hardware context 1720A-1720D can be created and associated with each respective GPU 1730A-1730D…A batch buffer start command is inserted into the command ring buffers 1710A-1710D associated with the respective GPU 1730A-1730D". Vembu teaches a plurality of partitions including a first partition and a second partition with separate functional units (first graphics processing engine tile, second graphics processing engine tile), separate cache memory, and separate paths to local memory (the command ring buffers 1710A-1710D with separate paths, Fig. 17)). However, Vembu does not explicitly teach a first isolated partition and a second isolated partition. Evans teaches a first isolated partition and a second isolated partition (Evans, Fig. 1, [0047] "a partition 112, 114, 116, 118 is isolated by hardware and software features on a GPU 110". Evans teaches a first isolated partition (112) and a second isolated partition (114)). The combination of Vembu and Evans can be used to teach the plurality of isolated partitions include a first isolated partition and a second isolated partition, wherein the first isolated partition and the second isolated partition include separate functional units, separate cache memory, and separate paths to local memory of the general-purpose graphics processor. Vembu and Evans are combinable; see the rationale in claim 1.

Regarding Claim 6, Vembu discloses the apparatus as in claim 5, wherein the first isolated partition is presented via the system interface as a first sub-device and the second isolated partition is presented via the system interface as a second sub-device (Vembu, Fig. 16A, [0153] "workloads can be submitted directly to an engine block tile 1605A-1605D via a doorbell 1603A-1603D within a system graphics interface 1602A-1602D". Vembu teaches the first partition is presented via the system interface as a first sub-device (1602A, Fig. 16A) and the second partition is presented via the system interface as a second sub-device (1602B, Fig. 16A)). However, Vembu does not explicitly teach a first isolated partition and a second isolated partition. Evans teaches a first isolated partition and a second isolated partition (Evans, Fig. 1, [0047] "a partition 112, 114, 116, 118 is isolated by hardware and software features on a GPU 110". Evans teaches a first isolated partition (112) and a second isolated partition (114)). The combination of Vembu and Evans can be used to teach the first isolated partition is presented via the system interface as a first sub-device and the second isolated partition is presented via the system interface as a second sub-device. Vembu and Evans are combinable; see the rationale in claim 1.
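To make the architecture disputed in claims 1-6 easier to follow, the claimed arrangement — an isolated partition containing one compute engine, two command streamers each feeding their own plurality of hardware command queues, and scheduling circuitry dispatching queued commands to the partition's compute circuitry — can be sketched abstractly. This is an editorial illustration only: every class name, the queue count, and the drain-in-order scheduling policy are assumptions, not the claimed invention or anything Vembu or Evans discloses.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class CommandStreamer:
    """Submits commands for one stream of compute workloads to its
    own set of hardware command queues (illustrative model only)."""
    queues: list  # the "plurality of hardware command queues"

    def submit(self, command):
        # Place each command on the currently shortest queue.
        shortest = min(self.queues, key=len)
        shortest.append(command)

@dataclass
class IsolatedPartition:
    """One isolated partition: two command streamers plus scheduling
    circuitry feeding a single compute engine (illustrative)."""
    first_streamer: CommandStreamer
    second_streamer: CommandStreamer
    executed: list = field(default_factory=list)  # stands in for the engine

    def schedule(self):
        # "Scheduling circuitry": drain both streamers' queues to the
        # partition's compute engine for execution.
        for streamer in (self.first_streamer, self.second_streamer):
            for queue in streamer.queues:
                while queue:
                    self.executed.append(queue.popleft())

def make_partition(queues_per_streamer=2):
    return IsolatedPartition(
        CommandStreamer([deque() for _ in range(queues_per_streamer)]),
        CommandStreamer([deque() for _ in range(queues_per_streamer)]),
    )

part = make_partition()
part.first_streamer.submit("wl1-cmd0")   # first compute workload
part.second_streamer.submit("wl2-cmd0")  # second compute workload
part.schedule()
print(part.executed)  # -> ['wl1-cmd0', 'wl2-cmd0']
```

The point of contention in the rejection maps onto this sketch directly: both streamers submit into the same partition, and one scheduling step dispatches both workloads to the single compute engine.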
Regarding Claim 14 (Currently amended), the combination of Vembu and Evans discloses a graphics processing system (Vembu, [0001] "Computing systems may include a graphics processor to perform graphics processing") comprising: a system interface; a memory device (Vembu, [0038] "The memory device 120 can be a dynamic random-access memory (DRAM) device, a static random-access memory (SRAM) device, flash memory device"); and a general-purpose graphics processor coupled with the system interface and the memory device (Vembu, [0038] "the memory device 120 can operate as system memory for the system 100, to store data 122 and instructions 121 for use when the one or more processors 102 executes an application or process"), the general-purpose graphics processor comprising: a plurality of graphics processor hardware resources configured to be partitioned into a plurality of isolated partitions, each of the plurality of isolated partitions including: a compute engine associated with a partition of compute circuitry; a first command streamer configured to submit commands for first compute workloads to a first plurality of hardware command queues of the compute engine; a second command streamer configured to submit commands for second compute workloads to a second plurality of hardware command queues of the compute engine; and circuitry configured to schedule the commands for the first compute workloads and the second compute workloads to the compute engine for execution via the partition of compute circuitry. Claim 14 is substantially similar to claim 1 and is rejected based on similar analyses.

Regarding Claim 15, the combination of Vembu and Evans discloses the graphics processing system as in claim 14, wherein each of the plurality of isolated partitions includes a cluster of graphics processor cores or a tile of graphics processor engines. Claim 15 is substantially similar to claim 2 and is rejected based on similar analyses.

Regarding Claim 16, the combination of Vembu and Evans discloses the graphics processing system as in claim 15, wherein each of the plurality of isolated partitions resides on a separate semiconductor die. Claim 16 is substantially similar to claim 4 and is rejected based on similar analyses.

Regarding Claim 17, the combination of Vembu and Evans discloses the graphics processing system as in claim 14, wherein the plurality of isolated partitions include a first isolated partition and a second isolated partition, wherein the first isolated partition and the second isolated partition include separate functional units, separate cache memory, and separate paths to local memory of the general-purpose graphics processor. Claim 17 is substantially similar to claim 5 and is rejected based on similar analyses.

Regarding Claim 18, the combination of Vembu and Evans discloses the graphics processing system as in claim 17, wherein the first isolated partition is presented via the system interface as a first sub-device and the second isolated partition is presented via the system interface as a second sub-device. Claim 18 is substantially similar to claim 6 and is rejected based on similar analyses.

Claims 7, 8, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Vembu et al. (U.S. 2020/0219223 A1) in view of Evans et al. (U.S. 2021/0294707 A1) and further in view of Raganathan et al. (U.S. 2022/0138895 A1).

Regarding Claim 7, the apparatus as in claim 6: the combination of Vembu and Evans does not explicitly teach wherein the first isolated partition includes a first compute partition and a second compute partition, the first compute partition and the second compute partition configurable to execute separate compute contexts while sharing graphics processor hardware resources of the first isolated partition.
However, Raganathan teaches the first isolated partition includes a first compute partition and a second compute partition, the first compute partition and the second compute partition configurable to execute separate compute contexts while sharing graphics processor hardware resources of the first isolated partition (Raganathan, Fig. 28, [0372] "the GPU walker is to select tiles of the multi-tile GPU 2800 for the performance of each partition unit of the compute work 2850, with an appropriate tile being assigned to each work unit. In the illustrated example, the GPU is to assign work unit P0 to Tile-0, work unit P1 to Tile-2, work unit P2 to Tile-6, and work unit P3 to Tile-7". Raganathan teaches the first partition includes a first compute partition (partition unit P0) and a second compute partition (partition unit P1), the first compute partition and the second compute partition configurable to execute separate compute contexts (separate Tile-0, Tile-1, Fig. 28) while sharing graphics processor hardware resources (multi-tile GPU 2800) of the first partition). The combination of Vembu, Evans, and Raganathan can be used to teach the first isolated partition includes a first compute partition and a second compute partition, the first compute partition and the second compute partition configurable to execute separate compute contexts while sharing graphics processor hardware resources of the first isolated partition.

Vembu, Evans, and Raganathan are combinable because they are from the same field of endeavor (systems and methods for image processing) and try to solve similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Vembu to incorporate compute partitions (as taught by Raganathan) in order to apply the first compute partition and the second compute partition, because Raganathan provides a first partition that includes a first compute partition (partition unit P0) and a second compute partition (partition unit P1), the first compute partition and the second compute partition configurable to execute separate compute contexts (separate Tile-0, Tile-1, Fig. 28) while sharing graphics processor hardware resources (multi-tile GPU 2800) of the first partition (Raganathan, Fig. 28, [0372]). Doing so may provide sufficient area efficiency for cache and efficient compute operation in a multi-tile GPU architecture (Raganathan, [0369]).

Regarding Claim 8, the apparatus as in claim 7: Vembu teaches engines of the engine block tile include compute command streamers (CCS 625) (Vembu, [0157], Fig. 16C). However, the combination of Vembu and Evans does not explicitly teach wherein the first command streamer of the first isolated partition is associated with the first compute partition and the second command streamer of the first isolated partition is associated with the second isolated partition. Raganathan teaches a command streamer of the first isolated partition associated with the first compute partition and a command streamer of the first isolated partition associated with the second isolated partition (Raganathan, [0247] "Scheduling operations include determining which workload to run next, submitting a workload to a command streamer, pre-empting existing workloads running on an engine"; [0263] "The graphics processor 1620 has a tiled architecture, may include a graphics processing engine cluster 1622 having multiple instances of the graphics processing engine 1610 of FIG. 16A within a graphics engine tile 1610A-1610D"; and Fig. 28, [0372] "the GPU walker is to select tiles of the multi-tile GPU 2800 for the performance of each partition unit of the compute work 2850, with an appropriate tile being assigned to each work unit. In the illustrated example, the GPU is to assign work unit P0 to Tile-0, work unit P1 to Tile-2, work unit P2 to Tile-6, and work unit P3 to Tile-7". Raganathan teaches a command streamer of the first partition associated with the first compute partition (partition unit P0) and a command streamer of the first partition associated with the second isolated partition (partition unit P1)). The combination of Vembu, Evans, and Raganathan can be used to teach the first command streamer of the first isolated partition (as taught by Vembu and Evans) associated with the first compute partition (as taught by Raganathan) and the second command streamer of the first isolated partition (as taught by Vembu and Evans) associated with the second isolated partition (as taught by Raganathan). Vembu, Evans, and Raganathan are combinable; see the rationale in claim 7.

Regarding Claim 19, the combination of Vembu, Evans, and Raganathan discloses the graphics processing system as in claim 18, wherein the first isolated partition includes a first compute partition and a second compute partition, the first compute partition and the second compute partition configurable to execute separate compute contexts while sharing graphics processor hardware resources of the first isolated partition. Claim 19 is substantially similar to claim 7 and is rejected based on similar analyses.
Regarding Claim 20, the combination of Vembu, Evans, and Reganathan discloses the graphics processing system as in claim 19, wherein the first command streamer of the first isolated partition is associated with the first compute partition and the second command streamer of the first isolated partition is associated with the second isolated partition. Claim 20 is substantially similar to claim 8 and is rejected based on similar analysis.

Allowable Subject Matter

Claims 9-13 are allowed. The following is a statement of reasons for the indication of allowable subject matter: Regarding independent claims 1 and 14, the closest prior art references the examiner found, Vembu et al. (U.S. 2020/0219223 A1) in view of Evans et al. (U.S. 2021/0294707 A1), have been made of record as teaching: a plurality of graphics processor hardware resources configured to be partitioned into a plurality of partitions (Vembu, Fig. 4, [0056], [0139]); a first command streamer and a second command streamer (Vembu, Fig. 16C, [0157]) configured to schedule general-purpose graphics compute workloads submitted to a first plurality of command queues associated with the first command streamer and a second plurality of command queues associated with the second command streamer (Vembu, [0067], Fig. 9B, [0120]); and a plurality of graphics processor hardware resources configured to be partitioned into a plurality of isolated partitions (Evans, Fig. 1, [0047]), as recited in claims 1 and 14.
However, the art of record does not teach or suggest the claim taken as a whole, and in particular the limitations pertaining to: acquiring at least a minimum number of graphics processor hardware resources of the first isolated partition to associate with the first command queue, wherein the first command queue includes commands associated with a workload and the workload specifies the minimum number of graphics processor hardware resources; dispatching threads of the workload to physical thread slots of at least the minimum number of graphics processor hardware resources; and load balancing a number of graphics processor hardware resources assigned to the workload between the minimum number of graphics processor hardware resources and a maximum number of graphics processor hardware resources, as recited in independent claim 9. Dependent claims 10-13 are allowed because they depend on independent claim 9.

Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled "Comments on Statement of Reasons for Allowance".

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
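The allowed claim-9 limitation describes a min/max resource envelope: a workload must acquire at least its specified minimum number of hardware resources, have its threads dispatched onto those resources' thread slots, and then be load-balanced up to a maximum. A minimal Python sketch of that policy (function and parameter names are hypothetical, chosen only to mirror the claim language):

```python
def assign_resources(workload_min, workload_max, available):
    """Acquire at least the workload's minimum number of hardware
    resources; load-balance the grant up to its maximum."""
    if available < workload_min:
        raise RuntimeError("cannot acquire minimum resources")
    # Grant everything available, capped at the workload's maximum.
    return min(available, workload_max)

def dispatch_threads(num_threads, num_resources):
    """Round-robin threads onto the physical thread slots of the
    assigned resources; returns the resource index per thread."""
    return [t % num_resources for t in range(num_threads)]

print(assign_resources(workload_min=4, workload_max=16, available=8))   # 8
print(assign_resources(workload_min=4, workload_max=16, available=32))  # 16
print(dispatch_threads(num_threads=5, num_resources=2))  # [0, 1, 0, 1, 0]
```

The sketch only illustrates why the limitation is narrower than a generic scheduler: the grant is bounded on both sides by workload-specified values rather than left to the hardware.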
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KHOA VU whose telephone number is (571) 272-5994. The examiner can normally be reached 8:00-4:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KHOA VU/
Examiner, Art Unit 2611

/KEE M TUNG/
Supervisory Patent Examiner, Art Unit 2611

Prosecution Timeline

Aug 10, 2022: Application Filed
Sep 21, 2022: Response after Non-Final Action
Oct 18, 2025: Non-Final Rejection — §103
Jan 13, 2026: Response Filed
Jan 13, 2026: Applicant Interview (Telephonic)
Jan 18, 2026: Examiner Interview Summary
Feb 25, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598266: IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12597087: HIGH-PERFORMANCE AND LOW-LATENCY IMPLEMENTATION OF A WAVELET-BASED IMAGE COMPRESSION SCHEME
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12578941: TECHNIQUE FOR INTER-PROCEDURAL MEMORY ADDRESS SPACE OPTIMIZATION IN GPU COMPUTING COMPILER
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12567181: SYSTEMS AND METHODS FOR REAL-TIME PROCESSING OF MEDICAL IMAGING DATA UTILIZING AN EXTERNAL PROCESSING DEVICE
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12548431: CONTEXTUALIZED AUGMENTED REALITY DISPLAY SYSTEM
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 84% (+15.8%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate

Based on 345 resolved cases by this examiner. Grant probability derived from career allow rate.
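The "With Interview" figure is consistent with treating the interview lift as additive percentage points on top of the career allow rate; that is an assumption about how the dashboard computes it, not a documented formula:

```python
# Career allow rate from the dashboard's own counts: 234 granted / 345 resolved.
career_allow_rate = 234 / 345 * 100   # about 67.8, displayed as 68%
interview_lift = 15.8                 # percentage-point lift among interviewed cases

print(round(career_allow_rate))                    # 68
print(round(career_allow_rate + interview_lift))   # 84
```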
