DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to applicant’s amendment filed on 01/20/2026.
Claims 1-20 are pending and examined.
Response to Arguments
Applicant's arguments filed 01/20/2026 with respect to 35 U.S.C. 101 have been fully considered and are persuasive. The rejections under 35 U.S.C. 101 have been withdrawn.
Applicant's arguments filed 01/20/2026 with respect to 35 U.S.C. 112 have been fully considered and are persuasive. The rejections under 35 U.S.C. 112 have been withdrawn.
Applicant's arguments filed 01/20/2026 with respect to 35 U.S.C. 102 and 103 have been fully considered, but they are not persuasive. Applicant argues that Saleh “cannot teach or suggest the above features of claims 1, 10, and 19” because “[i]t appears the Office Action is equating the task queue entries or task list entries (collectively ‘task entries’) as the recited ‘execution units’” and because “the execution units are now described in claims 1, 10, and 19 as one of a workgroup or a kernel instance, excluding the cited ‘task entries’ that are grouped into wavefronts.” Examiner respectfully disagrees; see the 35 U.S.C. 102 rejections below for a detailed analysis. Examiner would like to clarify that the task queue entries or task list entries of Saleh are not equated with the recited execution units, but rather with the recited execution items. Additionally, the examiner interprets Saleh’s work-items being executed simultaneously as a wavefront, where multiple wavefronts are included and executed by identifying their work group, as each execution item having a type that is a workgroup. Saleh’s command processor launching wavefronts based on the task lists and their associated work group, and the scheduler scheduling various wavefronts on different compute and SIMD units, correlates to scheduling the plurality of execution items, based on their type, for execution together on an execution unit comprising one of a compute unit or a shader engine. The processor and APD accessing the memory for operation, which can include a cache, correlates to cache lines of data in a cache. The SIMD unit executing a first function, completing a particular code segment, and storing a new task list entry for subsequent execution correlates to at least two of the plurality of execution items.
The completion of a particular code segment may call another function using parameters or return a value in a return instruction, and therefore would include data referenced in the task list entry that needs to be maintained between the completion of the first function and the start of the related task list entry. Therefore, the completion of a first execution of a function and the execution of a subsequent task list entry correlates to at least two of the plurality of execution items executing at points in time that prevent a cache line of the data from being evicted from a cache. Accordingly, the 35 U.S.C. 102 and 103 rejections are maintained.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “A device, comprising: … a scheduler configured to: … identify an execution unit of the processor for executing the plurality of execution items together” in claim 10. The term “scheduler” is the generic placeholder, which is coupled with the functional language “configured to identify an execution unit of the processor for executing….”
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-3, 7-8, 10-12, 16-17, and 19-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Saleh et al. (U.S. Patent Application Publication No. US 2020/0004585 A1), hereinafter “Saleh.”
With regards to Claim 1, Saleh teaches:
A method, comprising:
identifying a plurality of execution items that share data, wherein the execution items have matching commonality metadata (Paragraphs 17, 27 and 30, “In one example, each SIMD unit 138 includes sixteen lanes, where each lane executes the same instruction at the same time as the other lanes in the SIMD unit 138 but can execute that instruction with different data. Lanes can be switched off with predication if not all lanes need to execute a given instruction… In response to execution of the function, the SIMD unit 138 stores a task queue entry 506 into a task list corresponding to each of the lanes executing the function and exits the shader, the task lists being in a task queue data structure 504… The purpose of the task list is to allow the command processor 137 to aggregate tasks that can be launched together as a wavefront at a later time, thereby taking advantage of the parallelism of the SIMD architecture. Each task list stores task list entries 506 for execution of a particular task—a portion of code. For example, function 1 task list stores task list entries 506 for execution of function 1, function 2 task list stores task list entries 506 for execution of function 2, and so on… In some examples, the task list entries store stack pointers for the lane. The stack pointer uniquely identifies the lane for which the task list entry is created. More specifically, each lane has a stack at a location in memory. The stack holds data for the current code segment for the lane, such as local variables, function parameters, and return values. The stack pointer thus uniquely identifies a particular lane.” The SIMD unit storing task queue entries into a task list corresponds to identifying a plurality of execution items. The function 1 and 2 task lists storing task list entries specifically for execution of function 1 and 2 respectively correlates to identifying a plurality of execution items that share data. 
The task list entries having stack pointers which uniquely identifies a particular lane of a task list and further store data such as the current code segment, which are the same between entries of the same task list due to executing the same function, correlates to the execution items having matching commonality metadata) and each execution item having a type that is one of workgroups or kernel instances (Paragraph 18, “The basic unit of execution in compute units 132 is a work-item. Each work-item represents a single instantiation of a shader program that is to be executed in parallel in a particular lane of a wavefront. Work-items can be executed simultaneously as a “wavefront” on a single SIMD unit 138. Multiple wavefronts may be included in a “work group,” which includes a collection of work-items designated to execute the same program. A work group can be executed by executing each of the wavefronts that make up the work group.” The work-items being executed simultaneously as a wavefront, where multiple wavefronts are included and executed by identifying their work group, correlates to each execution item having a type that is a workgroup); and
scheduling the plurality of execution items, based on their type, for execution together on an execution unit comprising one of a compute unit or a shader engine (Paragraph 18, “Work-items can be executed simultaneously as a “wavefront” on a single SIMD unit 138. Multiple wavefronts may be included in a “work group,” which includes a collection of work-items designated to execute the same program. A work group can be executed by executing each of the wavefronts that make up the work group… A command processor 137 is present in the compute units 132 and launches wavefronts based on work (e.g., execution tasks) that is waiting to be completed. A scheduler 136 is configured to perform operations related to scheduling various wavefronts on different compute units 132 and SIMD units 138.” The command processor launching wavefronts based on the task lists and their associated work group and the scheduler scheduling various wavefronts on different compute and SIMD units correlates to scheduling the plurality of execution items based on their type for execution together on an execution unit comprising one of a compute unit or a shader engine), where at least two of the plurality of execution items execute at points in time that prevent a cache line of the data from being evicted from a cache (Fig. 1, paragraphs 12, 14-15, 17, and 27-28, “The memory 104 is located on the same die as the processor 102, or may be located separately from the processor 102. The memory 104 includes a volatile or non-volatile memory, for example, random access memory (RAM), dynamic RAM, or a cache… The output driver 114 includes an accelerated processing device (APD) 116 which is coupled to a display device 118.
The APD is configured to accept compute commands and graphics rendering commands from processor 102, to process those compute and graphics rendering commands, and to provide pixel output to display device 118 for display… The processor 102 maintains, in system memory 104, one or more control logic modules for execution by the processor 102. The control logic modules include an operating system 120, a driver 122, and applications 126, and may optionally include other modules not shown. These control logic modules control various aspects of the operation of the processor 102 and the APD 116… The APD 116 includes compute units 132 (which may collectively be referred to herein as “programmable processing units”) that include one or more SIMD units 138 that are configured to perform operations in a parallel manner according to a SIMD paradigm… In response to execution of the function, the SIMD unit 138 stores a task queue entry 506 into a task list corresponding to each of the lanes executing the function and exits the shader, the task lists being in a task queue data structure 504. The shader is exited because the subsequent control flow will be handled by the command processor 137 examining task lists and scheduling wavefronts based on those task lists… Once a particular code segment ends, such as through a call to another function in a divergent manner (e.g., with a function pointer, where the function pointer for multiple different work-items point to different functions), or through a return instruction, the SIMD unit 138 again stores a task list entry in an appropriate task list for later execution and exits the shader. In an example, a wavefront executes with a function pointer call. All of the lanes store a task list entry into an appropriate task list and the wavefront ends execution. At a later time, the command processor 137 causes a wavefront for one of the task lists to execute. 
That wavefront executes the function and then executes a return instruction.” The processor and APD accessing the memory for operation, which can include a cache, correlates to cache lines of data in a cache. The SIMD unit executing a first function, completing a particular code segment, and storing a new task list entry for subsequent execution correlates to at least two of the plurality of execution items. The completion of a particular code segment may call another function using parameters or return a value in a return instruction, and therefore would include data referenced in the task list entry that needs to be maintained between the completion of the first function and the start of the related task list entry. Therefore, the completion of a first execution of a function and the execution of a subsequent task list entry correlates to at least two of the plurality of execution items executing at points in time that prevent a cache line of the data from being evicted from a cache).
With regards to Claims 10 and 19, the method of Claim 1 performs the same steps as the machine and manufacture of Claims 10 and 19 respectively, and Claims 10 and 19 are therefore rejected using the same rationale set forth above in the rejection of Claim 1.
With regards to Claim 2, Saleh teaches the method of Claim 1 above. Saleh further teaches:
wherein the plurality of execution items includes a first execution item having first commonality metadata and a second execution item having second commonality metadata (Paragraphs 17, 27 and 30, “In one example, each SIMD unit 138 includes sixteen lanes, where each lane executes the same instruction at the same time as the other lanes in the SIMD unit 138 but can execute that instruction with different data. Lanes can be switched off with predication if not all lanes need to execute a given instruction… The purpose of the task list is to allow the command processor 137 to aggregate tasks that can be launched together as a wavefront at a later time, thereby taking advantage of the parallelism of the SIMD architecture. Each task list stores task list entries 506 for execution of a particular task—a portion of code. For example, function 1 task list stores task list entries 506 for execution of function 1, function 2 task list stores task list entries 506 for execution of function 2, and so on… In some examples, the task list entries store stack pointers for the lane. The stack pointer uniquely identifies the lane for which the task list entry is created. More specifically, each lane has a stack at a location in memory. The stack holds data for the current code segment for the lane, such as local variables, function parameters, and return values. The stack pointer thus uniquely identifies a particular lane.” The function 1 task list storing task list entries specifically for execution of function 1 correlates to a first and second execution item. The task list entries having stack pointers which uniquely identifies a particular lane of a task list and further store data such as the current code segment, which are the same between entries of the same task list due to executing the same function, correlates to a first and second commonality metadata).
With regards to Claims 11 and 20, the method of Claim 2 performs the same steps as the machine and manufacture of Claims 11 and 20 respectively, and Claims 11 and 20 are therefore rejected using the same rationale set forth above in the rejection of Claim 2.
With regards to Claim 3, Saleh teaches the method of Claim 2 above. Saleh further teaches:
wherein identifying the plurality of execution items having matching commonality metadata includes identifying that a first commonality indicator of the first commonality metadata is the same as a second commonality indicator of the second commonality metadata (Paragraphs 17 and 30, “In one example, each SIMD unit 138 includes sixteen lanes, where each lane executes the same instruction at the same time as the other lanes in the SIMD unit 138 but can execute that instruction with different data. Lanes can be switched off with predication if not all lanes need to execute a given instruction… In some examples, the task list entries store stack pointers for the lane. The stack pointer uniquely identifies the lane for which the task list entry is created. More specifically, each lane has a stack at a location in memory. The stack holds data for the current code segment for the lane, such as local variables, function parameters, and return values. The stack pointer thus uniquely identifies a particular lane.” The task list entries each having stack pointers, which uniquely identify a particular lane of a task list and further store data such as the current code segment, local variables, function parameters, and return values, correlates to a first and second commonality metadata. Each lane identified by the identifier executing the same instruction at the same time as other lanes within the same SIMD unit correlates to a first and second commonality indicator of the first and second commonality metadata being the same).
With regards to Claim 12, the method of Claim 3 performs the same steps as the machine of Claim 12, and Claim 12 is therefore rejected using the same rationale set forth above in the rejection of Claim 3.
With regards to Claim 7, Saleh teaches the method of Claim 1 above. Saleh further teaches:
wherein scheduling the plurality of execution items for execution together comprises scheduling the plurality of execution items to execute in a first time period (Paragraphs 17-18, “In one example, each SIMD unit 138 includes sixteen lanes, where each lane executes the same instruction at the same time as the other lanes in the SIMD unit 138 but can execute that instruction with different data… The basic unit of execution in compute units 132 is a work-item. Each work-item represents a single instantiation of a shader program that is to be executed in parallel in a particular lane of a wavefront. Work-items can be executed simultaneously as a “wavefront” on a single SIMD unit 138… A scheduler 136 is configured to perform operations related to scheduling various wavefronts on different compute units 132 and SIMD units 138.” The scheduler scheduling various wavefronts, which include multiple work-items executed in parallel in a particular lane of a wavefront, correlates to scheduling the plurality of execution items to execute in a first time period).
With regards to Claim 16, the method of Claim 7 performs the same steps as the machine of Claim 16, and Claim 16 is therefore rejected using the same rationale set forth above in the rejection of Claim 7.
With regards to Claim 8, Saleh teaches the method of Claim 7 above. Saleh further teaches:
wherein scheduling the plurality of execution items to execute in the first time period includes scheduling the plurality of execution items to each execute at the same time (Paragraphs 17-18, “In one example, each SIMD unit 138 includes sixteen lanes, where each lane executes the same instruction at the same time as the other lanes in the SIMD unit 138 but can execute that instruction with different data… The basic unit of execution in compute units 132 is a work-item. Each work-item represents a single instantiation of a shader program that is to be executed in parallel in a particular lane of a wavefront. Work-items can be executed simultaneously as a “wavefront” on a single SIMD unit 138… A scheduler 136 is configured to perform operations related to scheduling various wavefronts on different compute units 132 and SIMD units 138.” The scheduler scheduling various wavefronts, which include multiple work-items executed in parallel in a particular lane of a wavefront, correlates to scheduling the plurality of execution items to execute at the same time).
With regards to Claim 17, the method of Claim 8 performs the same steps as the machine of Claim 17, and Claim 17 is therefore rejected using the same rationale set forth above in the rejection of Claim 8.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 4-6, 9, 13-15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Saleh in view of Chagam et al. (U.S. Patent Application Publication No. US 2019/0050255 A1), hereinafter “Chagam.”
With regards to Claim 4, Saleh teaches the method of Claim 2 above. Saleh does not explicitly teach:
wherein identifying the execution unit comprises identifying the execution unit based on the first commonality metadata and the second commonality metadata.
However, Chagam teaches:
wherein identifying the execution unit comprises identifying the execution unit based on the first commonality metadata and the second commonality metadata (Paragraph 80, “The lockless-mode controller is further configured to receive a plurality of object I/O messages from one or more clients, each to perform an object I/O task, divide each object I/O task into a plurality of sub-tasks, identify a specific sub-task type for each sub-task, and send each sub-task for each specific sub-task type to a preassigned storage resource through a specific processor core preassigned to the storage resource for processing the specific sub-task type in a lock-less mode.” The plurality of sub-tasks each having a specific sub-task type, which can include two sub-tasks with the same specific sub-task type, correlates to a first and second commonality metadata. The sub-tasks with the same specific sub-task type being assigned to a specific processor core for processing the specific sub-task type correlates to identifying the execution unit based on the first and second commonality metadata).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to modify Saleh such that identifying the execution unit comprises identifying the execution unit based on the first commonality metadata and the second commonality metadata, as taught by Chagam, because initializing particular cores with sub-task type assignments before messages are received allows each sub-task to be pre-assigned to a particular core when a message is received. Core assignments can also be dynamically updated based on system loads to ensure proper allocation of task assignments. Distributed data object stores can also avoid hot spots by spreading sub-tasks across a number of cores (Chagam: paragraphs 33 and 50).
With regards to Claim 13, the method of Claim 4 performs the same steps as the machine of Claim 13, and Claim 13 is therefore rejected using the same rationale set forth above in the rejection of Claim 4.
With regards to Claim 5, Saleh in view of Chagam teaches the method of Claim 4 above. Chagam further teaches:
wherein the first commonality metadata and the second commonality metadata include an execution item type identifier (Paragraph 80, “The lockless-mode controller is further configured to receive a plurality of object I/O messages from one or more clients, each to perform an object I/O task, divide each object I/O task into a plurality of sub-tasks, identify a specific sub-task type for each sub-task, and send each sub-task for each specific sub-task type to a preassigned storage resource through a specific processor core preassigned to the storage resource for processing the specific sub-task type in a lock-less mode.” The plurality of sub-tasks each having a specific sub-task type, which can include two sub-tasks with the same specific sub-task type, correlates to the first and second commonality metadata including an execution item type identifier).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to modify Saleh such that the first commonality metadata and the second commonality metadata include an execution item type identifier, as taught by Chagam, because initializing particular cores with sub-task type assignments before messages are received allows each sub-task to be pre-assigned to a particular core when a message is received. Core assignments can also be dynamically updated based on system loads to ensure proper allocation of task assignments. Distributed data object stores can also avoid hot spots by spreading sub-tasks across a number of cores (Chagam: paragraphs 33 and 50).
With regards to Claim 14, the method of Claim 5 performs the same steps as the machine of Claim 14, and Claim 14 is therefore rejected using the same rationale set forth above in the rejection of Claim 5.
With regards to Claim 6, Saleh in view of Chagam teaches the method of Claim 5 above. Chagam further teaches:
wherein identifying the execution unit comprises identifying the execution unit as an execution unit that correlates to the execution item type of the execution item type identifier (Paragraph 80, “The lockless-mode controller is further configured to receive a plurality of object I/O messages from one or more clients, each to perform an object I/O task, divide each object I/O task into a plurality of sub-tasks, identify a specific sub-task type for each sub-task, and send each sub-task for each specific sub-task type to a preassigned storage resource through a specific processor core preassigned to the storage resource for processing the specific sub-task type in a lock-less mode.” The plurality of sub-tasks each having a specific sub-task type, which can include two sub-tasks with the same specific sub-task type, correlates to the execution item type identifier. The sub-tasks with the same specific sub-task type being assigned to a specific processor core for processing the specific sub-task type correlates to identifying the execution unit which correlates to the execution item type of the execution item type identifier).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to modify Saleh such that identifying the execution unit comprises identifying the execution unit as an execution unit that correlates to the execution item type of the execution item type identifier, as taught by Chagam, because initializing particular cores with sub-task type assignments before messages are received allows each sub-task to be pre-assigned to a particular core when a message is received. Core assignments can also be dynamically updated based on system loads to ensure proper allocation of task assignments. Distributed data object stores can also avoid hot spots by spreading sub-tasks across a number of cores (Chagam: paragraphs 33 and 50).
With regards to Claim 15, the method of Claim 6 performs the same steps as the machine of Claim 15, and Claim 15 is therefore rejected using the same rationale set forth above in the rejection of Claim 6.
With regards to Claim 9, Saleh teaches the method of Claim 7 above. Saleh does not explicitly teach:
wherein scheduling the plurality of execution items to execute in the first time period includes scheduling the plurality of execution items to execute within a threshold amount of time.
However, Chagam teaches:
wherein scheduling the plurality of execution items to execute in the first time period includes scheduling the plurality of execution items to execute within a threshold amount of time (Fig. 6, paragraphs 73 and 75, “Initializing the queue system can include determining the specific characteristics of the device (e.g., information concerning the size of memory transactions the device can perform, the number of memory transactions that the device can perform in a particular time frame, the difference between read and write transactions, if any, and the difference between sequential memory transactions and random memory transactions, if any). The task scheduler can also determine any existing system preferences (e.g., the maximum number of different operation types). The task scheduler (604) determines a budget that includes a maximum input/output operation for a particular period of time, which can include a maximum number of operations that can be calculated for each queue, for each queue type, and for each device in the object node… When the message has been processed, the task scheduler can determine (615) whether the number of messages processed from the non-I/O queues has exceeded the predetermined cap or budget.” The memory transactions for a particular queue and device correlate to a plurality of execution items. The task scheduler determining a budget for the maximum number of operations to be executed for each queue in a particular period of time and scheduling tasks based on the budget correlates to scheduling the plurality of execution items to execute within a threshold amount of time).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to modify Saleh such that scheduling the plurality of execution items to execute in the first time period includes scheduling the plurality of execution items to execute within a threshold amount of time, as taught by Chagam, because different devices can have specific characteristics, including the number of memory transactions the device can perform in a particular time frame. The device characteristics can be used in combination with existing system preferences by a task scheduler to determine a customized budget for each queue, queue type, and device in an object node (Chagam: paragraph 73).
With regards to Claim 18, the machine of Claim 18 performs the same steps as the method of Claim 9, and Claim 18 is therefore rejected using the same rationale set forth above in the rejection of Claim 9.
Prior Art Made of Record
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Lyashevsky et al. (U.S. Patent No. 9,274,904 B2) teaches a method of executing a first and a second work group, each of which comprises signature variables. The first and second work groups are mapped to an identifier to ensure they execute the same code on the same data without changes to the hardware, and each has one or more related work items. The identifiers also include a local identifier of one of the adjacent work group items and a global identifier of one of the adjacent work group items.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SELINA HU whose telephone number is (571)272-5428. The examiner can normally be reached Monday-Friday 8:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chat Do can be reached at (571) 272-3721. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SELINA ELISA HU/ Examiner, Art Unit 2193
/Chat C Do/Supervisory Patent Examiner, Art Unit 2193