DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 7, 8, 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (U.S. 2024/0370966 A1) in view of Spreij et al. (U.S. 2025/0117328 A1).
Regarding Claim 1, Liu discloses a method (Liu, [0042] “a method”) of operating a graphics processor ([0042] “Rendering pipeline: that performs calculation by using a graphics processing unit (GPU)”; Liu teaches a method of operating a GPU), the graphics processor comprising:
one or more processing circuits configured to execute a plurality of pipeline stages for a processing pipeline, wherein the operation of the one or more processing circuits to execute the processing pipeline is controlled based on a set of pipeline configuration information (Liu, [0090] “structure information of the pipeline configuration, including the correspondences between the following three: the to-be-rendered pipeline stage, the to-be-rendered resource position, and the to-be-rendered resource identifier of the to-be-rendered resource. The to-be-rendered pipeline stage is a pipeline stage to be rendered, such as a geometry stage and a rasterization stage” and [0135] “the rendering is implemented by a graphics processing unit (GPU) of the rendering device”; Liu teaches a GPU executing (rendering) a plurality of pipeline stages (geometry stage, rasterization stage), where the operation is controlled based on a set of pipeline configuration information (the to-be-rendered pipeline stage, the to-be-rendered resource position, and the to-be-rendered resource identifier of the to-be-rendered resource)),
wherein a copy of the full set of pipeline configuration information including respective portions of pipeline configuration information relating to individual pipeline stages defining the processing pipeline is stored in association with the processing pipeline as a whole (Liu, [0090] “structure information of the pipeline configuration, including the correspondences between the following three: the to-be-rendered pipeline stage, the to-be-rendered resource position, and the to-be-rendered resource identifier of the to-be-rendered resource. The to-be-rendered pipeline stage is a pipeline stage to be rendered, such as a geometry stage and a rasterization stage. To be specific, the to-be-rendered resource is at least one rendering resource in current rendering, such as a texture, a map, and a model. For example, the to-be-rendered resource of the geometry stage is 16 textures, and the to-be-rendered resource position is a rendering position to which each texture corresponds”; Liu teaches a copy of the full set of pipeline configuration information (the to-be-rendered pipeline stage, the to-be-rendered resource position, and the to-be-rendered resource identifier of the to-be-rendered resource) including respective portions of pipeline configuration information relating to individual pipeline stages, e.g., the geometry stage and the rasterization stage, where the to-be-rendered resource of the geometry stage is 16 textures, stored in association with the rendering resources); and
wherein an individual pipeline stage of the plurality of pipeline stages is operable and configured to separately maintain a local copy of a respective portion of the pipeline configuration information relating to that pipeline stage (Liu, [0090] “The to-be-rendered pipeline stage is a pipeline stage to be rendered, such as a geometry stage and a rasterization stage. For example, the to-be-rendered resource of the geometry stage is 16 textures, and the to-be-rendered resource position is a rendering position to which each texture corresponds” and Fig. 8, [0146] “a rasterize stage (RS) configuration interface 8-5, and a tessellation stage (TS) configuration interface 8-8 (referred to as a plurality of sub virtual pipeline interfaces). The virtual pipeline interface 7-12 generates, in response to the call instruction of the pipeline configuration interface”; Liu teaches an individual pipeline stage (geometry stage, rasterization stage, tessellation stage) of the pipeline stages operable to separately maintain a local copy of a portion of the pipeline configuration information in response to the call instruction of each pipeline configuration interface (Fig. 8)),
the method comprising:
to update some or all of the set of pipeline configuration information (Liu, [0120] “Operation 412: The rendering device updates the to-be-rendered resource into the rendering resource library”; Liu teaches updating some of the set of pipeline configuration information, e.g., the rendering device updates the to-be-rendered resource into the rendering resource library):
issuing a corresponding state update command to update the set of pipeline configuration information into the processing pipeline (Liu, [0121] “because the to-be-rendered resource is not included in the rendering resource library of the rendering device, the rendering device obtains the to-be-rendered resource from the control device. Therefore, after obtaining the to-be-rendered resource from the control device, the rendering device updates the to-be-rendered resource into the rendering resource library for obtaining a rendering resource next time based on the updated rendering resource library, to achieve reuse of the to-be-rendered resource”; Liu teaches issuing a corresponding state update command (because the to-be-rendered resource is not included in the rendering resource library) to update the set of pipeline configuration information (the rendering device updates the to-be-rendered resource into the rendering resource library for obtaining a rendering resource next time based on the updated rendering resource library)); and
once the state update command has passed through the processing pipeline, updating the copy of the full set of pipeline configuration information that is stored in association with the processing pipeline as a whole (Liu, [0165] “an instruction restoration module 2552, configured to generate a second pipeline configuration instruction based on the to-be-rendered pipeline stage, the to-be-rendered resource position” and [0168] “the instruction restoration module 2552 is further configured to: obtain, in a rendering resource library, a rendering resource based on the to-be-rendered resource identifier, the rendering resource library including a rendering resource in the rendering device that is obtained from the control device”; Liu teaches that once the state update has passed through (updates the to-be-rendered resource into the rendering resource library), the copy of the set of pipeline configuration information is restored (a rendering resource is obtained from the rendering resource library) for the processing pipeline via the instruction restoration module).
Liu discloses that the to-be-rendered pipeline stage is a pipeline stage to be rendered, such as a geometry stage ([0090]), the rendering resource including a shader ([0086]), and pipeline interfaces including vertex shader, hull shader, domain shader, geometry shader, and pixel shader configuration interfaces in response to the render instruction ([0146], Fig. 8).
However, Liu does not explicitly teach the method further comprising:
when the state update command is to update a respective portion of pipeline configuration information relating to a particular pipeline stage, the particular pipeline stage when processing the state update command updating its respective local copy of that respective portion of the pipeline configuration information accordingly.
Spreij teaches when the state update command is to update a respective portion of pipeline configuration information relating to a particular pipeline stage, the particular pipeline stage when processing the state update command updating its respective local copy of that respective portion of the pipeline configuration information accordingly (Spreij, [0051] “FIGS. 1 and 2 is an image rendering system (e.g. a ray tracing system), the hash function may be applied to shader type information associated with the incoming work item 102” and [0075] “instead of the stage registers 710, 712 storing addresses requested by work items that have passed through the pipeline, these registers may calculate these addresses by applying the hash function to the shader information which is stored in each of the stage registers” and [0090] “a computing system comprising a maximum of two registers 708, 908. It is advantageous for the number of registers in the computing system to be a low number, such as two, as these registers are associated with a large hardware area, when compared to the hardware area consumed by the memory. Thus, a low number of registers reduces the hardware size of the computing system. It is appreciated that alternative examples of the computing system may comprise less than two registers (e.g., FIGS. 7 and 8) or more than two registers. The number of local copy registers may be less than the number of stages in the pipeline. In particular the number of local copy registers may be one less than the number of stages in the pipeline, as this is sufficient to allow RAW hazards to be avoided in the pipeline without any need for stalling” and [0069] “That is, once the first work item has written its updated data to the register 708 at stage P2, that updated data is retained in the register 708”; Spreij teaches a rendering system that applies the hash function to the shader type information, which is stored in a maximum of two registers (708, 908, Fig. 8)).
The number of local copy registers may be less than the number of stages in the pipeline; in particular, the number of local copy registers may be one less than the number of stages in the pipeline, as this is sufficient to allow RAW hazards to be avoided in the pipeline without any need for stalling, which reduces the hardware size of the computing system.
Liu and Spreij are combinable because they are from the same field of endeavor, systems and methods for image processing, and try to solve similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Liu such that the particular pipeline stage, when processing the state update command, updates its respective local copy of the respective portion of the pipeline configuration information (as taught by Spreij), because Spreij provides a rendering system that applies the hash function to the shader type information stored in a maximum of two registers (708, 908, Fig. 8) (Spreij, [0050], [0075], [0090]). Doing so may allow the number of local copy registers to be less than the number of stages in the pipeline, in particular one less than the number of stages, “as this is sufficient to allow RAW hazards to be avoided in the pipeline without any need for stalling”, and a low number of registers reduces the hardware size of the computing system (Spreij, [0090]).
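For illustration only, the mechanism at issue in claim 1 can be sketched as follows. This is a hypothetical model, not code from Liu or Spreij, and all names in it are the examiner's assumptions: each stage keeps a local copy of only its own portion of the configuration, and a copy of the full set is updated for the pipeline as a whole once a state update command has passed through every stage.

```python
# Hypothetical sketch of the claimed mechanism; not from the references.

class Stage:
    def __init__(self, name):
        self.name = name
        self.local_config = {}  # local copy of this stage's portion only

    def process(self, command):
        # A stage updates its local copy only when the state update
        # command targets this stage's portion of the configuration.
        if command["type"] == "state_update" and command["stage"] == self.name:
            self.local_config.update(command["config"])

class Pipeline:
    def __init__(self, stage_names):
        self.stages = [Stage(n) for n in stage_names]
        self.full_config = {}  # full set, stored for the pipeline as a whole

    def issue(self, command):
        # The command passes through every stage in order...
        for stage in self.stages:
            stage.process(command)
        # ...and once it has passed through, the full copy is updated.
        if command["type"] == "state_update":
            self.full_config.setdefault(command["stage"], {}).update(command["config"])

pipe = Pipeline(["geometry", "rasterization"])
pipe.issue({"type": "state_update", "stage": "geometry", "config": {"textures": 16}})
```

In this sketch, only the geometry stage's local copy changes, while the whole-pipeline copy records the same update after the command completes its pass.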
Regarding Claim 7, a combination of Liu and Spreij discloses the method of claim 1, wherein the plurality of pipeline stages are executed by a corresponding set of plural generic pipeline stages, wherein each generic pipeline stage of the set of generic pipeline stages can be configured as a respective, different shader stage to be executed as part of a processing pipeline (Liu, [0090] “The to-be-rendered pipeline stage is a pipeline stage to be rendered, such as a geometry stage and a rasterization stage” and [0086] “the rendering resource including shader” and Fig. 8, [0146] “pipeline interfaces 7-12 includes a vertex shader (VS) configuration interface 8-2, a hull shader (HS)/domain shader (DS)/geometry shader (GS) configuration interface 8-3, a pixel shader (PS) configuration interface 8-6 and a tessellation stage (TS) configuration interface 8-8 … in response to the render instruction”; Liu teaches a corresponding set of generic pipeline stages (geometry, rasterization, tessellation stages) that can be configured as a different stage to be executed as part of a processing pipeline, such as a vertex shader (VS), a hull shader (HS), a geometry shader (GS), a pixel shader (PS), etc.).
Regarding Claim 8, Liu discloses a graphics processor (Liu, [0045] “a graphics processing unit (GPU)”) comprising:
one or more processing circuits (Liu, [0062] “The first processor 310 may be an integrated circuit chip”) configured to execute a plurality of pipeline stages for a processing pipeline, wherein the operation of the one or more processing circuits to execute the processing pipeline is controlled based on a set of pipeline configuration information; and
a post pipeline stage that is operable and configured to maintain a copy of a full set of pipeline configuration information including respective portions of pipeline configuration information relating to individual pipeline stages defining the processing pipeline, such that a copy of the full set of pipeline configuration information is stored for the processing pipeline as a whole,
wherein an individual pipeline stage of the plurality of pipeline stages is operable and configured to separately maintain a local copy of a respective portion of the pipeline configuration information relating to that pipeline stage,
and wherein the graphics processor is operable and configured to:
in response to a state update command to update some or all of the set of pipeline configuration information being issued into the processing pipeline:
once the state update command has passed through the processing pipeline, update the copy of the full set of pipeline configuration information that is stored in the post pipeline stage for the processing pipeline as a whole; and
when the state update command is to update a respective portion of pipeline configuration information relating to a particular pipeline stage, process the state update command by that particular pipeline stage to update the local copy of that respective portion of the pipeline configuration information that is maintained by that particular pipeline stage.
Claim 8 is substantially similar to claim 1 and is rejected based on similar analyses.
Regarding Claim 14, a combination of Liu and Spreij discloses the graphics processor of claim 8, wherein the plurality of pipeline stages are executed by a corresponding set of plural generic pipeline stages, wherein each generic pipeline stage of the set of generic pipeline stages can be configured as a respective, different shader stage to be executed as part of a processing pipeline.
Claim 14 is substantially similar to claim 7 and is rejected based on similar analyses.
Regarding Claim 15, a combination of Liu and Spreij discloses a non-transitory computer readable storage medium (Liu, [0056] “a computer-readable storage medium”) storing computer software code which, when executing on one or more processors (Liu, [0024] “the computer-executable instructions implementing, when being executed by a second processor”), performs a method of operating a graphics processor, the graphics processor comprising:
one or more processing circuits configured to execute a plurality of pipeline stages for a processing pipeline, wherein the operation of the one or more processing circuits to execute the processing pipeline is controlled based on a set of pipeline configuration information,
wherein a copy of the full set of pipeline configuration information including respective portions of pipeline configuration information relating to individual pipeline stages defining the processing pipeline is stored in association with the processing pipeline as a whole, and
wherein an individual pipeline stage of the plurality of pipeline stages is operable and configured to separately maintain a local copy of a respective portion of the pipeline configuration information relating to that pipeline stage,
the method comprising:
to update some or all of the set of pipeline configuration information:
issuing a corresponding state update command to update the set of pipeline configuration information into the processing pipeline; and
once the state update command has passed through the processing pipeline, updating the copy of the full set of pipeline configuration information that is stored in association with the processing pipeline as a whole;
the method further comprising:
when the state update command is to update a respective portion of pipeline configuration information relating to a particular pipeline stage, the particular pipeline stage when processing the state update command updating its respective local copy of that respective portion of the pipeline configuration information accordingly.
Claim 15 is substantially similar to claim 1 and is rejected based on similar analyses.
Claims 2, 9 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (U.S. 2024/0370966 A1) in view of Spreij et al. (U.S. 2025/0117328 A1), and further in view of Livesley et al. (U.S. 2025/0182235 A1).
Regarding Claim 2, the combination of Liu and Spreij discloses the method of claim 1, but does not explicitly teach wherein when one or more state update commands are received to update pipeline configuration information for which a corresponding descriptor is stored in memory for use by a subsequent instance of processing pipeline execution, this is tracked, and prior to performing the subsequent instance of processing pipeline execution, information indicative of a new descriptor for the updated pipeline configuration information is created to allow the new descriptor to be written to memory.
However, Livesley teaches when one or more state update commands are received to update pipeline configuration information for which a corresponding descriptor is stored in memory for use by a subsequent instance of processing pipeline execution, this is tracked, and prior to performing the subsequent instance of processing pipeline execution, information indicative of a new descriptor for the updated pipeline configuration information is created to allow the new descriptor to be written to memory (Livesley, [0004] “the descriptor may have been constructed by a hardware pipeline running a previous task (e.g. which is how a fragment pipeline may work, running on data structures previously written by a geometry pipeline)” and [0007] “In the two-stage approach, the geometry pipeline writes a piece of data (e.g. a control stream) to memory for each tile to be processed by the fragment stage” and [0014] “the processor may comprise a fragment register bank to which software can write a first fragment task descriptor specifying the fragment processing of the first task, and a second fragment task descriptor specifying the fragment processing of the second task, the fragment processing logic being arranged to perform the fragment processing of the first and second tasks based on the first and second fragment task descriptors, respectively”; Livesley teaches a corresponding descriptor (data written by the geometry pipeline for the fragment stage) stored in memory for use by a subsequent instance of processing pipeline execution (a first fragment task descriptor specifying the first task), with information indicative of a new descriptor for updated pipeline configuration information (a second fragment task descriptor specifying the second task) written to memory).
Liu, Spreij and Livesley are combinable because they are from the same field of endeavor, systems and methods for image processing, and try to solve similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Liu so that a descriptor used by a subsequent instance of processing pipeline execution is stored in memory (as taught by Livesley), because Livesley provides a corresponding descriptor stored in memory for use by a subsequent instance of processing pipeline execution, with information indicative of a new descriptor for updated pipeline configuration information written to memory (Livesley, [0004], [0007], [0014]). Doing so may allow the register bank to be implemented as a buffer-type structure, such as a circular buffer, for queuing task descriptors (Livesley, Fig. 3, [0107]).
Regarding Claim 9, a combination of Liu, Spreij and Livesley discloses the graphics processor of claim 8, wherein when one or more state update commands are received to update pipeline configuration information for which a corresponding descriptor is to be stored in memory for use by a subsequent instance of processing pipeline execution, prior to performing the subsequent instance of processing pipeline execution, information indicative of a new descriptor for the updated pipeline configuration information is created to allow the new descriptor to be written to memory.
Claim 9 is substantially similar to claim 2 and is rejected based on similar analyses.
Regarding Claim 16, a combination of Liu, Spreij and Livesley discloses the non-transitory computer readable storage medium of claim 15, wherein when one or more state update commands are received to update pipeline configuration information for which a corresponding descriptor is stored in memory for use by a subsequent instance of processing pipeline execution, this is tracked, and prior to performing the subsequent instance of processing pipeline execution, information indicative of a new descriptor for the updated pipeline configuration information is created to allow the new descriptor to be written to memory.
Claim 16 is substantially similar to claim 2 and is rejected based on similar analyses.
Claims 3, 4, 10, 11, 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (U.S. 2024/0370966 A1) in view of Spreij et al. (U.S. 2025/0117328 A1), further in view of Livesley et al. (U.S. 2025/0182235 A1), and further in view of Hervin et al. (U.S. 5,805,879).
Regarding Claim 3, the combination of Liu, Spreij and Livesley discloses the method of claim 2, but does not explicitly teach wherein prior to writing out a new descriptor for any updated pipeline configuration information to memory, it is determined whether the new descriptor would correspond to a descriptor that is already stored in memory, and when it is determined that the new descriptor would correspond to a descriptor that is already stored in memory, the information indicative of the new descriptor is discarded and the descriptor that is already stored in memory is used for the subsequent instance of processing pipeline execution.
However, Hervin teaches prior to writing out a new descriptor for any updated pipeline configuration information to memory, it is determined whether the new descriptor would correspond to a descriptor that is already stored in memory, and when it is determined that the new descriptor would correspond to a descriptor that is already stored in memory, the information indicative of the new descriptor is discarded and the descriptor that is already stored in memory is used for the subsequent instance of processing pipeline execution (Hervin, Col. 4, lines 33-40 “After the segment descriptor is retrieved from memory, the status checking circuitry examines status bits within the segment descriptor, the status bits indicating various statuses with respect to the segment. The segment access indicator, one of the status bits, indicates whether the segment described by the segment descriptor has, or has not, been accessed” and Col. 12, lines 19-21 “If the segment access indicator is in a zero state, indicating that the segment has not previously been accessed” and Col. 12, lines 26-33 “Exception handling circuitry 630, invoked by processor 10 in response to generation of the exception, flushes the pipeline of instructions following a segment load instruction that caused the loading of segment descriptor 430. Next, exception handling circuitry 630 calls a microcoded routine stored in microROM that sets the segment access indicator to a one state, loads the descriptor into the descriptor cache (27 of FIG. 1a)”; Hervin teaches status checking circuitry that examines the segment descriptor retrieved from memory to indicate whether the segment described by the segment descriptor has, or has not, been accessed. If the segment access indicator is in a one state, the segment has previously been accessed, which means a descriptor is already stored in memory).
Then, exception handling circuitry 630 flushes the pipeline of instructions following the segment load instruction that caused the loading of segment descriptor 430 (referred to as discarding the new descriptor information when a descriptor is already stored in memory).
Liu, Spreij, Livesley and Hervin are combinable because they are from the same field of endeavor, systems and methods for image processing, and try to solve similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Liu to determine whether the new descriptor would correspond to a descriptor that is already stored in memory (as taught by Hervin), because Hervin provides status checking circuitry that examines the segment descriptor retrieved from memory to indicate whether the segment has, or has not, been accessed; if the segment access indicator is in a one state, the segment has previously been accessed, meaning a descriptor is already stored in memory, and exception handling circuitry 630 flushes the pipeline of instructions following the segment load instruction that caused the loading of segment descriptor 430 (Hervin, Col. 4, lines 33-40; Col. 12, lines 19-21; Col. 12, lines 26-33). Doing so may allow each processing stage to be optimized to perform a particular processing function, thereby causing the processor as a whole to become faster (Hervin, Col. 1, lines 59-62).
Regarding Claim 4, the combination of Liu, Spreij and Livesley discloses the method of claim 3, but does not explicitly teach wherein the determining whether the new descriptor would correspond to a descriptor that is already stored in memory comprises checking a respective value calculated based on some or all of the information indicative of the new descriptor with corresponding values calculated for one or more descriptors that are already stored in memory.
However, Hervin teaches checking a respective value calculated based on some or all of the information indicative of the new descriptor with corresponding values calculated for one or more descriptors that are already stored in memory (Hervin, Col. 12, lines 26-33 “Exception handling circuitry 630, invoked by processor 10 in response to generation of the exception, flushes the pipeline of instructions following a segment load instruction that caused the loading of segment descriptor 430. Next, exception handling circuitry 630 calls a microcoded routine stored in microROM that sets the segment access indicator to a one state, loads the descriptor into the descriptor cache (27 of FIG. 1a)”; Hervin teaches status checking within the segment descriptor retrieved from memory: if the segment access indicator is set to a one-state value, the segment has previously been accessed, which means a descriptor is already stored in memory).
Liu, Spreij, Livesley and Hervin are combinable; see the rationale in claim 3.
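For illustration only, the check addressed in claims 3 and 4 can be sketched as follows. This is a hypothetical model, not Hervin's circuit or the applicant's implementation, and all names are the examiner's assumptions: a value (here, a hash) is calculated over the information indicative of a new descriptor and compared against values calculated for descriptors already stored in memory; on a match, the new descriptor information is discarded and the stored descriptor is reused.

```python
# Hypothetical sketch of the claimed descriptor check; not from the references.
import hashlib

def descriptor_key(descriptor: dict) -> str:
    # A respective value calculated from some or all of the descriptor info.
    data = repr(sorted(descriptor.items())).encode()
    return hashlib.sha256(data).hexdigest()

stored = {}  # key -> descriptor already written to memory

def write_descriptor(descriptor: dict) -> dict:
    key = descriptor_key(descriptor)
    if key in stored:
        return stored[key]          # match: discard new info, reuse stored copy
    stored[key] = dict(descriptor)  # no match: write the new descriptor out
    return stored[key]

first = write_descriptor({"stage": "fragment", "textures": 16})
second = write_descriptor({"stage": "fragment", "textures": 16})
```

In this sketch the second, identical descriptor is never written; the lookup returns the copy that is already stored, so only one descriptor exists in memory.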
Regarding Claim 10, a combination of Liu, Spreij, Livesley and Hervin discloses the graphics processor of claim 9, wherein prior to writing out any new descriptors for any updated pipeline configuration information to memory, it is determined whether the new descriptor would correspond to a descriptor that is already stored in memory, and when it is determined that the new descriptor would correspond to a descriptor that is already stored in memory, the information indicative of the new descriptor is discarded and the descriptor that is already stored in memory is used for the subsequent instance of processing pipeline execution.
Claim 10 is substantially similar to claim 3 and is rejected based on similar analyses.
Regarding Claim 11, a combination of Liu, Spreij, Livesley and Hervin discloses the graphics processor of claim 10, wherein the determining whether the new descriptor corresponds to a descriptor that is already stored in memory comprises checking a respective value calculated based on some or all of the information indicative of the new descriptor with corresponding values calculated for one or more descriptors that are already stored in memory.
Claim 11 is substantially similar to claim 4 and is rejected based on similar analyses.
Regarding Claim 17, a combination of Liu, Spreij, Livesley and Hervin discloses the non-transitory computer readable storage medium of claim 16, wherein prior to writing out a new descriptor for any updated pipeline configuration information to memory, it is determined whether the new descriptor would correspond to a descriptor that is already stored in memory, and when it is determined that the new descriptor would correspond to a descriptor that is already stored in memory, the information indicative of the new descriptor is discarded and the descriptor that is already stored in memory is used for the subsequent instance of processing pipeline execution.
Claim 17 is substantially similar to claim 3 and is rejected based on similar analyses.
Regarding Claim 18, a combination of Liu, Spreij, Livesley and Hervin discloses the non-transitory computer readable storage medium of claim 17, wherein the determining whether the new descriptor would correspond to a descriptor that is already stored in memory comprises checking a respective value calculated based on some or all of the information indicative of the new descriptor with corresponding values calculated for one or more descriptors that are already stored in memory.
Claim 18 is substantially similar to claim 4 and is rejected based on similar analyses.
Allowable Subject Matter
Dependent claims 5, 6, 12, 13, 19 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding dependent claims 5, 6, 12, 13, 19 and 20, the closest prior art references the examiner found are Liu et al. (U.S. 2024/0370966 A1) in view of Spreij et al. (U.S. 2025/0117328 A1), which have been made of record as teaching: wherein a copy of the full set of pipeline configuration information including respective portions of pipeline configuration information relating to individual pipeline stages defining the processing pipeline is stored in association with the processing pipeline as a whole (Liu, [0090]); wherein an individual pipeline stage of the plurality of pipeline stages is operable and configured to separately maintain a local copy of a respective portion of the pipeline configuration information relating to that pipeline stage (Liu, [0090], Fig. 8, [0146]); to update some or all of the set of pipeline configuration information (Liu, [0120]); issuing a corresponding state update command to update the set of pipeline configuration information into the processing pipeline (Liu, [0121]); once the state update command has passed through the processing pipeline, updating the copy of the full set of pipeline configuration information that is stored in association with the processing pipeline as a whole (Liu, [0165], [0168]); and when the state update command is to update a respective portion of pipeline configuration information relating to a particular pipeline stage, the particular pipeline stage when processing the state update command updating its respective local copy of that respective portion of the pipeline configuration information accordingly (Spreij, [0051], [0075], [0090]), as recited in claims 1, 8 and 15.
However, the art of record does not teach or suggest the claims taken as a whole, and in particular the limitations pertaining to:
“wherein in response to a command to suspend a current instance of processing pipeline execution, the method comprising suspending the current instance of processing pipeline execution by: stopping issuing any new commands to the processing pipeline; saving a copy of the full set of pipeline configuration information that is stored in association with the processing pipeline as a whole; and saving information indicative of any commands that are currently being processed within the processing pipeline,” as recited in claims 5, 12, and 19.
Dependent claims 6, 13, and 20 would be allowable because they depend from claims 5, 12, and 19, respectively.
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance”.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Polzin et al. (U.S. 2023/0205613 A1) and Treichler et al. (U.S. 2013/0120413 A1).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KHOA VU whose telephone number is (571) 272-5994. The examiner can normally be reached 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KEE M TUNG/Supervisory Patent Examiner, Art Unit 2611
/KHOA VU/Examiner, Art Unit 2611