DETAILED ACTION
Claims 1-3, 5-11, 14-17, 19-20, 23-24, 26-30, and 32-34 are amended. Claim 37 is new. Claims 1-37 are pending in the application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Examiner’s Notes
The Examiner cites particular sections in the references as applied to the claims below for the convenience of the applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant(s) fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.
Response to Amendment
Amendments to claims 1, 10, 19, and 28 are fully considered and are sufficient to overcome the rejections under 35 U.S.C. §112(a) directed to claims 1-36 in the previous Office Action.
Amendments to claims 1, 10, 19, and 28 are fully considered and are sufficient to overcome the rejections under 35 U.S.C. §101 directed to claims 1-36 in the previous Office Action.
Claim Objections
Claims 1-9 are objected to because of the following informalities:
Claim 1: “to application” (lines 3-4) should read --to an application--.
Claims 2-9 inherit the features of claim 1 and are objected to accordingly.
Appropriate correction is required. Applicant is advised to review the entire claims for further needed corrections.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 6-8, 10, 15-17, 19, 24-26, 28, 33, and 37 are rejected under 35 U.S.C. 103 as being unpatentable over Catalano et al. (US 2017/0236322 A1; hereinafter Catalano) in view of Nachimuthu et al. (US 2018/0026908 A1; hereinafter Nachimuthu).
With respect to claim 1, Catalano teaches: A non-transitory machine-readable medium (see e.g. Catalano, paragraph 60: “a non-transitory computer-readable medium”) having stored thereon executable instructions, which if performed by one or more processors, causes the one or more processors to (see e.g. Catalano, paragraph 60: “computer program products stored in a non-transitory computer-readable medium that can cause, when executed, processors such as the processors 112 and 200 of FIGS. 1 and 2, to perform one, multiple or all of the steps of the above-described methods or functions”), in response to application programming interface (API) call (see e.g. Catalano, Fig. 1: “API 120”; paragraph 27: “API 120 is configured to facilitate the communication between the rendering core 110, the accelerator 130”), at least:
at least two processors (see e.g. Catalano, Fig. 1: “Rendering Core 110”, “Accelerator 130”, “Custom Shader 140”) from a plurality of processors (see e.g. Catalano, paragraph 21: “production renderer 100 and a custom shader 140. In the illustrated embodiment, the production renderer 100 includes a rendering core 110, an application programming interface (API) 120 and an accelerator 130”; and Fig. 1) to perform two or more workloads (see e.g. Catalano, paragraph 27: “API 120 is configured to facilitate the communication between the rendering core 110, the accelerator 130 and the custom shader 140. For example, the API 120 may transfer the scene geometry and the point cloud data from the memory 114 of the rendering core 110 to the accelerator before the rendering process and deliver the rendering function call from the custom shader 140 to the accelerator 130 and the computed results from the accelerator 130 to the rendering core 110”);
cause the two or more workloads to be scheduled to be performed by the at least two processors (see e.g. Catalano, paragraph 24: “Once the rendering process starts, the processor 112 can record and forward a particular rendering core function call generated by the custom shader 140 to the accelerator 130 for computing the result of the function call”; paragraph 27: “the API 120 may transfer the scene geometry and the point cloud data from the memory 114 of the rendering core 110 to the accelerator before the rendering process and deliver the rendering function call from the custom shader 140 to the accelerator 130 and the computed results from the accelerator 130 to the rendering core 110”);
cause information… to be shared between the two or more different processors (see e.g. Catalano, Fig. 1: “Rendering Core 110”, “Accelerator 130”; paragraph 26: “memory 114 is shared between the rendering core 110 and the accelerator 130”; and paragraph 27: “API 120 is configured to facilitate the communication between the rendering core 110, the accelerator 130… API 120 may transfer the scene geometry and the point cloud data from the memory 114 of the rendering core 110 to the accelerator”),
wherein information corresponding to intermediate results (see e.g. Catalano, paragraph 24: “accelerator-computed result”) computed by at least one of the at least two processors (see e.g. Catalano, Fig. 1: “Accelerator 130”; paragraph 28: “accelerator 130 is configured to compute the result of the forwarded rendering core function calls”; and Fig. 3, step 340) in performing the two or more workloads is shared between the at least two processors (see e.g. Catalano, paragraph 24: “When the custom shader 140 calls the particular rendering core function again, e.g., when the rendering process restarts, the processor 112 may return the accelerator-computed result to the custom shader 140 for the computation of the correct final render result”; and Fig. 3, step 360).
Even though Catalano discloses utilizing two or more processors (e.g. a rendering core 110, an accelerator 130, a custom shader 140) to perform two or more workloads (e.g. starting a rendering process by the rendering core, computing rendering core functions by the accelerator, computing a final rendering result by the custom shader, etc.), Catalano does not explicitly disclose selecting these processors based on the characteristics of the workloads.
However, Nachimuthu teaches:
select (see e.g. Nachimuthu, paragraph 91: “determining two or more processing units of a plurality of processing units to process a workload”; and Fig. 19, step 1905)… based at least in part on respective characteristics of the two or more workloads (see e.g. Nachimuthu, paragraph 25: “number of processing units to process workloads may be provided to a switching controller or determined by the switching controller based on the requirements”; paragraph 30: “predicts resource usage for different types of workloads based on past resource usage, and dynamically reallocates the resources based on this information”; paragraph 86: “determining a number of processing units to process the workload. The number of processing units may be based on a processing requirement for the workload”; paragraph 91: “determining two or more processing units of a plurality of processing units to process a workload. As previously discussed, the determination may be based on one or more processing requirement(s) and SLA for the workload”; and Fig. 18, steps 1802, 1804); and
Catalano and Nachimuthu are analogous art because they are in the same field of endeavor: distributing computational tasks amongst multiple processors. Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Catalano with the teachings of Nachimuthu. The motivation/suggestion would be to improve the resource utilization efficiency (see e.g. Nachimuthu, paragraph 4).
With respect to claim 6, Catalano as modified teaches: The non-transitory machine-readable medium of claim 1, wherein the API implements a logical device that provides interfaces to both a first processor and a second processor of the at least two processors (see e.g. Catalano, paragraph 27: “API 120 is configured to facilitate the communication between the rendering core 110, the accelerator 130… API 120 may transfer the scene geometry and the point cloud data from the memory 114 of the rendering core 110 to the accelerator”).
With respect to claim 7, Catalano as modified teaches: The non-transitory machine-readable medium of claim 1, wherein a first processor or a second processor of the at least two processors is a field programmable gate array, an application specific integrated circuit, a digital signal processor, a graphics processing unit (see e.g. Catalano, paragraph 12: “an accelerator, e.g., a Graphics Processing Unit (GPU)”), or a central processing unit (see e.g. Catalano, paragraph 23: “processor 112 is a CPU”).
With respect to claim 8, Catalano as modified teaches: The non-transitory machine-readable medium of claim 1, wherein the information corresponding to the intermediate results further includes instructions to be performed by a second processor (see e.g. Catalano, Fig. 1: “Custom Shader 140”) of the at least two processors (see e.g. Catalano, paragraph 47: “In step 340, accelerator computes the results of the forwarded rendering core function for each Query Point using the approximate shading”; and paragraph 54: “rendering core returns the computed results from the step 340 to the custom shader. The custom shader uses the computed results to compute the final render result”).
With respect to claims 10 and 15: Claims 10 and 15 are directed to a computer system comprising one or more processors and machine-readable media to store executable instructions corresponding to the machine-readable media having stored thereon an API as disclosed in claims 1 and 6, respectively; please see the rejections directed to claims 1 and 6 above, which also cover the limitations recited in claims 10 and 15. Note that Catalano also discloses a computer system with various processors and machine-readable media (see e.g. Catalano, paragraphs 61-64) corresponding to the machine-readable medium disclosed in claims 1 and 6.
With respect to claim 16, Catalano as modified teaches: The computer system of claim 10, wherein a first processor or a second processor of the two processors perform portions of a workflow in parallel (see e.g. Catalano, paragraph 53: “parallel computing environments… rendering process uses a quasi-Monte Carlo rendering method as used in iray® and mental ray® from NVIDIA Corporation. This deterministic method makes the rendering process exactly repeatable and allows for efficient parallelization”).
With respect to claim 17: Claim 17 is directed to a computer system comprising one or more processors and machine-readable media to store executable instructions corresponding to the machine-readable media having stored thereon an API as disclosed in claim 8; please see the rejection directed to claim 8 above which also covers the limitations recited in claim 17.
With respect to claims 19 and 24: Claims 19 and 24 are directed to a computer-implemented method corresponding to the active functions performed by executing the API stored on the machine-readable medium recited in claims 1 and 6, respectively; please see the rejections directed to claims 1 and 6 above which also cover the limitations recited in claims 19 and 24.
With respect to claim 25, Catalano as modified teaches: The computer-implemented method of claim 24, wherein a first workload (see e.g. Catalano, paragraph 14: “primary shading is still done on the rendering core”) and a second workload (see e.g. Catalano, paragraph 14: “executing one or more of time-consuming processes of the rendering core on an accelerator using approximate shading”) are performed serially by the first processor (see e.g. Catalano, paragraph 24: “processor 112 may generate… a scene geometry and a point cloud”; paragraph 37: “geometry data of an image, e.g. a scene geometry, and a point cloud are created. The point cloud is created by placing points on surfaces of the scene geometry”; and paragraph 38: “the placement process of the points is organized in passes, where each pass refines the result of the previous pass by adaptively adding new points”) and the second processor (see e.g. Catalano, paragraph 48: “accelerator employs a single generic approximate shader, e.g., a generic BRDF model in case of irradiance calculation, that is fed with the data of the pre-sampled points that are closest to the hit points”; and paragraph 49: “At each hit point, the result of the computation of the approximate shader is: result=P.diffuse_color*(P.incoming_direct_light+.pi.diffuse_ray)”).
With respect to claim 26: Claim 26 is directed to a computer-implemented method corresponding to the active functions performed by executing the API stored on the machine-readable medium recited in claim 8; please see the rejection directed to claim 8 above which also covers the limitations recited in claim 26.
With respect to claims 28 and 33: Claims 28 and 33 are directed to a processor comprising one or more circuits that implements an API corresponding to the API stored in the machine-readable medium recited in claims 1 and 6, respectively; please see the rejections directed to claims 1 and 6 above, which also cover the limitations recited in claims 28 and 33. Note that Catalano also discloses a processor with one or more circuits to implement an API (see e.g. Catalano, Fig. 1) corresponding to the API stored in the machine-readable medium recited in claims 1 and 6.
With respect to claim 37, Catalano as modified teaches: The non-transitory machine-readable medium of claim 1,
Catalano does not, but Nachimuthu teaches:
wherein the at least two processors comprise at least two different accelerators (see e.g. Nachimuthu, paragraph 28: “Physical resources 106 may include resources of multiple types, such as… accelerators, field-programmable gate arrays (FPGAs)”; and paragraph 30: “accelerators (e.g., graphics accelerators, FPGAs, ASICs, etc.)”), and
wherein the information corresponding to the intermediate results is shared between the at least two different accelerators via one or more direct communication channels (see e.g. Nachimuthu, paragraph 92: “configuring a circuit switch to link the two or more processing units to process the workload, the two or more processing units each linked to each other via dual paths of communication”; and Fig. 19, step 1910).
Catalano and Nachimuthu are analogous art because they are in the same field of endeavor: distributing computational tasks amongst multiple processors. Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Catalano with the teachings of Nachimuthu. The motivation/suggestion would be to improve the resource utilization efficiency (see e.g. Nachimuthu, paragraph 4).
Claims 2-4, 9, 11-14, 18, 20-23, 27, 29-32, and 34-35 are rejected under 35 U.S.C. 103 as being unpatentable over Catalano in view of Nachimuthu as applied to claims 1, 10, 19, and 28 above, and further in view of Saillet et al. (US 2020/0379803 A1; hereinafter Saillet).
With respect to claim 2, Catalano as modified teaches: The non-transitory machine-readable medium of claim 1, wherein performance of the executable instructions further causes the one or more processors to, in response to the API call:
cause performance of a first workload (see e.g. Catalano, paragraph 14: “primary shading… Depth-of-Field, Motion Blur, multiple Reflections and Refractions”) of the plurality of workloads on the first processor (see e.g. Catalano, paragraph 14: “primary shading is still done on the rendering core of the production renderer as before and thus important rendering features, like for example Depth-of-Field, Motion Blur, multiple Reflections and Refractions, would continue to work as before”); and
cause performance of a second workload (see e.g. Catalano, paragraph 15: “rendering core functions for determining the irradiance for global illumination (GI), looking up values for Image Based Lighting lookup (IBL), casting shadow rays and determining occlusion or ambient occlusion or obscurance, or executing a light loop for illumination computations”) of the plurality of workloads on the second processor (see e.g. Catalano, paragraph 15: “executing the requests in large batches on an accelerator”).
Catalano does not, but Saillet teaches:
remove a workflow generated by an application from a queue (see e.g. Saillet, paragraph 41: “in a queue, a list of workflows to execute”; and paragraph 2: “execution of a new workflow… after one of the running workflows has completed, and will start the next workflow in the queue… workflows in the queue have to wait until they are at the top of the queue and the conditions allowing a new workflow to be executed are met”), the workflow having a plurality of workloads (see e.g. Saillet, paragraph 53: “workflows may be data processing workflows that perform any number of processing operations on the data. For example, a workflow may fetch data, sort the data, apply a filter to the data, and transform the data so as to gain insight from the data”);
Catalano and Saillet are analogous art because they are in the same field of endeavor: managing workload distribution for processing. Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Catalano with the teachings of Saillet. The motivation/suggestion would be to improve resource utilization (see e.g. Saillet, paragraph 41).
With respect to claim 3, Catalano as modified teaches: The non-transitory machine-readable medium of claim 2, wherein:
each workload in the plurality of workloads has an associated acceleration profile (see e.g. Catalano, paragraph 14: “one or more of time-consuming processes” and “primary shading is still done on the rendering core”) provided by the application (see e.g. Catalano, paragraph 14: “rendering process of an image by executing one or more of time-consuming processes of the rendering core on an accelerator using approximate shading. As such, the primary shading is still done on the rendering core of the production renderer”; and paragraph 15: “rendering acceleration is based on the recognition that custom shaders call or invoke functions of the rendering core and these calls or invocations may include procedures that are very time consuming when executed in the rendering core”); and
the API directs an individual workload of the plurality of workloads to a particular processor (see e.g. Catalano, paragraph 27: “API 120 is configured to facilitate the communication between the rendering core 110, the accelerator 130 and the custom shader 140”; and paragraph 14: “executing one or more of time-consuming processes of the rendering core on an accelerator… the primary shading is still done on the rendering core”) based at least in part on an acceleration profile associated with the individual workload (see e.g. Catalano, paragraph 14: “executing one or more of time-consuming processes of the rendering core on an accelerator using approximate shading. As such, the primary shading is still done on the rendering core of the production renderer as before”; and paragraph 15: “recognition that custom shaders call or invoke functions of the rendering core and these calls or invocations may include procedures that are very time consuming”).
With respect to claim 4, Catalano as modified teaches: The non-transitory machine-readable medium of claim 3, wherein:
the first workload has a first acceleration profile (see e.g. Catalano, paragraph 14: “primary shading is still done on the rendering core of the production renderer as before and thus important rendering features, like for example Depth-of-Field, Motion Blur, multiple Reflections and Refractions, would continue to work as before”) and the second workload has a second acceleration profile (see e.g. Catalano, paragraph 14: “one or more of time-consuming processes of the rendering core on an accelerator using approximate shading”; and paragraph 15: “rendering core functions for determining the irradiance for global illumination (GI), looking up values for Image Based Lighting lookup (IBL), casting shadow rays and determining occlusion or ambient occlusion or obscurance, or executing a light loop for illumination computations”); and
the first acceleration profile (see e.g. Catalano, paragraph 14: “primary shading is still done on the rendering core of the production renderer as before and thus important rendering features, like for example Depth-of-Field, Motion Blur, multiple Reflections and Refractions, would continue to work as before”) is different than the second acceleration profile (see e.g. Catalano, paragraph 14: “executing one or more of time-consuming processes of the rendering core on an accelerator”; and paragraph 28: “accelerator 130 can compute the rendering function calls in an accelerated manner using a generic shader model and the data from the point cloud. This is called approximate shading”).
With respect to claim 9, Catalano as modified teaches: The non-transitory machine-readable medium of claim 2,
Catalano does not, but Saillet teaches:
wherein the API obtains the workflow from the queue in a single dequeue operation (see e.g. Saillet, paragraph 41: “in a queue, a list of workflows to execute”; paragraph 2: “execution of a new workflow… after one of the running workflows has completed, and will start the next workflow in the queue”; and paragraph 99: “scheduling (block 501) a number of workflows includes scheduling workflows based on a first-come first-serve system”).
Catalano and Saillet are analogous art because they are in the same field of endeavor: managing workload distribution for processing. Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Catalano with the teachings of Saillet. The motivation/suggestion would be to improve resource utilization (see e.g. Saillet, paragraph 41).
With respect to claim 11, Catalano as modified teaches: The computer system of claim 10, wherein performance of the API further causes the one or more processors to:
perform a first workload (see e.g. Catalano, paragraph 14: “primary shading… Depth-of-Field, Motion Blur, multiple Reflections and Refractions”) of the plurality of workloads on a first processor (see e.g. Catalano, paragraph 14: “primary shading is still done on the rendering core of the production renderer as before and thus important rendering features, like for example Depth-of-Field, Motion Blur, multiple Reflections and Refractions, would continue to work as before”); and
cause the first processor to (see e.g. Catalano, paragraph 27: “transfer the scene geometry and the point cloud data from the memory 114 of the rendering core 110 to the accelerator”; and paragraphs 37-42) perform a second workload (see e.g. Catalano, paragraph 15: “rendering core functions for determining the irradiance for global illumination (GI), looking up values for Image Based Lighting lookup (IBL), casting shadow rays and determining occlusion or ambient occlusion or obscurance, or executing a light loop for illumination computations”) of the plurality of workloads on a second processor (see e.g. Catalano, paragraph 15: “executing the requests in large batches on an accelerator”).
Catalano does not, but Saillet teaches:
remove, from a queue of workflows, a plurality of workloads (see e.g. Saillet, paragraph 53: “workflows may be data processing workflows that perform any number of processing operations on the data. For example, a workflow may fetch data, sort the data, apply a filter to the data, and transform the data so as to gain insight from the data”) in a form of a single workflow submitted by an application (see e.g. Saillet, paragraph 41: “in a queue, a list of workflows to execute”; and paragraph 2: “execution of a new workflow… after one of the running workflows has completed, and will start the next workflow in the queue… workflows in the queue have to wait until they are at the top of the queue and the conditions allowing a new workflow to be executed are met”);
Catalano and Saillet are analogous art because they are in the same field of endeavor: managing workload distribution for processing. Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Catalano with the teachings of Saillet. The motivation/suggestion would be to improve resource utilization (see e.g. Saillet, paragraph 41).
With respect to claim 12, Catalano as modified teaches: The computer system of claim 11, wherein each workload in the plurality of workloads has an associated acceleration profile (see e.g. Catalano, paragraph 14: “one or more of time-consuming processes” and “primary shading is still done on the rendering core”; and paragraph 15: “recognition that custom shaders call or invoke functions of the rendering core and these calls or invocations may include procedures that are very time consuming”) that identifies a capability of an accelerator (see e.g. Catalano, paragraph 14: “approximate shading”) required to perform the associated workload (see e.g. Catalano, paragraph 14: “executing one or more of time-consuming processes of the rendering core on an accelerator using approximate shading”; and paragraph 28: “accelerator 130 can compute the rendering function calls in an accelerated manner using a generic shader model and the data from the point cloud. This is called approximate shading”).
With respect to claims 13-14: Claims 13-14 are directed to a computer system comprising one or more processors and machine-readable media to store executable instructions corresponding to the machine-readable media having stored thereon an API as disclosed in claims 4 and 3, respectively; please see the rejections directed to claims 3-4 above, which also cover the limitations recited in claims 13-14.
With respect to claim 18, Catalano as modified teaches: The computer system of claim 11, wherein the first processor or the second processor (see e.g. Catalano, paragraph 14: “executing one or more of time-consuming processes of the rendering core on an accelerator using approximate shading”) perform portions of a workflow serially (see e.g. Catalano, paragraph 48: “accelerator employs a single generic approximate shader, e.g., a generic BRDF model in case of irradiance calculation, that is fed with the data of the pre-sampled points that are closest to the hit points”; and paragraph 49: “At each hit point, the result of the computation of the approximate shader is: result=P.diffuse_color*(P.incoming_direct_light+.pi.diffuse_ray)”).
With respect to claims 20-23 and 27: Claims 20-23 and 27 are directed to a computer-implemented method corresponding to the active functions performed by the machine-readable medium and the computer system recited in claims 2, 12, 4, 3, and 9, respectively; please see the rejections directed to claims 2-4, 9, and 12 above which also cover the limitations recited in claims 20-23 and 27.
With respect to claims 29-30: Claims 29-30 are directed to a processor comprising one or more circuits that implements an API corresponding to the API stored in the machine-readable medium recited in claims 2-3, respectively; please see the rejections directed to claims 2-3 above which also cover the limitations recited in claims 29-30.
With respect to claim 31, Catalano as modified teaches: The processor of claim 30, wherein:
individual workloads in the plurality of workloads have different acceleration profiles (see e.g. Catalano, paragraph 14: “executing one or more of time-consuming processes of the rendering core on an accelerator using approximate shading. As such, the primary shading is still done on the rendering core”; and paragraph 15: “rendering core functions for determining the irradiance for global illumination (GI), looking up values for Image Based Lighting lookup (IBL), casting shadow rays and determining occlusion or ambient occlusion or obscurance, or executing a light loop for illumination computations”); and
the different acceleration profiles cause the plurality of workloads to be performed by different types of accelerators (see e.g. Catalano, paragraph 28: “In one embodiment, the accelerator 130 is a GPU. In another embodiment, the accelerator 130 is a reconfigurable/programmable accelerator”).
With respect to claim 32: Claim 32 is directed to a processor comprising one or more circuits that implements an API corresponding to the API stored in the machine-readable medium recited in claim 3; please see the rejection directed to claim 3 above which also covers the limitations recited in claim 32.
With respect to claim 34, Catalano as modified teaches: The processor of claim 29, wherein the information corresponding to the intermediate results further includes an intermediate result produced by the first processor (see e.g. Catalano, paragraph 24: “the processor 112 may generate and export a scene geometry and a point cloud to the accelerator 130”; paragraph 37: “geometry data of an image, e.g. a scene geometry, and a point cloud are created. The point cloud is created by placing points on surfaces of the scene geometry, and sampling properties at each point”; and paragraph 42: “Once the scene geometry and the point cloud are created, they are exported to an accelerator”).
With respect to claim 35: Claim 35 is directed to a processor comprising one or more circuits that implements an API corresponding to the API stored in the machine-readable medium recited in claim 9; please see the rejection directed to claim 9 above which also covers the limitations recited in claim 35.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Catalano in view of Nachimuthu as applied to claim 1 above, and further in view of Kaminski et al. (US 2011/0161620 A1; hereinafter Kaminski).
With respect to claim 5, Catalano as modified teaches: The non-transitory machine-readable medium of claim 1, wherein the information is transferred from a first processor of the two processors to a second processor of the two processors (see e.g. Catalano, paragraph 27: “transfer the scene geometry and the point cloud data from the memory 114 of the rendering core 110 to the accelerator”)
Catalano does not but Kaminski teaches:
using direct memory access (see e.g. Kaminski, paragraph 4: “One technique commonly used to share memory between a main CPU and accelerator devices is called Direct Memory Access (DMA)”).
Catalano and Kaminski are analogous art because they are in the same field of endeavor: data sharing between a processor and an accelerator. Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Catalano with the teachings of Kaminski. The motivation/suggestion would be to improve data processing efficiency by increasing the memory access speed (see e.g. Kaminski, paragraph 4).
Claim 36 is rejected under 35 U.S.C. 103 as being unpatentable over Catalano in view of Nachimuthu and Saillet as applied to claim 29 above, and further in view of Balle et al. (US 2018/0024861 A1; hereinafter Balle).
With respect to claim 36, Catalano teaches: The processor of claim 29,
Catalano does not but Balle teaches:
wherein the first processor is a virtual processor (see e.g. Balle, paragraph 46: “execute one or more applications or processes (i.e., workloads), such as in virtual machines or containers”).
Catalano and Balle are analogous art because they are in the same field of endeavor: workload management for processors and accelerators. Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Catalano with the teachings of Balle. The motivation/suggestion would be to improve resource utilization (see e.g. Balle, paragraph 44).
Response to Arguments
Applicant's arguments filed 09/29/2025 have been fully considered but they are not persuasive. In detail:
(i) Regarding Applicant’s arguments with respect to the rejections under 35 U.S.C. §103 directed to claims 8, 17, 26, and 34 (Remarks, page 15), note that Catalano discloses an accelerator that performs approximate shading calculations and provides the results of these calculations (i.e., intermediate results) to a custom shader as part of instructions for the custom shader to perform final rendering calculations (see e.g. Catalano, paragraph 47: “In step 340, accelerator computes the results of the forwarded rendering core function for each Query Point using the approximate shading”; and paragraph 54: “rendering core returns the computed results from the step 340 to the custom shader. The custom shader uses the computed results to compute the final render result”).
That is, Catalano discloses providing the accelerator’s approximate shading results as instructions to the custom shader (i.e., a second processor) to produce a final render result.
Consequently, Catalano teaches the limitation “the information corresponding to the intermediate results further includes instructions to be performed by a second processor” as recited in claim 8, and the similar limitations recited in claims 17, 26, and 34. For more details, please see the rejections directed to claims 8, 17, 26, and 34 above.
Applicant’s arguments with respect to the limitations “select… based at least in part on respective characteristics of the two or more workloads” recited in claim 1, and the similar limitations recited in claims 10, 19, and 28, and the limitations recited in newly added claim 37 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
More specifically, even though Catalano discloses utilizing two or more processors (e.g. a rendering core 110, an accelerator 130, a custom shader 140) to perform two or more workloads (e.g. starting a rendering process by the rendering core, computing rendering core functions by the accelerator, computing a final rendering result by the custom shader, etc.), which in turn teaches the limitation “at least two processors from a plurality of processors to perform two or more workloads”, Catalano does not explicitly disclose selecting these processors based on the characteristics of the workloads.
However, Nachimuthu teaches these features. For more details, please see the Claim Rejections – 35 USC §103 section above.
CONCLUSION
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Guim Bernat (US 2019/0121671 A1) discloses a system that receives a workload request including workload characteristics (e.g. instructions that are to be performed, acceleration type, service level agreement (SLA) definitions, model type, performance requirements, workload definition, etc.) and selects a compute resource or accelerator using telemetry data from compute platforms and accelerators to determine which compute resource or accelerator to select to perform the workload request (see paragraph 15).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Umut Onat whose telephone number is (571)270-1735. The examiner can normally be reached M-Th 9:00-7:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kevin L Young can be reached on (571) 270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/UMUT ONAT/Primary Examiner, Art Unit 2194