Prosecution Insights
Last updated: April 19, 2026
Application No. 18/751,915

PARALLEL MULTI-CLIENT RAY TRACING TASK PROCESSING

Non-Final OA: §102, §103, §112

Filed: Jun 24, 2024
Examiner: TRAN, JENNY NGAN
Art Unit: 2615
Tech Center: 2600 (Communications)
Assignee: Advanced Micro Devices, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 20% (At Risk)
Projected OA Rounds: 1-2
Estimated Time to Grant: 2y 6m
Grant Probability with Interview: 70%

Examiner Intelligence

Career Allow Rate: 20% (1 granted / 5 resolved; -42.0% vs TC avg)
Interview Lift: +50.0% across resolved cases with interview
Typical Timeline: 2y 6m avg prosecution; 31 applications currently pending
Career History: 36 total applications across all art units

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 49.0% (+9.0% vs TC avg)
§102: 21.8% (-18.2% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 5 resolved cases.
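The rates above follow directly from the examiner's career counts. As an illustrative sketch only (the 62% Tech Center average is back-derived from the stated 20% rate and -42.0% delta, not taken from PTO data):

```python
# Hypothetical helpers showing how the allow-rate figures above combine.
# All inputs are illustrative; function names are this sketch's own.

def allow_rate(allowed: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * allowed / resolved

def delta_vs_tc(examiner_rate: float, tc_avg: float) -> float:
    """Signed difference between an examiner's rate and the TC average."""
    return examiner_rate - tc_avg

career = allow_rate(allowed=1, resolved=5)  # 20.0, matching the card above
print(f"Career allow rate: {career:.1f}%")
print(f"vs TC avg: {delta_vs_tc(career, 62.0):+.1f}%")  # -42.0% given a 62% TC average
```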

Office Action

Grounds of rejection: §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

Claims 1-20 are currently pending in the present application, with claims 1, 10, and 15 being independent.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 9/10/2024, 12/18/2024, and 11/04/2025 have been considered by the examiner.

Specification

The specification is objected to as failing to provide proper antecedent basis for the claimed subject matter. See 37 CFR 1.75(d)(1) and MPEP § 608.01(o). Correction of the following is required:

Claim 7: “cause the first stream of frames to be sent to the first client device prior to finishing generating the second stream of frame”

Claim 8: “subsequent to generating at least a portion of the first stream of frames and prior to causing the first stream of frames to being sent, receiving a request to depict at least a third portion of the scene for a third client device”

Claim 19: “subsequent to sending the second stream of frames to the second client device, generate at least a portion of the first stream of frames”

Claim 20: “in parallel with generating the first stream of frames but subsequent to sending the second stream of frames to the second client device, generate at least a portion of a third stream of frames of the scene”

Claim 20: “generate at least a portion of a third stream of frames of the scene for communication to the second client device via the network interface”

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 7-8 and 19-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
The specification does not reasonably convey to one of ordinary skill in the art that the inventor had possession of the claimed ordering constraints governing multi-client stream generation and transmission, including:

Claim 7: “cause the first stream of frames to be sent to the first client device prior to finishing generating the second stream of frame”

Claim 8: “subsequent to generating at least a portion of the first stream of frames and prior to causing the first stream of frames to being sent, receiving a request to depict at least a third portion of the scene for a third client device”

Claim 19: “subsequent to sending the second stream of frames to the second client device, generate at least a portion of the first stream of frames”

Claim 20: “in parallel with generating the first stream of frames but subsequent to sending the second stream of frames to the second client device, generate at least a portion of a third stream of frames of the scene”

In particular, the disclosure fails to describe any sequencing mechanism or state management sufficient to implement these ordering requirements across generated streams. Rather, the originally filed disclosure describes only generalized generation and communication of streams, in Par. 0018 (“in some cases, requests 140, 142, and 144 are received simultaneously or at similar times. In other cases, one or more of requests 140, 142, and 144 are received before or later than others of requests 140, 142, and 144”) and Par. 0020 (“Although sorting and compacting is described in the above example with sorting occurring before compacting, in some implementations, compacting occurs before sorting. In some implementations, some or all of the sorting, compacting, or both occurs concurrently”), without the claimed sequencing. Therefore, the disclosure does not demonstrate possession of the specific ordering limitations claimed.

Additionally, the specification does not reasonably convey to one of ordinary skill in the art that the inventor had possession of the claimed “generating a third stream of frames of the scene for communication to the second client device” as recited in claim 20. The disclosure describes generating and communicating streams of frames to different client devices (e.g., a first stream to a first client and a second stream to a second client), but does not describe generating multiple different streams for the same device, nor does it describe any architecture, control logic, or use case in which the second client receives a second stream and then subsequently receives a third, different stream of frames of the scene. Accordingly, the originally filed disclosure does not demonstrate possession of the “third stream of frames of the scene for communication to the second client device” limitation recited in claim 20. Thus, one of ordinary skill in the art would not know how to make and/or use the claimed invention without undue experimentation.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-2, 4-6, 9-10, 13, and 15 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Urbach (US 8803892 B2).

Regarding claim 1, Urbach discloses a processing server (Fig. 1; server 120) comprising: a memory configured to store (Fig. 2 and Col. 4, lines 43-59; rendering target 200 is a memory storage, such as a frame buffer, within GPU 121…) ray tracing data for a scene (Col. 10, lines 4-10; Server 120 may store the information received from each client…3D model and rendering parameters are to be used for that rendering. Col. 11, lines 24-36; …rendering technique…implement ray tracing algorithms on the server and perform the renderings using ray tracing. Examiner's note: model data and rendering parameters stored by the server is the ray tracing data for the scene) and a processor (Fig. 1; GPU 121, CPU 122) configured to: receive requests to perform ray tracing tasks (Col. 10, lines 55-58; the client may notify the server of the program instance and request that the server perform renderings for this program instance) to depict at least a first portion of the scene for a first client device and at least a second portion of the scene for a second client device, wherein the first client device is different from the second client device (Fig. 1 and Col. 7, lines 1-19; multiple clients 130 may execute multiple instances of the same computer program and yet have different rendering parameters 123. For example, suppose clients 130A and 130D execute two instances of the same game application…the two players may see different images on the displays of clients 130A and 130D…); generate for communication to the first client device (Col. 4, lines 15-28; server 120 may transmit data to clients 130 in the form of video streams; and clients 130 may each transmit data to server 120…), based on the ray tracing data (Col. 11, lines 24-36; …rendering technique…implement ray tracing algorithms on the server and perform the renderings using ray tracing), a first stream of frames of the scene (Col. 7, lines 54-67; the first rendered image may be encoded as a frame of a first video stream and transmitted to client 130A…); and generate for communication to the second client device (Col. 4, lines 15-28; server 120 may transmit data to clients 130 in the form of video streams; and clients 130 may each transmit data to server 120…), in parallel with generating the first stream of frames (Col. 7, lines 24-51; server 120 may concurrently perform four renderings for the four program instances 131 executing on clients 130 respectively based on the four sets of rendering parameters 123…server 120 may concurrently encode the four rendered images stored in rendering target 200), based on ray tracing data (Col. 11, lines 24-36; …rendering technique…implement ray tracing algorithms on the server and perform the renderings using ray tracing), at least a portion of a second stream of frames of the scene (Col. 7, lines 54-67; …The second rendered image may be encoded as a frame of a second video stream and transmitted to client 130B…), wherein the second stream of frames is different from the first stream of frames (Col. 7, lines 53-54; each of the rendered images may be encoded as a single frame of a different video stream).

Regarding claim 2, Urbach discloses the processing server of claim 1, and further discloses wherein generating the second stream of frames (Col. 7, lines 24-51; server 120 may concurrently perform four renderings for the four program instances 131 executing on clients 130 respectively based on the four sets of rendering parameters 123…lines 54-67; …The second rendered image may be encoded as a frame of a second video stream and transmitted to client 130B…) is performed in response to determining that the scene of the first stream of frames is the same as the scene of the second stream of frames (Col. 7, lines 11-14; If the two players play the same game interactively, at a particular time, even if the two players are both in the same game scene, they may view the game scene from different positions and different angle. Examiner's note: server identifies that both renderings are derived from the same game scene, and responsive to that identification and rendering parameters, performs concurrent generation of the respective frame streams).

Regarding claim 4, Urbach discloses the processing server of claim 1, and further discloses a network interface configured to communicate with the first client device and the second client device via a network (Fig. 1 and Col. 3, lines 61-67; server 120 is connected with each of clients 130 via separate connections 150. In particular embodiments, connections 150 between server 120 and clients 130 may be network connections via a computer network).

Regarding claim 5, Urbach discloses the processing server of claim 4, and further discloses wherein the network interface is configured to communicate with the first client device via a first network connection and the network interface is configured to communicate with the second client device via a second network connection (Fig. 1; TCP Sockets 124A-124D, connections 150A-150D, and Col. 4, lines 1-9; each of network connections 150 may be a Transport Control Protocol (TCP) connection…server 120 may have multiple TCP sockets 124, and each of clients 130 may be connected to a different TCP socket 124 via a separate TCP connection 150. For example, client 130A may be connected to TCP socket 124A of server 120 via TCP connection 150A).

Regarding claim 6, Urbach discloses the processing server of claim 1, and further discloses wherein the processor is further configured to: save the first stream of frames in a first portion of the memory allocated to the first client device (Fig. 2-3 and Col. 5, lines 12-57; each of clients 130 is allocated one or more rendering-target units…suppose client 130A is a notebook computer with a relatively low-resolution display (e.g., 1024 pixels-by-768 pixels). In this case, a single rendering-target unit may have sufficient memory space to store rendered images of 1024 pixels-by-768 pixels or smaller. Thus, client 130A may be allocated one rendering-target unit (e.g., rendering-target unit 211)…); and save the second stream of frames in a second portion of the memory allocated to the second client device (Fig. 2-3 and Col. 5, lines 12-57; each of clients 130 is allocated one or more rendering-target units…suppose client 130B is a desktop computer having a display of 1920 pixels-by-1680 pixels. In this case, four rendering-target units may be needed to store images of 1920 pixels-by-1680 pixels or smaller. Thus, client 130B may be allocated four rendering-target units (e.g., rendering-target units 212, 213, 222, and 223)…).

Regarding claim 9, Urbach discloses the processing server of claim 6, and further discloses wherein the first portion of the memory is a first frame buffer allocated to the first client device, and wherein the second portion of the memory is a second frame buffer allocated to the second client device (Fig. 3 and Col. 10, lines 16-52; the rendering target may be a frame buffer…allocate one or more rendering-target units to each of the clients currently connected to and supported by the server (step 302)…The number of rendering-target units allocated to a client may depend on the size or the resolution of the video frame buffer or the display of that particular client (e.g., a client having a high-resolution display may be allocated more rendering-target units than a client having a low-resolution display)).

Regarding claim 10, claim 10 is the method claim corresponding to system claim 1, and is accordingly rejected using substantially similar rationale to that set forth with respect to claim 1.

Regarding claim 13, Urbach discloses the method of claim 10, and further discloses sending the first stream of frames to the first client device via a first network connection (Fig. 1; TCP socket 124A <-> Connection 150A <-> Client 130A. Col. 4, lines 5-15; server 120 and client 130A may exchange data bi-directionally via connection 150A…server 120 may transmit data to clients 130 in the form of video streams); and sending the second stream of frames to the second client device via a second network connection (Fig. 1; TCP socket 124B <-> Connection 150B <-> Client 130B. Col. 4, lines 5-15; server 120 and client 130A may exchange data bi-directionally via connection 150A…server 120 may transmit data to clients 130 in the form of video streams).

Regarding claim 15, Urbach discloses a processing server (Fig. 1; server 120) comprising: a memory configured to store (Fig. 2 and Col. 4, lines 43-59; rendering target 200 is a memory storage, such as a frame buffer, within GPU 121…) ray tracing data of a scene (Col. 10, lines 4-10; Server 120 may store the information received from each client…3D model and rendering parameters are to be used for that rendering. Col. 11, lines 24-36; …rendering technique…implement ray tracing algorithms on the server and perform the renderings using ray tracing. Examiner's note: model data and rendering parameters stored by the server is the ray tracing data for the scene) and a processor (Fig. 1; GPU 121, CPU 122) configured to: compare first ray tracing tasks to be performed for a first client device to the ray tracing data (Col. 6, lines 56-67; multiple clients 130 may execute multiple instances of the same computer program and yet have different rendering parameters 123…Col. 11, lines 11-24; maintain one or more sets of rendering parameters for each of the clients…each set of rendering parameters corresponds to an instance of a computer program executing on a particular client and indicates how rendering is to be performed for that particular program instance…Examiner's note: determining how to render the stored scene using client-specific rendering parameters reads on comparing ray tracing tasks to ray tracing data); in response to the first ray tracing tasks corresponding to the ray tracing data (Col. 11, lines 24-36; …rendering technique…implement ray tracing algorithms on the server and perform the renderings using ray tracing), generate a first stream of frames of the scene (Col. 7, lines 54-67; the first rendered image may be encoded as a frame of a first video stream and transmitted to client 130A…) for communication to the first client device (Col. 4, lines 15-28; server 120 may transmit data to clients 130 in the form of video streams; and clients 130 may each transmit data to server 120…) via a network interface (Fig. 1 and Col. 3, lines 61-67; server 120 is connected with each of clients 130 via separate connections 150. In particular embodiments, connections 150 between server 120 and clients 130 may be network connections via a computer network); compare second ray tracing tasks to be performed for a second client device to the ray tracing data (Col. 6, lines 56-67; multiple clients 130 may execute multiple instances of the same computer program and yet have different rendering parameters 123…Col. 11, lines 11-24; maintain one or more sets of rendering parameters for each of the clients…each set of rendering parameters corresponds to an instance of a computer program executing on a particular client and indicates how rendering is to be performed for that particular program instance…Examiner's note: determining how to render the stored scene using client-specific rendering parameters reads on comparing ray tracing tasks to ray tracing data), wherein the first client device is different from the second client device (Fig. 1 and Col. 7, lines 1-19; multiple clients 130 may execute multiple instances of the same computer program and yet have different rendering parameters 123. For example, suppose clients 130A and 130D execute two instances of the same game application…the two players may see different images on the displays of clients 130A and 130D…); and in response to the second ray tracing tasks corresponding to the ray tracing data (Col. 11, lines 24-36; …rendering technique…implement ray tracing algorithms on the server and perform the renderings using ray tracing), in parallel with generating the first stream of frames (Col. 7, lines 24-51; server 120 may concurrently perform four renderings for the four program instances 131 executing on clients 130 respectively based on the four sets of rendering parameters 123…server 120 may concurrently encode the four rendered images stored in rendering target 200), generate at least a portion of a second stream of frames of the scene (Col. 7, lines 54-67; …The second rendered image may be encoded as a frame of a second video stream and transmitted to client 130B…) for communication to the second client device (Col. 4, lines 15-28; server 120 may transmit data to clients 130 in the form of video streams; and clients 130 may each transmit data to server 120…) via the network interface (Fig. 1 and Col. 3, lines 61-67; server 120 is connected with each of clients 130 via separate connections 150. In particular embodiments, connections 150 between server 120 and clients 130 may be network connections via a computer network), wherein the second stream of frames is different from the first stream of frames (Col. 7, lines 53-54; each of the rendered images may be encoded as a single frame of a different video stream).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3 and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Urbach (US 8803892 B2) in view of Saleh et al. (US 20220198739), hereinafter referred to as “Saleh”.
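As technical context for the anticipation mapping above (shared scene data on the server, per-client rendering parameters, separate frame streams generated in parallel), the architecture attributed to Urbach can be sketched as follows. This is a minimal illustration only, not code from any cited reference; all names (SceneData, render_frame, etc.) are hypothetical:

```python
# Illustrative sketch of a server holding one set of scene data and
# generating a different frame stream for each connected client in
# parallel, each stream using that client's own rendering parameters.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class SceneData:
    """Shared ray tracing data for the scene: model plus per-client params."""
    model: str
    params: dict  # client_id -> rendering parameters (viewpoint, resolution, ...)

def render_frame(scene: SceneData, client_id: str, frame_no: int) -> str:
    # Stand-in for a ray traced render using the client's own parameters.
    view = scene.params[client_id]
    return f"{client_id}:frame{frame_no}@{view}"

def generate_stream(scene: SceneData, client_id: str, n_frames: int) -> list:
    # One stream of frames per client; each client sees its own view.
    return [render_frame(scene, client_id, i) for i in range(n_frames)]

scene = SceneData(model="level1", params={"clientA": "pos1", "clientB": "pos2"})
with ThreadPoolExecutor() as pool:
    # Streams for different clients are generated concurrently.
    futs = {c: pool.submit(generate_stream, scene, c, 3) for c in scene.params}
    streams = {c: f.result() for c, f in futs.items()}
```

The point of the sketch is the claim-mapping structure: one stored scene, two distinct clients, and two distinct streams produced in parallel from the same data.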
Regarding claim 3, Urbach discloses the processing server of claim 2, and further discloses determining that the scene of the first stream of frames is the same as the scene of the second stream of frames (Col. 7, lines 11-14; If the two players play the same game interactively, at a particular time, even if the two players are both in the same game scene, they may view the game scene from different positions and different angle). Urbach does not disclose determining that the ray tracing tasks for the first client device share a same bounding volume hierarchy (BVH) of the ray tracing data as the ray tracing tasks for the second client device.

In the same art of ray tracing a scene, Saleh discloses determining that the ray tracing tasks for the first client device share a same bounding volume hierarchy (BVH) of the ray tracing data as the ray tracing tasks for the second client device (Par. 0011; performing bounding volume hierarchy (“BVH”) traversal in multiple accelerated processing devices (“APDs”). Par. 0044; The APDs 116 store a copy of the bounding volume hierarchy data 602 for the scene being rendered. In some examples, the bounding volume hierarchy data 602 that is stored in each APD 116 is the same).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Saleh’s BVH ray-tracing data reuse into Urbach’s concurrent multi-stream rendering server. Doing so provides a common acceleration structure that avoids redundant scene processing, reduces memory/bandwidth overhead, and improves latency for concurrently rendered views of the same scene.
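The BVH-sharing determination discussed in this rejection can be pictured with a toy check: before scheduling two clients' ray tracing tasks together, confirm they traverse the same acceleration structure. This sketch is purely illustrative of the concept; the types and identifiers are this example's own, not from either reference:

```python
# Toy model of "two clients' ray tracing tasks share the same BVH":
# a task records which scene and which acceleration structure it uses,
# and a scheduler-side predicate compares them before co-scheduling.
from dataclasses import dataclass

@dataclass(frozen=True)
class RayTracingTask:
    client_id: str
    scene_id: str
    bvh_id: str  # identifies the BVH the task traverses

def share_bvh(a: RayTracingTask, b: RayTracingTask) -> bool:
    """True when both tasks traverse the same BVH of the stored scene data."""
    return a.scene_id == b.scene_id and a.bvh_id == b.bvh_id

t1 = RayTracingTask("clientA", "scene0", "bvh0")
t2 = RayTracingTask("clientB", "scene0", "bvh0")
if share_bvh(t1, t2):
    # Reuse one acceleration structure across both clients' tasks,
    # avoiding a rebuilt or duplicated BVH per client.
    pass
```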
Regarding claim 16, Urbach discloses the processing server of claim 15, but does not disclose wherein generating the second stream of frames in parallel with the first stream of frames is performed in response to determining that the first ray tracing tasks share a same bounding volume hierarchy (BVH) of the ray tracing data as the second ray tracing tasks.

In the same art of ray tracing a scene, Saleh discloses wherein generating the second stream of frames in parallel with the first stream of frames (Par. 0019; The APD 116 includes compute units 132 (together, parallel processing units 202) that include one or more SIMD units 138 that perform operations at the request of the processor 102 in a parallel manner according to a SIMD paradigm) is performed in response to determining that the first ray tracing tasks share a same bounding volume hierarchy (BVH) of the ray tracing data as the second ray tracing tasks (Par. 0044; The APDs 116 store a copy of the bounding volume hierarchy data 602 for the scene being rendered. In some examples, the bounding volume hierarchy data 602 that is stored in each APD 116 is the same).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Saleh’s parallel processing technique into Urbach’s concurrent multi-stream rendering server. Doing so allows the server to recognize the same underlying scene structures so that tasks can be scheduled together for more efficient SIMD/GPU execution, yielding the predictable results of improved load balancing and utilization while reducing per-stream render time.

Regarding claim 17, Urbach in view of Saleh discloses the processing server of claim 16, and Urbach further discloses wherein a first ray (Col. 11, lines 24-36; …rendering technique…implement ray tracing algorithms on the server and perform the renderings using ray tracing) for the first client device traces a different path through the scene than a second ray (Col. 11, lines 24-36; …rendering technique…implement ray tracing algorithms on the server and perform the renderings using ray tracing) for the second client device (Col. 6, lines 56-67; server 120 may maintain a different set of rendering parameters 123 for each of clients 130 currently connected to server 120. For example, rendering parameter set 123A corresponds to client 130A. Each set of rendering parameters 123 may be obtained from the corresponding instance of computer program 131, and describe how renderings are to be performed for that instance of computer program 131. For example, rendering parameter set 123A may include rendering parameters that describe how renderings are to be performed for program instance 131A and may be updated based on the current state of program instance 131A), and wherein the first ray and the second ray are processed in parallel (Col. 7, lines 24-51; server 120 may concurrently perform four renderings for the four program instances 131 executing on clients 130 respectively based on the four sets of rendering parameters 123…server 120 may concurrently encode the four rendered images stored in rendering target 200). Urbach and Saleh are combined for the reasons set forth above with respect to claim 16.

Regarding claim 18, Urbach discloses the processing server of claim 15, but does not disclose wherein the first ray tracing tasks corresponding to the ray tracing data comprises determining that the scene of the first stream of frames corresponds to the ray tracing data.

In the same art of ray tracing a scene, Saleh discloses wherein the first ray tracing tasks corresponding to the ray tracing data comprises determining that the scene of the first stream of frames corresponds to the ray tracing data (Par. 0011; utilizing bounding volume hierarchy data copies in memories local to the multiple APDs; rendering primitives determined to be intersected based on the BVH traversal, using geometry information and texture data spread across the memories local to the multiple APDs; and storing results of rendered primitives for a set of tiles assigned to the multiple APDs into tile buffers stored in APD memories local to the APDs).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Saleh’s rendering from defined scene geometry and acceleration data into Urbach’s concurrent multi-stream rendering server. Doing so allows the system to reliably select and validate the correct stored scene data for each client’s render request, yielding the predictable results of improved accuracy and reduced unnecessary computation or mismatches when generating and streaming frames in multi-client environments.

Claims 7-8 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Urbach (US 8803892 B2) in view of Cerny (US 20220083384).

Regarding claim 7, Urbach discloses the processing server of claim 6, but does not disclose wherein the processor is further configured to: cause the first stream of frames to be sent to the first client device prior to finishing generating the second stream of frames.

In the same art of cloud gaming servers, Cerny discloses a multi-client scheduling/ordering mechanism (Par. 0004; cloud gaming server may be configured to provide resources to multiple clients and/or applications…resources may be shared between multiple applications. Par. 0027; includes the execution of a video game at the server to generate game rendered video frames, which are then sent to a client for display. In particular, system 100 is configured for multi-tenancy for real-time applications, and more specifically to sharing of a graphics processing unit (GPU) between multiple applications…Par. 0059-0060; FIG. 4B-2 illustrates GPU resource usage timing when the GPU resource is equally shared between multiple applications by suspending and/or halting commands for an application at the end of one allocation period and saving a state of a corresponding GPU configuration in order to allow for resuming the execution of the commands from a corresponding rendering command buffer at the next allocation period…).

Although the reference does not explicitly mention causing the first stream of frames to be sent to the first client device prior to finishing generating the second stream of frames, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Cerny’s multi-tenant scheduling/ordering mechanism into Urbach’s multi-client rendering server. Urbach already establishes concurrent rendering and video streaming outputs for multiple clients, and applying Cerny’s time-sliced, suspend/resume style scheduling across concurrent render tasks provides the predictable benefits of preventing heavy workloads from monopolizing shared resources and stabilizing GPU efficiency across clients. The particular ordering policy (transmit any stream’s frames as soon as they are “finished” rather than waiting for other renderings to complete, temporarily pause incomplete work, prioritize the stream with a prior delivery or new request, resume work after, etc.) is a routine engineering choice, as a person of ordinary skill in the art would recognize that how GPU/encoder/network resources are allocated across clients directly determines latency and frame pacing, and that maintaining steady frame pacing prevents any single client from monopolizing resources. These are routine scheduling policies in multi-tenant real-time systems, such as cloud-based gaming servers, that yield the predictable results of improved responsiveness and utilization without changing the underlying rendered scene.
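The time-sliced, suspend/resume sharing attributed to Cerny, combined with transmit-as-soon-as-finished ordering, can be sketched as a round-robin scheduler. This is an illustrative toy, not code from the reference: each client's render job yields one frame per allocation period, the scheduler suspends the job after each slice, and a completed frame is transmitted immediately, so one client's stream can be fully sent before another's finishes generating:

```python
# Illustrative round-robin scheduler: generators model per-client render
# jobs that are suspended after each time slice and resumed later.
from collections import deque

def render_job(client_id, n_frames):
    for i in range(n_frames):
        yield f"{client_id}:frame{i}"  # one allocation period of GPU work

def schedule(jobs):
    """Round-robin over jobs; returns the transmit log in send order."""
    queue = deque(jobs.items())
    sent = []
    while queue:
        client, job = queue.popleft()
        try:
            frame = next(job)            # run one time slice, then suspend
            sent.append(frame)           # transmit immediately, before others finish
            queue.append((client, job))  # resume on the next allocation period
        except StopIteration:
            pass                         # job complete; drop it from rotation
    return sent

log = schedule({"A": render_job("A", 2), "B": render_job("B", 3)})
# Client A's two frames are all sent before client B's stream finishes.
```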
Regarding claim 8, Urbach in view of Cerny discloses the processing server of claim 7. Urbach further discloses receiving a request to depict at least a third portion of the scene for a third client device (Col. 10, lines 55-58; the client may notify the server of the program instance and request that the server perform renderings for this program instance), wherein the third client device is different from the first client device and from the second client device (Fig. 1; Clients 130A-130D); and generating, in parallel with generating the first stream of frames (Col. 7, lines 24-51; server 120 may concurrently perform four renderings for the four program instances 131 executing on clients 130 respectively based on the four sets of rendering parameters 123…server 120 may concurrently encode the four rendered images stored in rendering target 200), based on the ray tracing data (Col. 11, lines 24-36; …rendering technique…implement ray tracing algorithms on the server and perform the renderings using ray tracing), at least a portion of a third stream of frames of the scene for communication to the third client device, wherein the third stream of frames is different from the first stream of frames and from the second stream of frames (Col. 7, lines 53-57; each of the rendered images may be encoded as a single frame of a different video stream…The third rendered image may be encoded as a frame of a third video stream and transmitted to client 130C). Urbach does not disclose doing so "subsequent to generating at least a portion of the first stream of frames and prior to causing the first stream of frames to being sent." In the same art of cloud gaming servers, Cerny discloses a multi-client scheduling/ordering mechanism (Par. 0004; cloud gaming server may be configured to provide resources to multiple clients and/or applications…resources may be shared between multiple applications. Par. 0027; includes the execution of a video game at the server to generate game rendered video frames, which are then sent to a client for display. In particular, system 100 is configured for multi-tenancy for real-time applications, and more specifically to sharing of a graphics processing unit (GPU) between multiple applications…Par. 0059-0060; FIG. 4B-2 illustrates GPU resource usage timing when the GPU resource is equally shared between multiple applications by suspending and/or halting commands for an application at the end of one allocation period and saving a state of a corresponding GPU configuration in order to allow for resuming the execution of the commands from a corresponding rendering command buffer at the next allocation period…). Although Cerny does not explicitly disclose "subsequent to generating at least a portion of the first stream of frames and prior to causing the first stream of frames to being sent," Urbach and Cerny are combined for the reasons set forth above with respect to claim 7.

Regarding claim 19, Urbach discloses the processing server of claim 15, but does not disclose wherein the processor is further configured to: subsequent to sending the second stream of frames to the second client device, generate at least a portion of the first stream of frames. In the same art of cloud gaming servers, Cerny discloses a multi-client scheduling/ordering mechanism (Par. 0004; cloud gaming server may be configured to provide resources to multiple clients and/or applications…resources may be shared between multiple applications. Par. 0027; includes the execution of a video game at the server to generate game rendered video frames, which are then sent to a client for display. In particular, system 100 is configured for multi-tenancy for real-time applications, and more specifically to sharing of a graphics processing unit (GPU) between multiple applications…Par. 0059-0060; FIG. 4B-2 illustrates GPU resource usage timing when the GPU resource is equally shared between multiple applications by suspending and/or halting commands for an application at the end of one allocation period and saving a state of a corresponding GPU configuration in order to allow for resuming the execution of the commands from a corresponding rendering command buffer at the next allocation period…). Although Cerny does not explicitly disclose "subsequent to sending the second stream of frames to the second client device, generate at least a portion of the first stream of frames," Urbach and Cerny are combined for the reasons set forth above with respect to claim 7.

Regarding claim 20, Urbach in view of Cerny discloses the processing server of claim 19. Urbach further discloses, in parallel with generating the first stream of frames (Col. 7, lines 24-51; server 120 may concurrently perform four renderings for the four program instances 131 executing on clients 130 respectively based on the four sets of rendering parameters 123…server 120 may concurrently encode the four rendered images stored in rendering target 200), generating at least a portion of a third stream of frames of the scene (Col. 7, lines 54-67; …The third rendered image may be encoded as a frame of a third video stream and transmitted to client 130C…) for communication to the second client device (Col. 4, lines 15-28; server 120 may transmit data to clients 130 in the form of video streams; and clients 130 may each transmit data to server 120…) via the network interface (Fig. 1 and Col. 3, lines 61-67; server 120 is connected with each of clients 130 via separate connections 150. In particular embodiments, connections 150 between server 120 and clients 130 may be network connections via a computer network), wherein the third stream of frames is different from the first stream of frames and from the second stream of frames (Col. 7, lines 53-57; each of the rendered images may be encoded as a single frame of a different video stream…). Urbach does not disclose doing so "subsequent to sending the second stream of frames to the second client device." In the same art of cloud gaming servers, Cerny discloses a multi-client scheduling/ordering mechanism (Par. 0004; cloud gaming server may be configured to provide resources to multiple clients and/or applications…resources may be shared between multiple applications. Par. 0027; includes the execution of a video game at the server to generate game rendered video frames, which are then sent to a client for display. In particular, system 100 is configured for multi-tenancy for real-time applications, and more specifically to sharing of a graphics processing unit (GPU) between multiple applications…Par. 0059-0060; FIG. 4B-2 illustrates GPU resource usage timing when the GPU resource is equally shared between multiple applications by suspending and/or halting commands for an application at the end of one allocation period and saving a state of a corresponding GPU configuration in order to allow for resuming the execution of the commands from a corresponding rendering command buffer at the next allocation period…). Although Cerny does not explicitly disclose "subsequent to sending the second stream of frames to the second client device," Urbach and Cerny are combined for the reasons set forth above with respect to claim 7.

Claims 11 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Urbach (US 8803892 B2) in view of Laine et al., "Megakernels considered harmful: Wavefront path tracing on GPUs," in Proceedings of the 5th High-Performance Graphics Conference, pp. 137-143, 2013, hereinafter "Laine".

Regarding claim 11, Urbach discloses the method of claim 10, but does not disclose batch processing shadow rays for the second stream of frames with shadow rays for the first stream of frames.
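For illustration, the claimed batching — merging shadow rays from two frame streams into a single coherent batch and scattering the per-ray results back to their streams — can be sketched as follows (a hypothetical sketch; the function name, data model, and the `trace` callback standing in for a real occlusion test are assumptions, not taken from the claims or the cited art):

```python
def batch_shadow_rays(streams, trace):
    """Merge shadow rays from several frame streams into one batch.

    `streams` maps a stream id to its list of pending shadow rays; `trace`
    is the occlusion test applied to each ray (stubbed out by the caller in
    this sketch).  All rays are processed as a single coherent batch -- the
    wavefront idea of running one stage over a large set of rays -- and the
    results are then routed back to the stream that produced each ray.
    """
    batch, owners = [], []
    for stream_id, rays in streams.items():
        batch.extend(rays)
        owners.extend([stream_id] * len(rays))
    results = [trace(ray) for ray in batch]   # one "launch" over the batch
    per_stream = {stream_id: [] for stream_id in streams}
    for stream_id, result in zip(owners, results):
        per_stream[stream_id].append(result)
    return per_stream
```

Shadow rays from a first and a second stream are traced in one pass, yet each stream still receives only its own results.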
In the same art of ray tracing, Laine discloses wherein generating the second stream of frames comprises batch processing shadow rays for the second stream of frames with shadow rays for the first stream of frames (Pg. 137, Section 1, Right Column; wavefront path tracer that keeps a large pool of paths alive at all times, which allows executing the ray casts and the material evaluations in coherent chunks over large sets of rays by splitting the path tracer into multiple specialized kernels. Pg. 139, Section 4; The megakernel always processes a batch of paths to completion, and includes path generation, light sampling, ray casters for both extension rays and shadow rays. Pg. 140, Section 4.2; the collected extension and shadow rays are cast using the ray cast kernels). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Laine's wavefront path tracing architecture of batch processing shadow rays into Urbach's multi-client ray-tracing server. In video streams, calculating shadows for every frame can be taxing; batching shadow rays, which are well-suited to queue-based processing, therefore predictably improves GPU efficiency and frame latency in high-concurrency workloads. Organizing work into coherent pools and executing stages over large sets of rays is a known technique and yields the predictable results of improved latency and GPU occupancy when simultaneously producing multiple outputs.

Regarding claim 12, Urbach discloses the method of claim 10, but does not disclose batch processing fifth ray tracing bounces for the second stream of frames with second ray tracing bounces for the first stream of frames. In the same art of ray tracing, Laine discloses wherein generating the second stream of frames comprises batch processing fifth ray tracing bounces for the second stream of frames with second ray tracing bounces for the first stream of frames (Pg. 139, Section 4; eight path segments. Pg. 139, Section 4; The megakernel always processes a batch of paths to completion. Pg. 140, Section 4.2; On each iteration, every path is advanced by one segment…Logic stage…whose task is to advance the path by one segment…Fig. 2. Examiner's note: each segment is a bounce; eight path segments shows computing deep ray paths, and the logic stage is the iterative bounce pipeline). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Laine's wavefront path tracing architecture into Urbach's multi-client ray-tracing server. Urbach's system already executes multiple concurrent renderings on shared GPU resources, and Laine teaches a GPU-friendly scheduling approach for when many rays are simultaneously traced. Therefore, combining Urbach and Laine would yield the predictable results of improved GPU efficiency when multiple client views are rendered, increased SIMD utilization, and reduced overhead in high workloads. Additionally, although Laine is not explicit as to batch processing fifth ray tracing bounces with second ray tracing bounces, selecting which bounce depths to include in the batching process would have been an obvious engineering choice based on profiling and system goals, grouping rays at comparable path stages to maximize coherence and throughput. Whether the eighth or the fifth, the bounce number still corresponds to the same class of "deep path" processing in a path tracer; likewise, whether the first or the second, the bounce still corresponds to the same class of "basic" path. Either choice would preserve the same rendering semantics while simply shifting the optimization to a different path stage based on ray population size, divergence, and cache behavior.
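The depth-grouping choice described above can be sketched as a trivial binning pass (purely illustrative; the function name and the `policy` callback, which models the profiling-driven engineering choice, are assumptions and not from the cited art):

```python
def bin_rays_by_depth(rays, policy):
    """Group in-flight rays from all streams into depth-based batches.

    Each ray is modeled as a (stream_id, bounce_depth) pair.  `policy` maps
    a bounce depth to a batch label, so rays at comparable path stages --
    e.g. "deep" bounces from different streams -- end up processed together,
    regardless of which stream produced them.
    """
    bins = {}
    for stream_id, depth in rays:
        bins.setdefault(policy(depth), []).append((stream_id, depth))
    return bins
```

Under a policy that labels every bounce after the first as "deep", a fifth-bounce ray from one stream lands in the same bin as a second-bounce ray from another; a different threshold would separate them — precisely the tuning knob the rejection characterizes as a routine engineering choice.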
Therefore, the obvious engineering choice of selecting which bounce depths to include in the batching would provide the same predictable performance-driven advantage when combined with Urbach's concurrent rendering server.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Urbach (US 8803892 B2) in view of Aila et al., "Understanding the efficiency of ray traversal on GPUs," in Proceedings of the Conference on High Performance Graphics 2009, pp. 145-149, 2009, hereinafter "Aila".

Regarding claim 14, Urbach discloses the method of claim 10, but does not disclose a same persistent wavefront kernel. In the same art of ray tracing, Aila discloses wherein the first stream of frames and the second stream of frames are generated within a same persistent wavefront kernel (Section 3.3, Par. 2; efficiency of trace() kernels…achieved by launching only enough threads to fill the machine once. These long-running persistent threads can then fetch work from a global pool using an atomic counter until the pool is empty…implementation of persistent threads is given in Appendix A.). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Aila's persistent-kernel work queue model into Urbach's concurrent multi-client ray-tracing server. Enqueuing ray/path work for both streams into the same persistent wavefront kernel keeps the GPU busy by continuously pulling from a shared pool rather than launching separate kernels per task, yielding the predictable results of improved workload efficiency and reduced scheduling overhead.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JENNY NGAN TRAN whose telephone number is (571) 272-6888. The examiner can normally be reached Mon-Thurs 8am-5pm.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JENNY N TRAN/
Examiner, Art Unit 2615

/ALICIA M HARRINGTON/
Supervisory Patent Examiner, Art Unit 2615

Prosecution Timeline

Jun 24, 2024
Application Filed
Feb 07, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12499589
SYSTEMS AND METHODS FOR IMAGE GENERATION VIA DIFFUSION
2y 5m to grant Granted Dec 16, 2025
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

1-2
Expected OA Rounds
20%
Grant Probability
70%
With Interview (+50.0%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 5 resolved cases by this examiner. Grant probability derived from career allow rate.
