Prosecution Insights
Last updated: April 19, 2026

Application No.: 18/424,594
Title: MULTI-USER MULTI-GPU RENDER SERVER APPARATUS AND METHODS
Status: Non-Final Office Action (§103)
Filed: Jan 26, 2024
Examiner: RICKS, DONNA J
Art Unit: 2618
Tech Center: 2600 — Communications
Assignee: PME IP Pty Ltd.
OA Round: 1 (Non-Final)

Outlook
Grant probability: 77% (Favorable)
Expected OA rounds: 1-2
Expected time to grant: 2y 9m
Grant probability with interview: 86%
Examiner Intelligence

Career allowance rate: 77% (387 granted / 502 resolved), above average (+15.1% vs Tech Center average)
Interview lift: +8.8% (moderate) among resolved cases with an interview
Typical timeline: 2y 9m average prosecution; 30 applications currently pending
Career history: 532 total applications across all art units

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§102: 13.7% (-26.3% vs TC avg)
§103: 58.3% (+18.3% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 502 resolved cases.

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Challen (U.S. Pub. No. 2007/0038939) in view of Stauffer (U.S. Pub. No. 2007/0097133), Nicholas et al. (WO 01/98925 A2), and Boyd et al. (U.S. Pub. No. 2003/0234791).
Re: claims 1 and 9 (which are rejected under the same rationale), Challen teaches:

1. A method for rendering images comprising:

A) executing, on a server digital data processor, a render server; ("Display server 10 may include a processor... configured to execute server program 20... In the system of Fig. 1A, server program 20 is configured to receive drawing requests from client program 40."; Challen, [0068], [0071], Fig. 1A) The display server includes a processor (server digital data processor) that executes a server program (render server), which receives drawing requests. ("Graphics controller G10 may be implemented as an expansion card configured to plug into a bus of display server D100... Graphics controller G10 includes a processing unit 310 configured to execute rendering commands S20a,b and to output corresponding rendered pixel values."; Challen, [0113], [0114], Fig. 11) The graphics controller of the display server includes a processing unit that executes rendering commands and outputs corresponding rendered pixel data.

B) issuing from the server one or more interleaved commands in response to one or more render requests from one or more client digital data processors; ("Server program 20 may be configured to receive drawing requests that define graphics primitives at a high level and to output corresponding pixel-level or bitmap descriptions in the form of data and/or commands that are executable by graphics hardware G30."; Challen, [0073]) In response to receiving drawing requests from clients (client digital data processors), the server program of the display server outputs (issues) bitmap descriptions in the form of commands (commands in response to one or more render requests from one or more client digital data processors) that are executable by graphics hardware. Challen is silent regarding interleaved commands; however, Stauffer teaches this limitation. ("... the kernel driver 101 uses graphics semaphores that cause the graphics hardware to suspend processing of one buffer and switch to processing another buffer, thus interleaving the processing of the command buffers from different clients, and creating multiple inter-dependent linear timelines as illustrated in Fig. 3C."; Stauffer, [0031]-[0033], Fig. 3C) The kernel driver uses graphics semaphores to interleave the processing of command buffers from different clients (one or more interleaved commands).

C) rendering an image with the server digital data processor from a first client digital data processor of the one or more client digital data processors on one or more graphics processing units in response to one or more interleaved commands; ("Display server 10 includes graphics hardware 30 that is configured to receive the graphics information and to output a corresponding video signal to display device 60..."; Challen, [0069]) The display server outputs a video signal (renders an image) in response to graphics information, such as a drawing request, received from a client (a first client digital data processor of the one or more client digital data processors on one or more graphics processing units). Stauffer is combined with Challen such that the interleaved commands of Stauffer are the drawing requests of Challen and are stored in the command buffers of Challen.
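The interleaving Stauffer describes (suspending one client's command buffer and switching to another, so commands from different clients execute in interleaved order) can be sketched, in a simplified cooperative form, as round-robin scheduling over per-client buffers. The function name and `slice_size` parameter below are illustrative, not from the references:

```python
from collections import deque

def interleave_commands(client_buffers, slice_size=2):
    """Round-robin interleave per-client command buffers.

    A simplified stand-in for Stauffer's semaphore-driven switching:
    take up to slice_size commands from one client's buffer, then
    "suspend" it and switch to the next, until all buffers drain.
    """
    queues = [deque(buf) for buf in client_buffers]
    interleaved = []
    while any(queues):
        for q in queues:
            # process a slice of this buffer before switching away
            for _ in range(min(slice_size, len(q))):
                interleaved.append(q.popleft())
    return interleaved
```

With `slice_size=1`, commands from two clients alternate strictly, e.g. `interleave_commands([["a1", "a2", "a3"], ["b1", "b2"]], slice_size=1)` yields `["a1", "b1", "a2", "b2", "a3"]`.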
Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify the method of Challen by adding the features of B) issuing from the server one or more interleaved commands in response to one or more render requests from one or more client digital data processors, and C) rendering an image with the server digital data processor from a first client digital data processor of the one or more client digital data processors on one or more graphics processing units in response to one or more interleaved commands, in order to allow the graphics hardware and the CPU to operate asynchronously, keeping both busy even though they typically operate at different speeds, as taught by Stauffer ([0029]).

Challen and Stauffer are silent regarding where the image is broken down into two or more sub-images; however, Nicholas teaches this limitation. ("Referring again to Fig. 7, at step 745, the assigned primary back end content server 136 and, if necessary, auxiliary back end content server(s) 139 processes the job. The results are aggregated, re-projected to a common projection, and mosaiced together. The results are not only sent to content generator 148 that made the request, it is also partitioned into image tiles and stored into image tile cache 127 for subsequent requests."; Nicholas, p. 33, lines 3-8, Fig. 7) The image is partitioned into image tiles (the image is broken down into two or more sub-images). Nicholas is combined with Challen and Stauffer such that the images of Challen are partitioned using the method of Nicholas.

Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify the method of Challen by adding the feature of where the image is broken down into two or more sub-images, in order to enable the user to quickly and easily visualize any area of the image, as taught by Nicholas (p. 3, lines 11-13).
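Nicholas's tiling (partitioning an image into tiles and later mosaicing them back together) can be sketched as a split/reassemble pair. The function names and the list-of-rows image representation are illustrative assumptions, not from Nicholas:

```python
def partition_tiles(image, tile_h, tile_w):
    """Break a 2-D image (list of rows) into tiles keyed by (row, col)."""
    tiles = {}
    h, w = len(image), len(image[0])
    for r in range(0, h, tile_h):
        for c in range(0, w, tile_w):
            tiles[(r // tile_h, c // tile_w)] = [
                row[c:c + tile_w] for row in image[r:r + tile_h]
            ]
    return tiles

def mosaic_tiles(tiles, tile_h, tile_w, h, w):
    """Reassemble ("mosaic") the tiles back into the full h-by-w image."""
    image = [[None] * w for _ in range(h)]
    for (tr, tc), tile in tiles.items():
        for i, row in enumerate(tile):
            image[tr * tile_h + i][tc * tile_w:tc * tile_w + len(row)] = row
    return image
```

Round-tripping an image through `partition_tiles` and `mosaic_tiles` reproduces it exactly; in Nicholas the tiles are additionally cached so that subsequent requests for any area of the image can be served without re-rendering.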
Challen, Stauffer, and Nicholas are silent regarding where at least one of the two or more sub-images are rendered using a break-down approach for each of the n pixels by m pixels in the at least one of the two or more sub-images are blended over each other in order from a back picture to a front picture, using a formula, where the formula is given by (1 - afront) * cback + afront * cfront, where afront denotes an opacity of the front picture, cfront denotes a first color of the front picture, and cback denotes a second color of the back picture, where the two or more sub-images are rendered; however, Boyd teaches this limitation.

("Unlike opaque objects, transparent objects must generally be depth sorted in back-to-front order to ensure that the underlying colors that are blended with the transparent objects are available when the blending operation is performed. There are several formulae that get used to calculate an object's translucency, but a common formula is: c0 = α*cs + (1-α)*cd, wherein c0 = final color of pixel, α = alpha value (between 0 and 1), cs = color of the transparent pixel (called the source), and cd = color of the occluded pixel (called the destination)."; Boyd, [0034]-[0039])

Transparent objects must be depth sorted in back-to-front order to ensure that underlying colors that are blended with the transparent objects are available when the blending operation is performed (where at least one of two or more sub-images are rendered using a break-down approach for each of the n pixels by m pixels in the at least one of the two or more sub-images are blended over each other from a back picture to a front picture). A formula, such as c0 = α*cs + (1-α)*cd, is used to calculate the object's translucency, where α = alpha value (afront denotes an opacity of the front picture), cs = color of the transparent pixel (cfront denotes a first color of the front picture), and cd = color of the occluded pixel (cback denotes a second color of the back picture, where the two or more sub-images are rendered). Boyd is combined with Challen, Stauffer, and Nicholas such that the image tiles of Nicholas are blended using the equation of Boyd.
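The claimed formula (1 - afront) * cback + afront * cfront is the standard back-to-front "over" composite and is algebraically identical to Boyd's c0 = α*cs + (1-α)*cd with cs = cfront, cd = cback, and α = afront. A minimal per-channel sketch (function names are illustrative, not from the references):

```python
def over(c_front, a_front, c_back):
    """Blend a front color over a back color, per channel:
    (1 - a_front) * c_back + a_front * c_front
    which equals Boyd's c0 = alpha*cs + (1 - alpha)*cd.
    """
    return (1.0 - a_front) * c_back + a_front * c_front

def composite_back_to_front(layers, background=0.0):
    """Blend (color, opacity) layers in back-to-front order, as Boyd's
    depth-sorted transparency blending requires: each layer is applied
    over the accumulated color beneath it."""
    c = background
    for color, alpha in layers:  # first element is the back-most layer
        c = over(color, alpha, c)
    return c
```

For example, a fully opaque back layer of color 0.2 under a front layer of color 0.8 at opacity 0.25 composites to 0.75*0.2 + 0.25*0.8 = 0.35.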
Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify the method of Challen by adding the feature of where at least one of the two or more sub-images are rendered using a break-down approach for each of the n pixels by m pixels in the at least one of the two or more sub-images are blended over each other in order from a back picture to a front picture, using a formula, where the formula is given by (1 - afront) * cback + afront * cfront, where afront denotes an opacity of the front picture, cfront denotes a first color of the front picture, and cback denotes a second color of the back picture, where the two or more sub-images are rendered, in order to ensure that the underlying colors that are blended with the transparent objects are available when the blending operation is performed, as taught by Boyd ([0034]).

Challen, Stauffer, and Boyd are silent regarding D) combining at the server digital data processor the rendered two or more sub-images to generate the image, and E) sending from the server digital data processor the image to the first client digital data processor; however, Nicholas teaches these limitations. ("Referring again to Fig. 7, at step 745, the assigned primary back end content server 136 and, if necessary, auxiliary back end content server(s) 139 processes the job. The results are aggregated, re-projected to a common projection, and mosaiced together. The results are not only sent to content generator 148 that made the request, it is also partitioned into image tiles and stored into image tile cache 127 for subsequent requests... primary and auxiliary back end content servers 136 and 139 transcode the wavelet images into 24 bit JPEG images for delivery to the user"; Nicholas, p. 33, lines 3-10, Fig. 7) The sub-images are mosaiced together (combining at the server digital data processor the rendered two or more sub-images to generate the image), then sent to the content generator that made the request and converted to JPEG for delivery to the user (sending from the server digital data processor the image to the first client digital data processor). Nicholas is combined with Challen and Stauffer such that the images of Challen are the partitioned images of Nicholas, which are combined and sent using the method of Nicholas.

Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify the method of Challen by adding the features of D) combining at the server digital data processor the rendered two or more sub-images to generate the image, and E) sending from the server digital data processor the image to the first client digital data processor, in order to enable the user to quickly and easily visualize any area of the image, as taught by Nicholas (p. 3, lines 11-13).

Re: claim 2, Challen, Stauffer, Nicholas and Boyd teach 2. The method of claim 1, where in step C) the image is loaded from a host memory. ("In this example, a graphics bus B10 carries communications between the memory hub and the graphics controller G10."; Challen, [0113], Fig. 11) Fig. 11 illustrates that the memory hub carries communications between the system memory (host memory), storing image data to be rendered, and the graphics controller (the image is loaded from a host memory).

Re: claim 3, Challen, Stauffer, Nicholas and Boyd teach 3. The method of claim 2, where in step C) the at least one of the two or more sub-images is stored in a graphics memory. ("For example, processing unit 310 may be configured to store the rendered pixel values to display buffers in local or system memory."; Challen, [0114], Figs. 11-12) Fig. 12A illustrates that rendered pixel values are stored to local memory 330 (at least one of the two or more sub-images is stored in a graphics memory).

Re: claim 4, Challen, Stauffer, Nicholas and Boyd teach 4. The method of claim 3, where in step C) the one or more graphics processing units processing the two or more sub-images do not swap data out of the graphics memory into the host memory. ("For example, processing unit 310 may be configured to store the rendered pixel values to display buffers in local or system memory"; Challen, [0114], Figs. 11-12) The processing unit, in the graphics controller, stores rendered pixel values to display buffers (the one or more graphics processing units processing the two or more sub-images) in local memory. ("Fig. 12A shows a block diagram of implementation G20 of graphics controller G10 that includes a local memory 330... In the example of Fig. 12A, processing unit 310 is configured to retrieve rendering commands from local memory 330 and/or to store values of rendered pixels to local memory 330. Information stored in and/or retrieved from local memory 330 may also indicate display configuration parameters such as bit depth and screen size in pixels."; Challen, [0115]) Fig. 12A illustrates that the graphics controller (graphics processing unit) includes a processing unit that retrieves rendering commands from local memory and stores rendered pixel values to local memory. This information, which includes display configuration parameters, is retrieved for display (do not swap data out of the graphics memory into host memory).

Re: claim 5, Challen, Stauffer, Nicholas and Boyd teach 5. The method of claim 1, further comprising maintaining the one or more render requests received from the one or more client digital data processors in one or more queues associated with the server digital data processor.
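The queue maintenance recited in claim 5, together with claim 6's removal of a request once completed, can be sketched as a simple FIFO in the spirit of the circular command queue Challen describes. The class and method names are illustrative, not from Challen:

```python
from collections import deque

class RenderRequestQueue:
    """FIFO queue of render requests: incoming requests are maintained
    in order of receipt (claim 5) and removed once completed (claim 6),
    like a circular command buffer operating first-in, first-out."""

    def __init__(self):
        self._pending = deque()

    def submit(self, request):
        # maintain the incoming render request in the queue
        self._pending.append(request)

    def complete_next(self):
        # remove the oldest request once it has been processed
        return self._pending.popleft()

    def __len__(self):
        return len(self._pending)
```

Because the queue is first-in, first-out, it also embodies the order-of-receipt prioritization the Office Action reads onto claims 7 and 13-14.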
("The device independent (DIX) layer handles communications with clients (via the X protocol) and manages one or more queues of events such as incoming drawing requests and outgoing input events."; Challen, [0131]) Incoming drawing requests (render requests) from clients are stored in queues that are managed (maintaining the one or more render requests received from the one or more client digital data processors in one or more queues associated with the server digital data processor).

Re: claim 6, Challen, Stauffer, Nicholas and Boyd teach 6. The method of claim 1, further comprising removing a request of the one or more render requests once completed. ("For example, a server program 200 may store rendering commands to a command buffer in system memory, with contents of the buffer being copied periodically... to a command buffer in local memory 330. One or both of these buffers may be implemented as a circular queue..."; Challen, [0149]) Rendering commands are stored to a command buffer, such as a circular queue, which operates as a first-in, first-out buffer. ("Graphics controller G10 may be configured to execute rendering commands directly upon reading them or to store the rendering commands to a command buffer in local memory 330."; Challen, [0151]) The graphics controller executes the rendering commands directly upon reading them (removing a request of the one or more render requests once completed).

Re: claim 7, Challen, Stauffer, Nicholas and Boyd teach 7. The method of claim 1, further comprising prioritizing the one or more render requests. ("For example, a server program 200 may store rendering commands to a command buffer in system memory, with contents of the buffer being copied periodically... to a command buffer in local memory 330. One or both of these buffers may be implemented as a circular queue..."; Challen, [0149]) The circular queue is first-in, first-out and prioritizes rendering requests based on the order in which they are received.
(“Delivery of rendering commands 520a,b to graphics controller G10 for execution may occur by any path or procedure suitable for the particular implementation of display server D100... For example, a server program 200 may store rendering commands to a command buffer in system memory, with contents of the buffer being copied periodically... to a command buffer in local memory 330. One or both of these buffers may be implemented as a circular queue.”; Challen, [0149]) The rendering commands are then delivered from the circular queue to the graphics controller based on the order in which they are received (prioritizing the one or more render requests). Re: claim 8, Challen, Stauffer, Nicholas and Boyd teach 8. The method of claim 7, where a prioritizing step includes a prioritization function that takes into account an order in which a request of the one or more render requests was received, resources currently allocated on the one or more graphics processing units, whether the request is an interactive rendering, and the first client digital data processor. (“The device independent (DIX) layer handles communications with clients... and manages one or more queues of events such as incoming drawing requests and outgoing input events.”; Challen, [0131]) Queues storing incoming drawing requests (an order in which a request of the one or more render requests was received) are managed. (“For example, a server program 200 may store rendering commands toa command buffer in system memory, with contents of the buffer being copied periodically... to a command buffer in local memory 330. One or both of these buffers may be implemented as a circular queue...”; Challen, [0149]) The circular queue is first-in, first out and prioritizes rendering requests based on the order they are received. (“For example, different areas of local memory 330 may be reserved as command buffers, with each buffer holding a queue or rendering commands from a different server program 200... 
It may be desirable for a command buffer to occupy a contiguous portion of memory, which may simplify implementation of a circular queue...”; Challen, [0154], Fig. 13) Different areas of the local memory are reserved as command buffers (resources currently allocated on the one or more graphics processing units), with each buffer holding a queue or rendering commands, from different server programs, which receive rendering requests from different clients (the first client digital data processor). (“Fig. 3B shows an example of a system including a dual-head implementation 12D of display server 12 that has an integrated display device 62. The Isona display station is one example of a superior display server that supports connection to a low-resolution display device for dual-head display.”; Challen, [0079], Fig. 3B) Fig. 3B illustrates a dual-head display server that has an integrated display device. The system determines whether the image is low resolution and whether it is to be displayed on the low-resolution integrated display (whether the request is an interactive rendering). Re: claim 10, Challen, Stauffer, Nicholas and Boyd teach 10. The system of claim 9, where the server digital data processor further comprises one or more central processing units, in communications coupling with the render server, the one or more central processing units processing image data in response to plural interleaved commands from the render server. (“As shown in Fig. 11, this implementation includes a central processing unit (CPU) configured to execute the server programs 200a,b...The CPU as shown in Fig. 11 is configured to communicate with a memory hub over a high-speed front side bus (FSB)... In this example, a graphics bus B10 carries communications between the memory hub and the graphics controller G10.”; Challen, [0110], [0113], Fig. 11) Fig. 
11 illustrates that the display server (render server) includes a CPU (one or more central processing units in communications coupling with the render server). The CPU executes server programs and communicates with the graphics bus via the memory hub to process image data. Challen is silent regarding the one or more central processing units processing image data in response to plural interleaved commands from the render server, however, Stauffer teaches this limitation. (“... the kernel driver 101 uses graphics semaphores that cause the graphics hardware to suspend processing of one buffer and switch to processing another buffer, thus interleaving the processing of the command buffers from different clients, and creating multiple inter-dependent linear timelines as illustrated in Fig. 3C.”; Stauffer, [0031]-[0033], Fig. 3C) The kernel driver, which executes on the CPU, uses graphics semaphores to interleave the processing of command buffers from different clients (plural interleaved commands form the render server). Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date, to modify the method of Challen by adding the feature of the central processing units processing image data in response to plural interleaved commands from the render server, in order to allow the graphics hardware and the CPU to operate asynchronously, keeping both busy even though they typically operate at different speeds, as taught by Stauffer, ([0029]). Re: claim 11, Challen, Stauffer, Nicholas and Boyd teach 11. The system of claim 9, where the server digital data processor comprises a host memory, in communications coupling with the render server, where the host memory is adapted to store to be rendered one or more data sets. (“For example, processing unit 310 may be configured to store the rendered pixel values to display buffers in local or system memory.”; Challen, [0114], Fig. 11) Fig. 
11 illustrates that the display server (render server) includes a system memory (host memory, in communications coupling with the render server), where the system memory stores rendered pixel values (the host memory is adapted to store to be rendered one or more data sets). Re: claim 12, Challen, Stauffer, Nicholas and Boyd teach 12. The system of claim 9, where the server digital data processor comprises one or more queues in communications coupling with the render server and with the one or more graphics processing units, and the render server maintaining render requests in the one or more queues. (“The device independent (DIX) layer handles communications with clients (via the X protocol) and manages one or more queues of events such as incoming drawing requests and outgoing input events... For example, a server program 200 may store rendering commands toa command buffer in a system memory, with contents of the buffer being copied periodically... to a command buffer in local memory 330. One or both of these buffers may be implemented as a circular queue...”; Challen, [0131], [0149]) The display server (render server) receives incoming drawing requests (render requests), from clients, which are stored in queues (one or more queues in, for example the system memory or local memory (in communication coupling with the render server) and which are managed (maintaining render requests in one or more queues). Re: claim 13, Challen, Stauffer, Nicholas and Boyd teach 13. The system of claim 12, where the render server prioritizes one or more render requests in the one or more queues. (“For example, a server program 200 may store rendering commands toa command buffer in system memory, with contents of the buffer being copied periodically... to a command buffer in local memory 330. 
One or both of these buffers may be implemented as a circular queue...”; Challen, [0149]) The circular queue is first-in, first out and prioritizes rendering requests based on the order they are received. (“Delivery of rendering commands 520a,b to graphics controller G10 for execution may occur by any path or procedure suitable for the particular implementation of display server D100... For example, a server program 200 may store rendering commands to a command buffer in system memory, with contents of the buffer being copied periodically... to a command buffer in local memory 330. One or both of these buffers may be implemented as a circular queue.”; Challen, [0149]) The rendering commands are then delivered from the circular queue to the graphics controller based on the order in which they are received (prioritizing the one or more render requests in the one or more queues). Re: claim 14, Challen, Stauffer, Nicholas and Boyd teach 14. The system of claim 13, where the render server prioritizes one or more render requests based on at least one of a rendering mode associated with a render request, a client digital data processor associated with a render request, an order of receipt of a render request, and available resources. (“For example, a server program 200 may store rendering commands toa command buffer in system memory, with contents of the buffer being copied periodically... to a command buffer in local memory 330. One or both of these buffers may be implemented as a circular queue...”; Challen, [0149]) The circular queue is first-in, first out and prioritizes rendering requests based on the order they are received (an order of receipt of a render request). (“Delivery of rendering commands 520a,b to graphics controller G10 for execution may occur by any path or procedure suitable for the particular implementation of display server D100... 
For example, a server program 200 may store rendering commands to a command buffer in system memory, with contents of the buffer being copied periodically... to a command buffer in local memory 330. One or both of these buffers may be implemented as a circular queue.”; Challen, [0149]) The rendering commands are then delivered from the circular queue to the graphics controller based on the order in which they are received ((an order of receipt of a render request). Re: claim 15, Challen, Stauffer, Nicholas and Boyd teach 15. The system of claim 9, where the one or more graphics processing unit renders the image at a rendering resolution determined by one or more parameters, including, at least one of a user interaction type, a network speed, and available processing resources. (“A server program 200 may be configured to render more than one image at once (possibly having different resolutions or color depth)... display server D100 is configured to switch its display context from a state corresponding to one of the server programs 200 to a state corresponding to another of the server programs 200 according to a display context switch command. The display context switch command may be issued by a client 40, by an operator of the display server (e.g., via an input device) or by another process such as a faut detection process (e.g., a resource manager...).”; Challen, [0187], [0188]) The server program of the GPU renders more than one image at different resolutions. The display sever switches its display context based on, for example, a command issued by a client (user interaction type), an operator of the display server using an input device (user interaction type) or a fault detection process resource manager (available processing resources). Re: claim 16, Challen, Stauffer, Nicholas and Boyd teach 16. 
The system of claim 15, where the render server monitors at least one of user interaction type, the network speed, and available processing resources, and generates the one or more parameters in response thereto. (“A server program 200 may be configured to render more than one image at once (possibly having different resolutions or color depth)... display server D100 is configured to switch its display context from a state corresponding to one of the server programs 200 to a state corresponding to another of the server programs 200 according to a display context switch command. The display context switch command may be issued by a client 40, by an operator of the display server (e.g., via an input device) or by another process such as a faut detection process (e.g., a resource manager...).”; Challen, [0187], [0188]) The server program of the GPU renders more than one image at different resolutions. The display sever monitors the system and switches its display context based on, for example, a command issued by a client (user interaction type), an operator of the display server using an input device (user interaction type) or a fault detection process resource manager (available processing resources). Re: claim 17, Challen, Stauffer, Nicholas and Boyd teach 17. The system of claim 9, where the render server allocates at least a portion of one or more server digital data processor resources in response to one of the render requests. (“Allocating a region of memory may include writing values indicating the desired location and size of the region to one or more registers of graphics controller G10 and/or processing unit 310. 
Depending on the particular implementation of display server D100, such values may be written according to one or more commands issued by server program 200, operating system 80, and/or graphics controller G10, and regions may be allocated from local memory 330 and/or from system memory.”; Challen, [0179]) The display server allocates a region of memory based on, for example, commands issued by a server program, which include render requests (allocates at least a portion of one or more server digital data processor resources in response to the render requests).

Re: claim 18, Challen, Stauffer, Nicholas and Boyd teach 18. The system of claim 17, where the one or more server digital data processor resources comprise a graphics memory that is coupled to any of the one or more graphics processing units. (“Fig. 12A shows a block diagram of implementation G20 of graphics controller G10 that includes a local memory 330.”; Challen, [0179], Figs. 11-12A) Fig. 11 illustrates that the display server includes a graphics controller. Fig. 12A illustrates that the graphics controller includes a local memory (graphics memory) coupled to a processing unit (graphics processing unit).

Re: claim 19, Challen, Stauffer, Nicholas and Boyd teach 19. The system of claim 18, where the render server allocates, as the one or more server digital data processor resource, the graphics memory having a data set specified by a request. (“Allocating a region of memory may include writing values indicating the desired location and size of the region to one or more registers of graphics controller G10 and/or processing unit 310. Depending on the particular implementation of display server D100, such values may be written according to one or more commands issued by server program 200, operating system 80 and/or graphics controller G10 and regions may be allocated from local memory 330 and/or from system memory.”; Challen, [0179], Figs. 11-12A) Fig. 11 illustrates that the display server includes a graphics controller. The display server allocates a region of memory, such as the local memory (graphics memory), based on commands from the server program, which include rendering requests (having a data set specified by the request).

Re: claim 20, Challen, Stauffer, Nicholas and Boyd teach 20. The system of claim 19, where the render server causes the graphics memory to maintain the data set. (“Allocating a region of memory may include writing values indicating the desired location and size of the region to one or more registers of graphics controller G10 and/or processing unit 310. Depending on the particular implementation of display server D100, such values may be written according to one or more commands issued by server program 200, operating system 80 and/or graphics controller G10 and regions may be allocated from local memory 330 and/or from system memory.”; Challen, [0179], Figs. 11-12A) The graphics controller writes values to local memory (graphics memory) according to commands issued by the server program, which include rendering requests (cause graphics memory to maintain the data set).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONNA J RICKS, whose telephone number is (571) 270-7532. The examiner can normally be reached M-F 7:30am-5pm EST (alternate Fridays off). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona Faulk, can be reached at 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Donna J. Ricks/
Examiner, Art Unit 2618

/DEVONA E FAULK/
Supervisory Patent Examiner, Art Unit 2618
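As background on the mechanism the rejection leans on for the command-ordering limitation: Challen's paragraph [0149] describes rendering commands buffered in a circular queue and delivered to the graphics controller in order of receipt. A minimal sketch of that kind of fixed-capacity circular queue follows; the class and method names, and the capacity, are invented here for illustration and do not come from Challen or the application.

```python
class CircularCommandQueue:
    """Fixed-capacity ring buffer delivering commands in order of receipt (FIFO)."""

    def __init__(self, capacity: int):
        self._buf = [None] * capacity
        self._head = 0   # index of the next command to deliver
        self._tail = 0   # index of the next free slot
        self._size = 0

    def push(self, cmd: str) -> None:
        """Store a rendering command; refuse when the buffer is full."""
        if self._size == len(self._buf):
            raise BufferError("command buffer full")
        self._buf[self._tail] = cmd
        self._tail = (self._tail + 1) % len(self._buf)  # wrap around
        self._size += 1

    def pop(self) -> str:
        """Deliver the oldest buffered command to the graphics controller."""
        if self._size == 0:
            raise BufferError("command buffer empty")
        cmd = self._buf[self._head]
        self._head = (self._head + 1) % len(self._buf)  # wrap around
        self._size -= 1
        return cmd
```

Commands drain in exactly the order they were enqueued even after the indices wrap, which is the "order of receipt" property the claim mapping relies on.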
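Likewise, the claims 15 and 16 mapping turns on a render server choosing a rendering resolution from monitored conditions (user interaction type, network speed, available processing resources). A hedged sketch of that sort of parameter selection is below; the interaction labels, thresholds, and resolutions are all invented for illustration and appear in neither Challen nor the application.

```python
def choose_render_resolution(interaction: str, network_mbps: float,
                             gpu_load: float) -> tuple:
    """Pick a target resolution from monitored conditions (illustrative only)."""
    # Interactive manipulation favours latency; a static view favours quality.
    if interaction in ("rotate", "pan", "zoom"):
        base = (512, 512)
    else:
        base = (1024, 1024)
    # Constrained network or busy GPU: halve the resolution in each dimension.
    if network_mbps < 10 or gpu_load > 0.8:
        return (base[0] // 2, base[1] // 2)
    return base
```

For example, an interactive rotate on a fast link yields the reduced base resolution, while the same interaction over a slow link is halved again.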

Prosecution Timeline

Jan 26, 2024
Application Filed
Dec 22, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602751
SAMPLE DISTRIBUTION-INFORMED DENOISING & RENDERING
2y 5m to grant · Granted Apr 14, 2026
Patent 12592021
GRAPHICS PROCESSING
2y 5m to grant · Granted Mar 31, 2026
Patent 12579726
HIERARCHICAL TILING MECHANISM
2y 5m to grant · Granted Mar 17, 2026
Patent 12573133
Reprojection method of generating reprojected image data, XR projection system, and machine-learning circuit
2y 5m to grant · Granted Mar 10, 2026
Patent 12555281
MANAGING MULTIPLE DATASETS FOR DATA BOUND OBJECTS
2y 5m to grant · Granted Feb 17, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
86%
With Interview (+8.8%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 502 resolved cases by this examiner. Grant probability derived from career allow rate.
