Prosecution Insights
Last updated: April 19, 2026
Application No. 18/211,725

Foveated Rendering

Non-Final OA: §103, §112

Filed: Jun 20, 2023
Examiner: CRAWFORD, JACINTA M
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Imagination Technologies Limited
OA Round: 3 (Non-Final)
Grant Probability: 88% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 7m
Grant Probability With Interview: 97%

Examiner Intelligence

Career Allow Rate: 88% — above average (709 granted / 805 resolved; +26.1% vs TC avg)
Interview Lift: +9.2% (moderate), measured across resolved cases with an interview
Typical Timeline: 2y 7m average prosecution; 29 applications currently pending
Career History: 834 total applications across all art units

Statute-Specific Performance

§101: 7.7% (-32.3% vs TC avg)
§102: 5.2% (-34.8% vs TC avg)
§103: 55.1% (+15.1% vs TC avg)
§112: 16.8% (-23.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 805 resolved cases
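The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of how they can be recomputed from the reported numbers (variable names are illustrative; the implied Tech Center averages assume the "vs TC avg" deltas are in percentage points):

```python
# Recompute the dashboard's headline examiner statistics from the
# figures reported above. All names here are illustrative.

granted, resolved = 709, 805

# Career allow rate: share of resolved applications that granted.
allow_rate = granted / resolved            # ~0.8807, reported as 88%

# The TC average is not stated directly; it is implied by the +26.1%
# lift, assuming the delta is in percentage points.
tc_avg = allow_rate - 0.261                # ~0.62

# Statute-specific rates and their reported deltas vs the TC average.
statute_rates = {"101": 0.077, "102": 0.052, "103": 0.551, "112": 0.168}
statute_delta = {"101": -0.323, "102": -0.348, "103": 0.151, "112": -0.232}
implied_tc = {s: statute_rates[s] - statute_delta[s] for s in statute_rates}

print(f"allow rate: {allow_rate:.1%}")     # allow rate: 88.1%
print(f"implied TC allow rate: {tc_avg:.1%}")
```

The same subtraction applied per statute recovers the Tech Center baseline each delta was measured against (e.g. roughly 40% for §103).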

Office Action

Rejection bases: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 9, 2026 has been entered.

Claims 1-20 are pending in this case. Independent claims 1, 18, and 20 have been newly amended. No claims have been newly added or cancelled. This action is made Non-Final.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Independent claims 1, 18, and 20 have been amended to similarly recite, "…rendering logic configured to process graphics data to generate an initial image suitable for display, the initial image comprising pixel values representing an image of the scene…" where generating an initial image suitable for display, e.g. by the rendering logic, lacks support. Rather, the specification explicitly discloses that the updated image outputted from the update logic is suitable for display (page 25, first paragraph).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 2, 4, 11, 14, and 17-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hempel et al. (US 7,973,790), evidenced by ONEPPO et al. (US 2009/0322751).

As to claim 1, Hempel et al. disclose a processing system (e.g. a graphics processing system (not illustrated) including a graphics processing unit (GPU), column 3, lines 54-58) configured to render one or more images of a scene (e.g. to perform the processes as outlined in Figures 1 and 2), the processing system comprising: rendering logic (e.g. rasterizer) configured to process graphics data to generate an initial image, the initial image comprising pixel values representing an image of the scene (e.g. Figure 1, step 110, column 5, lines 66-67, notes generating an image using rasterization after determining which pixels contain polygon vertices that include reflective and/or refractive surfaces at step 100, column 5, lines 62-66; Figure 2, step 210, column 7, line 37, notes each polygon is rasterized after determining which pixels contain polygon vertices that include reflective and/or refractive surfaces at step 200, column 7, lines 35-37); ray tracing logic (e.g. ray tracer) configured to perform ray tracing to determine ray traced data for one or more regions of the initial image (e.g. Figure 1, step 140, column 6, lines 6-9 notes for each foreground polygon vertex determined in step 130 (column 6, lines 3-6), generating a secondary ray from the polygon using the directional vector associated with the polygon vertex; Figure 2, step 250, column 7, lines 37-40, notes if it is determined that the polygon has reflective and/or refractive material at step 220 (column 7, lines 37-39), the polygon is immediately raytraced); and update logic (e.g. shader) configured to update one or more pixel values of the initial image using the determined ray traced data for the one or more regions of the initial image (e.g.
Figure 1, step 150, column 6, lines 9-12 notes a shader program is then invoked to accurately render the remainder of the image, the shader invocation may result in additional recursive secondary ray generation; Figure 2, step 240, column 7, line 40 notes a shader is executed after raytracing is performed, where column 1, lines 43-44 notes a shader program is invoked to compute colour for each point from the interpolated vertex attributes), to thereby determine an updated image to be outputted for display (column 1, lines 45-47 notes each sampled point is written to an array of colour values called the frame buffer, each value in the colour array corresponds to a pixel on the screen, where it is well known in the art that these values are then output from the frame buffer to a display).

As noted above, Hempel et al. describe the method as performed by a graphics processing unit, where it is well known that a GPU typically comprises a graphics pipeline, comprised of various components, similar to that as described; thus it would have been obvious that each step described above is performed by the respective component outlined, yielding predictable results, without changing the scope of the invention.

Additionally, as noted above, Hempel et al. disclose its rasteriser, e.g. rendering logic, generates an initial image, but do not explicitly disclose the initial image is suitable for display. However, Hempel et al. disclose invoking a shader program to compute colours during rasterization, e.g. to generate the image (column 6, lines 18-52). It is also well known in the art that a rasterizer/rasteriser is typically a last stage in a graphics processing pipeline for rendering an image to be output, e.g. for display. For support, ONEPPO et al. disclose rendering logic configured to process graphics data to generate an initial image suitable for display ([0025] notes a "rasterizer" is a component that takes an image made up of high-order primitives, such as lines, points, and triangles, and converts the image into a raster image, e.g. pixels, for output on a video display, the raster image is a bitmap representation of the primitives with color). It would have been obvious to one of ordinary skill in the art at the time of the invention to recognize that the initial image generated by Hempel et al. is suitable for display as taught by ONEPPO et al. as this is a well-known process performed by rasterizers/rasterisers in a graphics processing pipeline, thus further yielding predictable results, without changing the scope of the invention.

As to claim 2, Hempel et al. disclose the rendering logic (e.g. rasterizer) is configured to process the graphics data using a rasterisation technique to generate the initial image (e.g. as noted in claim 1 above, the rasterizer performs rasterization).

As to claim 4, Hempel et al. disclose the initial image is a lower detail image than the updated image (e.g. as noted in claim 1, the initial image is generated via rasterization, where the image is ultimately shaded via shader (or shader program), where column 1, lines 43-44 notes a shader program computes colour for each point from the interpolated vertex attributes, and column 2, lines 32-34 notes shaders are important in high-quality rendering, thus may be considered to produce a higher detail image than the initial image generated via rasterization).

As to claim 11, Hempel et al. disclose the rendering logic (e.g. rasterizer) and the ray tracing logic (e.g. ray tracer) are configured to operate asynchronously (e.g. as noted in claim 1, in each of Figures 1 and 2, rasterization is performed prior to ray tracing, thus may be considered to operate asynchronously).

As to claim 14, Hempel et al. disclose acceleration structure building logic configured to determine an acceleration structure representing the graphics data of geometry in a scene of which an image is to be rendered (column 4, lines 19-27 notes techniques of the present invention may be used in conjunction with any raytracer algorithm, including that described in KD-Tree Acceleration Structures for a GPU Raytracer, which describes building an acceleration structure for scenes with many objects on the GPU (reference provided)).

As to claim 17, Hempel et al. disclose the update logic (e.g. shader) is configured to update the initial image using the determined ray traced data for the one or more regions of the initial image by adding detail to the one or more regions of the initial image (e.g. as noted in claim 1, shading is performed after ray tracing, thus uses the determined ray traced data by adding detail, e.g. as further noted in claim 4, colour and high quality rendering).

As to claim 18, Hempel et al. disclose a method of rendering one or more images of a scene at a processing system (e.g. Figures 1 and 2), the method comprising steps similar to the steps as performed by the processing system of claim 1. Please see the rejection and rationale of claim 1 above.

As to claim 19, Hempel et al. disclose displaying an image based on the updated image (column 1, lines 43-47 notes a shader program is invoked to compute colour for each point from the interpolated vertex attributes, each sampled point is written to an array of colour values called the frame buffer, each value in the colour array corresponds to a pixel on the screen, where it is well known in the art that these values are then output from the frame buffer to a display).

Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hempel et al. (US 7,973,790), evidenced by ONEPPO et al.
(US 2009/0322751), as applied to claim 1 above, and further in view of Artue Lira dos Santos et al., Real-Time Ray Tracing for Augmented Reality, 2012 14th Symposium on Virtual and Augmented Reality, pp. 131-140.

As to claim 3, Hempel et al. do not disclose, but Artue Lira dos Santos et al. disclose the rendering logic is configured to process the graphics data using a ray tracing technique to generate the initial image (e.g. pages 132-133, section III. Ray Tracing Pipeline Applied To Augmented Reality, subsection A. RT2 notes the main difference between the RT2 and standard graphics pipelines (such as OpenGL and Direct3D) resides in the main techniques used, while both OpenGL and Direct3D use rasterization as the core technique for rendering, RT2 uses ray tracing to obtain most of the visual effects). It would have been obvious to one of ordinary skill in the art at the time of the invention to modify Hempel et al.'s rendering logic to use ray tracing techniques as described in Artue Lira dos Santos et al. to achieve more accurate simulation of the real world than that of rasterization techniques, thus enhancing the system (see page 131, Introduction of Artue Lira dos Santos et al.).

Claim(s) 5, 7, 8, and 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hempel et al. (US 7,973,790), evidenced by ONEPPO et al. (US 2009/0322751), as applied to claim 1 above, and further in view of Nilsson, Foveated Real-Time Ray Tracing, pages 1-6 (cited in Information Disclosure Statement (IDS) filed June 20, 2023).

As to claim 5, Hempel et al. do not disclose, but Nilsson discloses region identification logic configured to identify the one or more regions of the initial image (e.g. identifying regions including foveal, parafoveal, and peripheral regions) and gaze tracking logic configured to determine one or more gaze positions for the initial image (e.g. eye tracking to determine observer's gaze), wherein the region identification logic is configured to receive one or more indications of the one or more determined gaze positions, and to identify the one or more regions of the initial image based on the one or more determined gaze positions (page 2, section 3.1 Hardware, paragraph 1 notes Tobii EyeX Devkit Controller, which is a consumer-level corneal-reflection eye tracking device, which may determine the position of the gaze of an observer on a computer screen, pages 2-3, section 3.2 Software, paragraphs 1-5 notes utilizing Tobii C/C++ SDK for retrieval of gaze positional data and communication with the eye tracking device, where the rendering process is subdivided into a number of FOV's (e.g. fovea, parafovea, peripheral, etc.) to achieve varying levels of quality or resolution in the field of vision). It would have been obvious to one of ordinary skill in the art at the time of the invention to modify Hempel et al.'s system including ray tracing logic to further comprise region identification logic and gaze tracking logic as described in Nilsson to accelerate the ray tracing algorithm by employing foveation to reduce graphics processing unit (GPU) workload, thus enhancing the functionality and performance of the graphics system (e.g. page 1, Introduction of Nilsson).

As to claim 7, Hempel et al. modified with Nilsson disclose one of the one or more identified regions of the initial image surrounds one of the one or more determined gaze positions, thereby representing a foveal region (Nilsson, pages 2-3, section 3.2 Software, paragraphs 1-5 notes identifying fovea, parafovea, and peripheral regions).

As to claim 8, Hempel et al. modified with Nilsson disclose a camera pipeline which is configured to: receive image data from a camera which is arranged to capture images of a user looking at a display on which a rendered image is to be displayed; and process the received image data to generate a captured image; wherein the gaze tracking logic is configured to analyse the captured image to determine the gaze position for the initial image (Nilsson, page 2, section 3.1 Hardware, paragraph 1 notes Tobii EyeX Devkit Controller, which is a consumer-level corneal-reflection eye tracking device, which may determine the position of the gaze of an observer on a computer screen, pages 2-3, section 3.2 Software, paragraphs 1-5 notes utilizing Tobii C/C++ SDK for retrieval of gaze positional data and communication with the eye tracking device, and rendering a number of FOV's (e.g. fovea, parafovea, peripheral, etc.) to achieve varying levels of quality or resolution in the field of vision, thus the hardware and software, e.g. pipeline, utilized for gaze tracking).

As to claim 10, Hempel et al. do not disclose, but Nilsson discloses region identification logic configured to identify the one or more regions of the initial image (Nilsson, e.g. identifying regions including foveal, parafoveal, and peripheral regions), wherein the region identification logic is configured to analyse the initial image to determine one or more regions of high frequency, wherein the one or more determined regions of high frequency are the one or more identified regions of the initial image (Nilsson, pages 2-3, section 3.2 Software, paragraphs 1-5 notes rendering a number of FOV's (e.g. fovea, parafovea, peripheral, etc.) to achieve varying levels of quality or resolution in the field of vision, where parafoveal regions may be rendered at a high quality/resolution, where peripheral regions may be rendered at a lower quality/resolution, thus may be considered parafoveal regions are of high frequency).
It would have been obvious to one of ordinary skill in the art at the time of the invention to modify Hempel et al.'s system including ray tracing logic to further comprise region identification logic as described in Nilsson to accelerate the ray tracing algorithm by employing foveation to reduce graphics processing unit (GPU) workload, thus enhancing the functionality and performance of the graphics system (e.g. page 1, Introduction of Nilsson).

Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hempel et al. (US 7,973,790), evidenced by ONEPPO et al. (US 2009/0322751), in view of Nilsson, Foveated Real-Time Ray Tracing, pages 1-6, as applied to claim 5 above, and further in view of D'Amico et al. (US 9,261,959).

As to claim 6, Hempel et al. modified with Nilsson do not disclose, but D'Amico et al. disclose the gaze tracking logic is configured to implement a predictive model to anticipate movements in gaze (column 4, lines 27-45 notes memory 114 may function as a database of information related to gaze direction and/or HMD wearer eye location, where such information may be used by the HMD 100 to anticipate where the wearer will look and determine what images are to be displayed to the wearer). It would have been obvious to one of ordinary skill in the art at the time of the invention to further modify Hempel et al. modified with Nilsson's system including gaze tracking logic with D'Amico et al.'s method of anticipating movements in gaze to ultimately enhance gaze tracking by speeding up processing, thus reducing latencies of the system (column 4, lines 27-45 of D'Amico et al.).

Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hempel et al. (US 7,973,790), evidenced by ONEPPO et al. (US 2009/0322751), in view of Nilsson, Foveated Real-Time Ray Tracing, pages 1-6, as applied to claim 8 above, and further in view of Newcombe et al. (US 2012/0194644).

As to claim 9, Hempel et al. modified with Nilsson disclose the ray tracing logic and the rasterisation logic are implemented on a graphics processing unit (Hempel, column 3, lines 54-58 notes rasterization and raytracing implemented in GPU), but do not disclose, but Newcombe et al. disclose wherein the camera pipeline and the graphics processing unit are implemented as part of a system on chip (SOC) ([0096] notes computing-based device 1404 comprises one or more processors 1400 which may be microprocessors, graphics processing units (GPUs), controllers or any other suitable type of processors for processing computing executable instructions to control the operation of the device in order to provide real-time camera tracking, e.g. a system on chip architecture is used, where processors 1400 may include one or more fixed function blocks which implement a part of the method of real time camera tracking in hardware). It would have been obvious to one of ordinary skill in the art at the time of the invention to further modify Hempel et al. modified with Nilsson's system to implement the camera pipeline and graphics processing unit as part of a system on chip (SoC) as described in Newcombe et al., which is well known to reduce transmission times between components of the SoC as well as reduce the overall size of the device, thus enhancing the system.

Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hempel et al. (US 7,973,790), evidenced by ONEPPO et al. (US 2009/0322751), as applied to claim 1 above, and further in view of Du et al. (US 2009/0096797).

As to claim 12, Hempel et al. disclose the ray tracing logic and the update logic…and…the rendering logic, but do not disclose, but Du et al. disclose the ray tracing logic and the update logic (e.g. fragment shader 314D) are configured to operate at a first rate, and wherein the rendering logic (e.g. rasterizer 314C) is configured to operate at a second rate, wherein the first rate is faster than the second rate (Figure 3, and associated text, e.g. [0032] notes power controller is configured to adjust power and clock input signals to each of the components of pipeline 308 independent of the other components based on status information collected for the components, where the power controller 210 can change (increase or decrease) the power and/or clock frequency to one or more components of the pipeline 308 while leaving the power and/or clock frequency for the other components of the pipeline 308 unchanged, thus considered that each pipeline component may operate at different rates).

NOTE: Although Du et al. do not disclose a ray tracing logic, Hempel et al. explicitly disclose the ray tracing logic as part of the graphics pipeline, thus it would have been obvious that the method may be applied to any graphics pipeline component that may not be explicitly described. It would have been obvious to one of ordinary skill in the art at the time of the invention to modify Hempel et al.'s ray tracing logic, update logic, and rendering logic to operate independently of each other, including operating at different clock frequencies (e.g. rates) based on respective workloads and status information of each component to reduce bottlenecks of the graphics pipeline that may occur when components are slower than others, thus enhancing the performance of the system ([0032] and [0042] of Du et al.).

Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hempel et al. (US 7,973,790), evidenced by ONEPPO et al. (US 2009/0322751), as applied to claim 11 above, and further in view of Li (US 9,240,069).

As to claim 13, Hempel et al.
do not disclose, but Li discloses time warping logic configured to apply an image warping process to the updated image before it is sent for display (column 2, lines 22-45 notes system includes image warper, the image warper provides for low-latency virtual reality display via efficient rerendering of scenes, e.g. the image warper may monitor the pose changes of the user and rerender displayed images based on these pose changes, the rerendering performed may be a rerendering approximation, rather than a full perspective projection from the original 3D scene model). It would have been obvious to one of ordinary skill in the art at the time of the invention to modify Hempel et al.'s system to further comprise time warping logic as described by Li to provide for low-latency virtual reality display via efficient rerendering of scenes, thus further extending the system's capabilities (see column 2, lines 22-45 of Li).

Claim(s) 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hempel et al. (US 7,973,790), evidenced by ONEPPO et al. (US 2009/0322751), as applied to claim 14 above, and further in view of Mejdrich et al. (US 2010/0188396).

As to claim 15, Hempel et al. do not disclose, but Mejdrich et al. disclose the processing system is configured to render a plurality of images representing a sequence of frames, and wherein the acceleration structure building logic is configured to determine the acceleration structure for a current frame by updating the acceleration structure for the preceding frame (Figure 4, [0041] notes rendering image data according to a rate of change in the perspective of a viewer in between frames, more specifically, rendering based on the rate of change of a camera perspective, where a ray tracing operation may include updating an acceleration data structure (ADS) 120 in between frames (e.g. frame-to-frame) in response to a changing vantage point, where [0015] notes the rate of change may be based upon a rate of change associated with preceding frames of the plurality of frames, thus considered to update a preceding frame for a current frame). It would have been obvious to one of ordinary skill in the art at the time of the invention to modify Hempel et al.'s system including ray tracing logic and acceleration structure building logic to determine the acceleration structure as described by Mejdrich et al., as building acceleration structures is well known in the art and a common technique in ray tracing, thus yielding predictable results (see [0005] and [0008]).

Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hempel et al. (US 7,973,790), evidenced by ONEPPO et al. (US 2009/0322751), as applied to claim 1 above, and further in view of Adam Celarek, Merging Ray Tracing and Rasterization in Mixed Reality, Bachelor's Thesis for Bachelor of Science in Media Informatics and Visual Computing, Vienna University of Technology, November 2012, 55 pages.

As to claim 16, Hempel et al. do not disclose, but Adam Celarek discloses the processing system is arranged to be included in a virtual reality system or an augmented reality system (e.g. page 19, Chapter 5 Implementation, first paragraph, notes RESHADE framework provides a working mixed reality program (e.g. augmented reality) based on a traditional Direct3D renderer, which includes rendering rasterized parts of images, merging ray traced and rasterized parts, render pass for creating ray tracing mask, tone mapping, etc.). It would have been obvious to one of ordinary skill in the art at the time of the invention to modify Hempel et al.'s system to be arranged in a mixed reality system (e.g. virtual reality system or an augmented reality system) as described in Adam Celarek as an application for providing enhanced, realistic images for such systems (see page 19, Chapter 5 Implementation of Adam Celarek).

Claim(s) 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hempel et al. (US 7,973,790), evidenced by ONEPPO et al. (US 2009/0322751), in view of Tavenrath (US 8,379,022).

As to claim 20, Hempel et al., evidenced by ONEPPO et al., disclose an integrated circuit that, when processed in an integrated circuit manufacturing system, causes the integrated circuit manufacturing system to manufacture a processing system comprising logic components similar to that of the processing system of claim 1 (Hempel, e.g. graphics processing system (not illustrated) including a graphics processing unit (GPU), column 3, lines 54-58, further including components as described in claim 1). Please see the rejection and rationale of claim 1 above. Hempel et al. differ from the invention defined in claim 20 in that Hempel et al. do not disclose, but Tavenrath discloses a non-transitory computer readable storage medium having stored thereon a computer readable dataset description of an integrated circuit that, when processed in an integrated circuit manufacturing system, causes the integrated circuit manufacturing system to manufacture a processing system comprising logic components (Figure 4, where column 4, line 65 through column 5, line 54 notes computer readable medium as memory for storing instructions for performing the operations of the graphics system, which includes rasterization, ray tracing, and fragment shading).
It would have been obvious to one of ordinary skill in the art at the time of the invention to modify Hempel et al.’s system and method to be implemented in a non-transitory computer readable storage medium as described in Tavenreth as computer readable mediums are well known in the art as part of computer systems for storing instructions (e.g. code and/or programs) which are executed by a processor to perform the method as outlined, thus yielding predictable results. Response to Arguments Applicant's arguments filed February 9, 2026 have been fully considered but they are not persuasive. Applicant amends independent claims 1, 18, and 20 to similarly recite, “…rendering logic configured to process graphics data to generate an initial image suitable for display…” Applicant argues on pages 8-11 of the Amendment filed December 8, 2025 that the prior art of record fails to disclose the limitations of the claims as now amended. More specifically, Applicant argues that the Examiner’s interpretation of the term “pixel value” is incorrect, “…Although block 110 of Hempel is described as representing a step of generating the image using rasterization…it is clear from the rest of the paragraph from column 5, line 62 to column 6, line 12 of Hempel color is not computed for pixels until shader programs are invoked in blocks 140 and 150 of the method of figure 1. As such, it is only after the shader programs have been invoked in blocks 140 and 150 that the image is suitable for display and the pixels include pixel values representing color or greyscale values...For example, it is clear that the image is fully rendered at the end of the flow chart, i.e. after block 150 (e.g. 
column 6 lines 9-11 states “A shader program is then invoked to accurately render the remainder of the image (Block 150)”), and thus is then suitable for display…It is further clear from the method of figure 1 and the disclosure of column 5, lines 55 to 61 of Hempel that, “to cast rays from rasterized surfaces, the world-space position of polygons being rasterized are interpolated from vertex attributes, and if these polygons are reflective or refractive, a per-fragment shader is invoked that implements a ray tracing algorithm for secondary rays instead of the normal surface shader”. In other words, each pixel of an image is only shaded once either by a rasterization shader or by a ray tracing shader. The pixel value, representing the colour or greyscale values for a pixel, is therefore not updated as required by claim 1.” (first through last paragraphs of page 9). In reply, as noted in the rejection above, Hempel discloses generating an image, e.g. “an initial image,” using rasterization, the image comprising pixels (step 110). The pixels containing polygon vertices that include reflective and/or refractive surfaces are then evaluated to determine whether the reflective and/or refractive polygon vertices are in the foreground (step 130). For each such foreground polygon vertex, a secondary ray is generated from the polygon vertex using the directional vector associated with the polygon vertex (step 140). Finally, a shader program is invoked to accurately render the remainder of the image (step 150). Hempel additionally describes its raytracer and rasterization may use the same shader programs to compute consistent colours, but may invoke the shader programs in different ways. For each particular scene, all shader programs for that scene can be transformed to read from the scene data structure, and can be combined with the raytracer framework. 
The combination of the raytracer and the shader results in a single shader program that can both raytrace the scene and shade the intersection points. This master shader program can be used during rasterization to add raytraced effects to particular surfaces (column 6, lines 18-52). This indicates that the shader program may be used during rasterization, e.g. to generate the image as described in step 110, and further by the raytracer, e.g. for identified foreground pixels having reflective and/or refractive polygon vertices as described in step 150. Thus, the pixels described may be considered to have color values. For support, ONEPPO describes a typical rasterizer process, which converts the image into a raster image, e.g. pixels, for output on a video display; the raster image is a bitmap representation of the primitives with color. Therefore, it is believed Hempel still teaches the limitations of the claims as recited.

Applicant further argues on page 10 of the Amendment filed that “…Hempel describes…a second embodiment in which the reflective/refractive determination is performed before rasterization. As such, the Office action’s implication that pixel values must be present in order to deduce whether surfaces are reflective or refractive as part of deducing whether to ray trace is incorrect…In addition, Hempel does not provide a step that updates one or more pixel values of an image as required by claim 1. As discussed above, Hempel provides the shading of pixels to compute the colour in an image by applying ‘a shader’…Hempel describes that the same shader may be applied in different ways for ray traced or rasterised portions of the image. However, there is no disclosure in Hempel that a shader is invoked on the same portion of the image twice…In other words, the shader may be invoked to compute the color of the ray traced portion of the image, and then the shader may be invoked in a different manner on the remainder of the image as described in block 150 of Hempel. As such, the shader is invoked only once for each portion of the image. Therefore, Hempel does not disclose the feature of amended claim 1 of “update logic configured to update one or more pixel values of the initial image using the determined ray traced data for the one or more regions of the initial image, to thereby determine an updated image to be outputted for display”. Hempel does not disclose, at any point, shading the same portion of the image at both the ray tracing step and the shading step…” (first through last paragraphs of page 10).

In reply, as noted in the Examiner’s response to the arguments above, an image, e.g. “initial image,” is generated using rasterization. Hempel further describes invoking a shader program during rasterization and by the raytracer in different ways. Thus, the image generated during rasterization may undergo shading, where the pixels containing reflective and/or refractive polygon vertices in the foreground are further ray traced and shaded to accurately render those pixels. Therefore, it is considered these foreground pixels are “updated” as outlined in the rejection.

Applicant further argues on pages 11-14 regarding the prior PTAB decision on grandparent application 15/372,589. Applicant argues, regarding the Examiner's arguments presented in the Final Office Action mailed October 7, 2025, that the Examiner is incorrect that "the PTAB reversed the prior rejection not because Hempel failed to disclose the claim limitations but because the Examiner “failed to provide sufficient support for the conclusion of obviousness.”" Applicant further argues “...if Hempel did in fact disclose the claim limitations at issue, the PTAB would have affirmed the rejection. The PTAB commonly phrases error in terms of an examiner failing to provide requisite rationale, when the PTAB means that the prior art of record does not disclose what the examiner alleges.
In any event, as stated if the PTAB found that Hempel disclosed the claim limitations the PTAB would have affirmed the rejection or at least would have made a new ground of rejection under 37 CFR 41.50(b)...” Applicant further disagrees with the Examiner's argument that “...the Office Action asserts that the PTAB consistently emphasizes that the decision to reverse the Examiner's rejection...was because the Examiner fails to provide sufficient support for the legal conclusion of obviousness,...NOT because the cited portions were not taught, e.g. as disclosed by Hempel...In particular, the PTAB expressly agreed that “the Examiner's proposed combination would not make technological sense...” (first and second paragraphs of page 12). See additional arguments regarding the prior Examiner's rejection (last paragraph of page 12, continued to page 14).

In reply, the Examiner maintains that throughout the ENTIRETY of the PTAB decision, it is consistently emphasized that the prior Examiner’s combination of both Kim and Hempel to reject the claims of the grandparent application did not make “technological sense.” This means these two references should not have been combined in the way that they were by the prior Examiner. The PTAB decision makes no mention of Hempel, standing alone, failing to teach any aspect of the claims. The PTAB addressed Applicant's prior arguments regarding motivation (e.g. to combine); thus the PTAB “...concur[red] with the Appellant's contentions that one skilled in the art would NOT have been motivated to combine Kim with the teachings of Hempel, as proffered by the Examiner...” (first paragraph of page 10 under “Obviousness”). The Examiner disagrees that the PTAB would have affirmed the prior Examiner's rejection because it still did not make sense to combine these references. Therefore, the Examiner believes the arguments presented in the Final Office Action regarding this decision are valid, and the rejection using Hempel alone as outlined is proper. Please refer to the Examiner’s detailed reply regarding this matter in the Final Office Action mailed October 7, 2025.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACINTA M CRAWFORD, whose telephone number is (571) 270-1539. The examiner can normally be reached 8:30 a.m. to 4:30 p.m.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Y. Poon, can be reached at (571) 272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JACINTA M CRAWFORD/
Primary Examiner, Art Unit 2617
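The pipeline the Office Action attributes to Hempel (steps 110-150) — rasterize an initial image, identify foreground fragments whose polygons are reflective or refractive, cast secondary rays for those fragments, then shade the remainder — can be sketched roughly as below. This is purely an illustrative sketch of the disputed reading, not code from Hempel or the application; the `Fragment` structure, the `trace_secondary_ray` helper, and the shading logic are all hypothetical stand-ins.

```python
# Illustrative sketch of a hybrid rasterize-then-raytrace pass, loosely
# following steps 110-150 as the Office Action describes them.
# All data structures and helpers are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class Fragment:
    x: int
    y: int
    reflective: bool            # surface casts secondary rays?
    foreground: bool            # step 130: only foreground surfaces traced
    base_color: tuple           # color from rasterization (step 110)

def trace_secondary_ray(frag):
    # Hypothetical ray-tracing shader for secondary rays (step 140).
    # A real implementation would intersect scene geometry; here we just
    # tint the rasterized color to show a new value being produced.
    r, g, b = frag.base_color
    return (min(1.0, r + 0.1), g, b)

def render(fragments, width, height):
    image = [[(0.0, 0.0, 0.0)] * width for _ in range(height)]
    # Step 110: rasterization produces the initial image.
    for f in fragments:
        image[f.y][f.x] = f.base_color
    # Steps 130-150: reflective/refractive foreground fragments are
    # re-shaded with ray-traced data, replacing ("updating", on the
    # Examiner's reading) the rasterized pixel values.
    for f in fragments:
        if f.reflective and f.foreground:
            image[f.y][f.x] = trace_secondary_ray(f)
    return image
```

On the Examiner's reading, the second loop overwrites pixel values the first loop produced; on Applicant's reading of Hempel, each pixel is shaded exactly once, so no pixel would receive a computed color in both loops.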

Prosecution Timeline

Jun 20, 2023
Application Filed
Mar 22, 2025
Non-Final Rejection — §103, §112
Jun 27, 2025
Response Filed
Oct 04, 2025
Final Rejection — §103, §112
Dec 08, 2025
Response after Final Action
Feb 09, 2026
Request for Continued Examination
Feb 17, 2026
Response after Non-Final Action
Mar 21, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602734
GRAPHICS PROCESSORS
2y 5m to grant Granted Apr 14, 2026
Patent 12602735
GRAPH DATA CALCULATION METHOD AND APPARATUS
2y 5m to grant Granted Apr 14, 2026
Patent 12602841
HIGH DYNAMIC RANGE VISUALIZATIONS INDICATING RANGES, POINT CURVES, AND PREVIEWS
2y 5m to grant Granted Apr 14, 2026
Patent 12597180
ARTIFICIAL INTELLIGENCE AUGMENTATION OF GEOGRAPHIC DATA LAYERS
2y 5m to grant Granted Apr 07, 2026
Patent 12591946
DETECTING ERROR IN SAFETY-CRITICAL GPU BY MONITORING FOR RESPONSE TO AN INSTRUCTION
2y 5m to grant Granted Mar 31, 2026
Based on the examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
88%
Grant Probability
97%
With Interview (+9.2%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 805 resolved cases by this examiner. Grant probability derived from career allow rate.
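The headline projections follow from the examiner statistics quoted earlier on the page. A back-of-envelope check, assuming (this derivation is an assumption, not the tool's documented methodology) that grant probability is simply the career allow rate (709 granted of 805 resolved) and that the with-interview figure adds the stated +9.2-point lift:

```python
# Assumed derivation of the projection figures from the stats shown above.
granted, resolved = 709, 805
interview_lift = 9.2  # percentage points, per the Examiner Intelligence panel

grant_probability = 100 * granted / resolved          # ~88%
with_interview = grant_probability + interview_lift   # ~97%
```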
