Prosecution Insights
Last updated: April 19, 2026
Application No. 18/323,977

IMAGE PROCESSING METHOD AND RELATED APPARATUS

Status: Non-Final OA (§103)
Filed: May 25, 2023
Examiner: PROVIDENCE, VINCENT ALEXANDER
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Huawei Technologies Co., Ltd.
OA Round: 3 (Non-Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 5m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (above average; 15 granted of 18 resolved; +21.3% vs TC avg)
Interview Lift: +25.0% allowance lift in resolved cases with interview (strong)
Avg Prosecution (typical timeline): 2y 5m; 38 applications currently pending
Total Applications (career history): 56, across all art units

Statute-Specific Performance

§101: 0.9% (-39.1% vs TC avg)
§103: 82.4% (+42.4% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 0.9% (-39.1% vs TC avg)

Based on career data from 18 resolved cases; comparisons are against the estimated Tech Center average.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The Amendment filed December 9th, 2025 has been entered. Claims 1, 3-4, and 8-15 are pending in the application. Claims 2, 5, 6, and 7 are cancelled. Applicant's amendments to claims 1, 14, and 15 have overcome the § 103 rejection based on Blackmon and Powers. Previously cited reference "Uludag" was used for the amended limitations. See Response to Arguments below.

Response to Arguments

The Examiner appreciates the Applicant's thorough consideration of the previous Final Office Action. Applicant's arguments filed December 9th, 2025 have been fully considered but they are not persuasive.

The Applicant argues: "There is simply no disclosure in the cited paragraphs of Uludag of 'when the intersection point includes a corresponding projection pixel on the first image or a third image, updating the color of the target pixel based on a color of the projection pixel'" and that therefore, "independent claim 1 is patentable over Blackmon, Powers and Uludag." The Applicant cites [0020] of the present specification as support: "As discussed in Applicant's Specification as-filed, aspects of amended claim 1 (and amended claims 14-15) obtain the color of the intersection point by reusing the color of the pixel point in the previous frame of image ('third image') or the current frame of image ('first image') to avoid recalculation of the color of the intersection point and reduce the calculation amount. (Specification as-filed, para. [0020])" (emphasis added).

The Examiner respectfully disagrees with the argument, because Uludag teaches: "When computing the reflection information for pixel 1402A (in a previous frame or in the current frame), a ray was cast from pixel 1402A and was identified as intersecting an object at point 1406A [...] instead of spawning multiple rays from pixel 1402B to determine the reflection information, some embodiments may reuse the color information from points 1406A and 1406C if certain conditions are satisfied" ([0112], emphasis added). In other words, Uludag teaches a system where a pixel may be updated based on data re-used from corresponding pixels. Fig. 14A and Fig. 14B of Uludag help showcase this functionality.

The Applicant also highlights that "the third image is a previous frame of image of the second image" in amended Claims 1, 14, and 15. As cited above, Uludag explicitly mentions in paragraph [0112] that the previous frame may be used. Therefore, the Examiner is not convinced that Claims 1, 14, and 15 of the present application are allowable over Blackmon, Powers, and Uludag.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4, 8, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Blackmon (US 20170169602 A1; from applicant's IDS) in view of Powers (US 20180374276 A1) and Uludag (US 20200051311 A1).

Regarding claim 1:

Blackmon teaches:

An image processing method (Blackmon: a method of rendering one or more images at a processing system [0015]), comprising: obtaining to-be-rendered data (Blackmon: In step S310 graphics data (e.g. primitive data) describing scene geometry for an image to be rendered is received at the GPU 208 [0051]); performing rasterization processing on the to-be-rendered data (Blackmon: graphics data for a second region of the image (e.g. a peripheral region) of an image is processed using a second rendering technique (e.g. rasterisation) [0034]) to obtain a first image (Blackmon: a full low-detail image may be generated using a first rendering technique (e.g. rasterisation) [0035]); and performing ray tracing processing (Blackmon: using a first rendering technique (e.g. ray tracing) [0034]) on a target object (Blackmon: first region (e.g. the foveal region) of an image [0034]; Blackmon: regions corresponding to objects with a particular depth in the image may be identified [0073]) in the first image to obtain a second image (Blackmon: a full low-detail image may be generated using a first rendering technique (e.g. rasterisation) and detail may be added to particular regions of the low-detail image using a second rendering technique (e.g. ray tracing) [0035]); wherein: performing the ray tracing processing on the target object in the first image comprises: performing the ray tracing processing on the target object in the first image based on the identifier of the target object, to obtain a ray tracing result (Blackmon: ray tracing logic configured to perform ray tracing to determine ray traced data for the identified one or more regions of the initial image [0014]), and updating a color of the target object in the first image based on the ray tracing result, to obtain the second image (Blackmon: update logic configured to update the initial image using the determined ray traced data for the identified one or more regions of the initial image [0014]); performing the ray tracing processing on the target object in the first image based on the identifier of the target object comprises: determining a target pixel in the first image, wherein the target pixel includes the identifier of the target object, and the target object comprises one or more target pixels (Blackmon: In this document, region is used to mean any subset of pixels, sub-pixels or samples within a buffer or image [0035]; see Note 1A), obtaining a target location (Blackmon: the position of a region of interest (e.g. foveal region, or high-detail region) within the image [0035]) of the target pixel in a three-dimensional scene (see Note 1D), and performing the ray tracing processing on the target object in the first image based on the target location and the identifier, to obtain an intersection point between a ray and the three-dimensional scene (Blackmon: one or more rays may be considered to be "shot" from the viewpoint through each pixel of the image to be rendered and the rays are traced to an intersection with a primitive in the scene [0038]); updating the color of the target object in the first image comprises: updating a color of the target pixel (Blackmon: shading calculations may be performed to compute the colour or light contribution from the identified objects to the appropriate pixels [0036]) based on a color of the intersection point (see Note 1E).

Note 1D: "Shooting" a ray from the viewpoint of each pixel, as taught by Blackmon in [0038], requires obtaining a target location of the target pixel in the three-dimensional scene, as the ray must be cast from a 3D position.

Note 1E: Blackmon teaches: "The execution of a shader program may result in secondary rays which can then be intersection tested and may result in further shader program instances being executed. The shader programs may be used to determine visual effects in the scene such as reflections, shadows, global illumination, etc. [...] The results of the ray tracing are passed to the update logic 712" [0075], i.e., the characteristics of incident lighting are based on the execution of shaders at intersection points of rays. Blackmon further teaches that "the update logic 712 also receives, from the ray tracing logic 710, the ray traced image data to be added to the initial image" [0076]. Additionally, Blackmon teaches in paragraph [0036] that "shading calculations may be performed to compute the colour or light contribution from the identified objects to the appropriate pixels". Therefore, the ray traced image data will be used to update the color of the pixels in the region, based on the intersection points processed during the ray tracing.

Blackmon fails to teach: performing ray tracing processing on a target object in the first image to obtain a second image based on an identifier that identifies an object on which the ray tracing processing is to be performed and further identifies a rendering effect corresponding to the object, wherein the target object includes the identifier.

Blackmon also fails to teach: updating the color of the target pixel comprises: calculating a projection of the intersection point on an image based on a location of the intersection point in the three-dimensional scene; when the intersection point includes a corresponding projection pixel on the first image or a third image, updating the color of the target pixel based on a color of the projection pixel; and when the intersection point does not include the corresponding projection pixel on the first image or the third image, calculating the color of the intersection point, and updating the color of the target pixel based on the color of the intersection point; wherein the third image is a previous frame of image of the second image.
Powers teaches: performing ray tracing processing (Powers: the object rendering instructions 2222 may utilize ray tracing based on the 3D model, [0201]) on a target object in the first image to obtain a second image (Powers: The object selection instructions 1420 may be configured to select an object 1428 [...] to render within a 3D virtual representation of an environment being generated for the user 1404. In some cases, the object selection instructions 1420 may select an object 1428 to render within the 3D virtual representation based at least in part on the objects detected within the image data, [0201]; see Note 1A) based on an identifier that identifies an object on which the ray tracing processing is to be performed (Powers: the spatial interaction system may identify objects within the physical environment form [sic] the 3D model and/or the image data, [0305]; see Note 1B) and further identifies a rendering effect corresponding to the object (Powers: the object rendering instructions 2136 may utilize ray tracing based on the 3D model 2142 (e.g., lighting position, lighting type, information about articles surrounding the rendered object, etc.) and the object data 2150 (e.g., texture, surface shape, surface material, etc.) [0201]; see Note 1C), wherein the target object includes the identifier (Powers: In some cases, the system may identify the object by detecting a bar code, other code, serial number, or other identifier on the object, [0305]; see Note 1B).

Note 1A: In [0201] cited above, Powers teaches that an object may be detected from image data (analogous to a first image) to render as a 3D object (the rendering produces a second image).

Note 1B: Powers teaches: "In some cases, the system may identify the object by detecting a bar code, other code, serial number, or other identifier on the object," [0299]. That is, the spatial interaction system may identify an object to render as a 3D model based on an identifier determined from the code or number included on the object.

Note 1C: Powers teaches: "The system 200 may utilize the position information and the lighting characteristics, color characteristics, occlusion characteristics, texture characteristics to cause the object to be rendered as if the object was present in the physical environment 204. For instance, if the object was reflective, the spatial interaction system 200 may cause a reflection of physical objects in the physical environment to be reflected on the surface of the rendered object," [0070]. That is, Powers identifies an object as reflective (the reflectiveness may be considered analogous to a "rendering effect") and therefore renders the object in the second image corresponding to this effect.

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Powers with Blackmon. Performing ray tracing processing on a target object in the first image to obtain a second image based on an identifier that identifies an object on which the ray tracing processing is to be performed and further identifies a rendering effect corresponding to the object, wherein the target object includes the identifier, as in Powers, would benefit the Blackmon teachings by enabling specific ray tracing functionality depending on the object detected in a first image, ensuring that any given object may be optimized so that the object may be rendered quickly and accurately.
Blackmon in view of Powers fails to teach: updating the color of the target pixel comprises: calculating a projection of the intersection point on an image based on a location of the intersection point in the three-dimensional scene; when the intersection point includes a corresponding projection pixel on the first image or a third image, updating the color of the target pixel based on a color of the projection pixel; and when the intersection point does not include the corresponding projection pixel on the first image or the third image, calculating the color of the intersection point, and updating the color of the target pixel based on the color of the intersection point; wherein the third image is a previous frame of image of the second image.

Uludag teaches: updating the color of the target pixel comprises: calculating a projection of the intersection point on an image based on a location of the intersection point in the three-dimensional scene (Uludag: projecting, for the first frame, a first line from the location in the reflection realm of the intersection of the object by the reflection ray for the first pixel towards a first location of a camera [0008]), when the intersection point includes a corresponding projection pixel on the first image or a third image, updating the color of the target pixel based on a color of the projection pixel (Uludag: When computing the reflection information for pixel 1402A (in a previous frame or in the current frame), a ray was cast from pixel 1402A and was identified as intersecting an object at point 1406A [...] instead of spawning multiple rays from pixel 1402B to determine the reflection information, some embodiments may reuse the color information from points 1406A and 1406C if certain conditions are satisfied [0112]; see Note 1F); and when the intersection point does not include the corresponding projection pixel on the first image or the third image, calculating the color of the intersection point, and updating the color of the target pixel based on the color of the intersection point (Uludag: embodiments of the disclosure provide a system and method where reflections are generated by attempting to perform ray marching, and performing ray tracing if ray marching fails. Another embodiment, described below, provides a system and method to reuse ray marching results of nearby pixels for a given pixel. [0109]); wherein the third image is a previous frame of image of the second image (Uludag: when attempting to determine reflection information for a given pixel (e.g., pixel 1402B), some reflection information from nearby pixels from previous frames may be reused [0112]).

Note 1F: Uludag teaches various conditions for re-using pixel data from nearby or "corresponding" pixels. If any of these conditions fail, as shown in Fig. 15 of Uludag, the system cannot re-use the pixel data. It would be obvious to one of ordinary skill in the art, if there are no nearby or corresponding pixels, that the pixel data cannot be re-used and must be regenerated. Uludag, in [0109] cited above, teaches that when ray marching fails, the system may revert to ray tracing to generate new reflection information for a pixel in an image to be output.

Before the effective filing date of the claimed invention, it would be obvious to one of ordinary skill in the art to combine the teachings of Uludag with Blackmon in view of Powers. Calculating a projection of the intersection point on an image based on a location of the intersection point in the three-dimensional scene; when the intersection point includes a corresponding projection pixel on the first image or a third image, updating the color of the target pixel based on a color of the projection pixel; and when the intersection point does not include the corresponding projection pixel on the first image or the third image, calculating the color of the intersection point, and updating the color of the target pixel based on the color of the intersection point; wherein the third image is a previous frame of image of the second image, as in Uludag, would improve the Blackmon in view of Powers teachings by enabling devices to simulate high quality ray-tracing even with lower processing power: "various other techniques have been developed to render reflections in video games in real-time or near-real time. However, these other techniques suffer from poor quality, particularly when compared to the reflections obtained from full ray tracing" (Uludag, [0004]).
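To see the reuse-or-recompute branch at issue concretely, a minimal Python sketch follows. It is an illustration only, not code from the application or from Uludag; project_to_screen and shade_intersection are hypothetical helpers, and each frame buffer is assumed to be a dict keyed by integer (x, y) pixel coordinates.

    def updated_color(hit_point, camera, first_image, third_image,
                      project_to_screen, shade_intersection):
        """Reuse-or-recompute branch of amended claim 1 (illustrative).

        first_image is the current frame; third_image is the previous frame.
        """
        # Project the 3D intersection point onto the image plane.
        uv = project_to_screen(hit_point, camera)  # (x, y), or None if off-screen
        if uv is not None:
            # A corresponding projection pixel exists: reuse its color and
            # skip shading the intersection point again.
            for frame in (first_image, third_image):
                if uv in frame:
                    return frame[uv]
        # No reusable projection pixel: compute the intersection color directly.
        return shade_intersection(hit_point)

The cost saving is the point of the limitation: the expensive shading fallback runs only when neither frame already holds a usable color, which is the conditional reuse the rejection reads onto Uludag [0112].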
Regarding claim 4:

Blackmon in view of Powers teaches:

The image processing method according to claim 1 (as shown above), wherein performing the ray tracing processing on the target object in the first image comprises: obtaining a location of the target object in the first image in a three-dimensional scene (Blackmon: the position of a region of interest (e.g. foveal region, or high-detail region) within the image [0035]; Blackmon: the graphics data provided to a graphics processing system may describe geometry within a three dimensional (3D) scene to be rendered [0001]); performing the ray tracing processing based on the location of the target object in the three-dimensional scene (Blackmon: In some examples, the different rendering techniques may be used in different ways across different parts of the image based on the position of a region of interest (e.g. foveal region, or high-detail region) within the image [0035]), to obtain a ray tracing result; and updating a color of the target object (Blackmon: shading calculations may be performed to compute the colour or light contribution from the identified objects to the appropriate pixels [0036]) in the first image based on the ray tracing result, to obtain the second image (Blackmon: update logic configured to update the initial image using the determined ray traced data for the identified one or more regions of the initial image, to thereby determine an updated image to be outputted for display [0014]).

Claims 5, 6, and 7 are cancelled.
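The Blackmon mapping for claims 1 and 4 amounts to a hybrid pipeline: rasterize the whole scene once, then ray trace only the pixels carrying a target identifier. A hedged Python sketch of that flow, with rasterize, trace_ray, and shade standing in for the real stages:

    from collections import namedtuple

    Pixel = namedtuple("Pixel", ["color", "object_id"])

    def render_frame(scene, camera, target_ids, rasterize, trace_ray, shade):
        """Illustrative hybrid pass: rasterization first, selective ray tracing second."""
        first_image = rasterize(scene, camera)      # {(x, y): Pixel}, the "first image"
        second_image = dict(first_image)
        for xy, px in first_image.items():
            if px.object_id in target_ids:          # pixel belongs to a target object
                hit = trace_ray(camera, xy, scene)  # intersection with the 3D scene, or None
                if hit is not None:
                    # Only target pixels are updated; all others keep their
                    # rasterized color, yielding the "second image".
                    second_image[xy] = Pixel(shade(hit), px.object_id)
        return second_image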
Regarding claim 8:

Blackmon in view of Powers and Uludag teaches:

The image processing method according to claim 1 (as shown above), wherein performing the ray tracing processing on the target object in the first image based on the target location and the identifier, comprises: obtaining an acceleration structure, wherein the acceleration structure is obtained based on the three-dimensional scene (Blackmon: The acceleration structure building logic 706 is configured to determine an acceleration structure representing the graphics data of geometry in a scene of which an image is to be rendered [0074]); and performing the ray tracing processing on the target object in the first image based on the target location (Blackmon: one or more rays may be considered to be "shot" from the viewpoint through each pixel of the image to be rendered and the rays are traced to an intersection with a primitive in the scene [0038]) and the identifier (Blackmon: The identified regions are regions for which ray traced data is to be computed [0073]; see Note 1 above), by using the acceleration structure (Blackmon: Methods for performing ray tracing are known in the art and typically includes performing intersection testing of rays against the geometry in the scene, as represented by the acceleration structure, and then executing shader programs on intersection hits [0075]), to obtain the intersection point between the ray and the three-dimensional scene.
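For claim 8, the acceleration structure is what keeps per-ray intersection testing tractable. The sketch below uses the simplest possible stand-in, a flat list of axis-aligned bounding boxes each holding primitives, rather than the hierarchical structure a production renderer would build; the slab test is the standard ray/AABB check.

    def ray_hits_box(origin, direction, box):
        """Slab test: True if the ray enters the axis-aligned box (lo, hi)."""
        lo, hi = box
        tmin, tmax = 0.0, float("inf")
        for o, d, l, h in zip(origin, direction, lo, hi):
            if abs(d) < 1e-12:          # ray parallel to this slab pair
                if o < l or o > h:
                    return False
            else:
                t1, t2 = (l - o) / d, (h - o) / d
                tmin = max(tmin, min(t1, t2))
                tmax = min(tmax, max(t1, t2))
        return tmin <= tmax

    def first_intersection(origin, direction, accel, intersect_primitive):
        """Run exact primitive tests only inside boxes the ray actually enters."""
        best = None
        for box, primitives in accel:   # accel is derived from the 3D scene
            if ray_hits_box(origin, direction, box):
                for prim in primitives:
                    t = intersect_primitive(origin, direction, prim)  # distance or None
                    if t is not None and (best is None or t < best):
                        best = t
        return best                     # nearest intersection, or None for a miss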
Regarding claim 14:

Claim 14 is substantially similar to claim 1, and is therefore rejected for similar reasons. Claim 14 has the following notable difference: claim 14 is a device claim instead of a method claim. Blackmon teaches an electronic device:

An electronic device (Blackmon: A processor, computer, or computer system may be any kind of device, machine or dedicated circuit, or collection or portion thereof, with processing capability such that it can execute instructions [0117]), comprising: a processor (Blackmon: one or more processors executing code [0115]); and a memory storing a program code, which when executed by the processor, cause the electronic device to perform operations (Blackmon: The algorithms and methods described herein could be performed by one or more processors executing code that causes the processor(s) to perform the algorithms/methods [0115]), the operations comprising:

Regarding claim 15:

Claim 15 is substantially similar to claim 1, and is therefore rejected for similar reasons. Claim 15 has the following notable difference: claim 15 is a computer-readable medium (CRM) claim instead of a method claim. Blackmon teaches a non-transitory computer-readable storage medium:

A non-transitory computer-readable storage medium having computer-readable instructions stored therein, which when executed by a computer, cause the computer to perform operations (Blackmon: There is provided a non-transitory computer readable storage medium having stored thereon a computer readable description of an integrated circuit that, when processed in an integrated circuit manufacturing system, causes the system to manufacture a processing system [0013]), the operations comprising:

Claim 2 is cancelled.

Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Blackmon (US 20170169602 A1; from applicant's IDS) in view of Powers (US 20180374276 A1), Uludag (US 20200051311 A1) and Peterson (US 20120001912 A1).

Regarding claim 3:

Blackmon in view of Powers, Uludag, and Peterson teaches:

The image processing method according to claim 1 (as shown above), wherein the rendering effect (see Note 1C above) comprises reflection, refraction, shadow, or caustic (Peterson: By further example, if a surface of an object were rough, not smooth, then a shader for that object may issue rays to model a diffuse reflection on that surface [0007]; see Note 1B above and Note 3A).

Note 3A: Peterson further teaches that the shader can be used to identify and perform different types of ray tracing processing: "Shaders may emit rays to test whether an intersection point is shadowed by another object for known light sources in the scene. Shaders also can model complex materials characteristics, such as subsurface scattering for skin, reflection, refraction, and so on," [0030].

Before the effective filing date of the claimed invention, it would be obvious to one of ordinary skill in the art to combine the teachings of Peterson with Blackmon in view of Powers and Uludag. Having the ray tracing processing comprise reflection, refraction, shadow, or caustic, as in Peterson, would improve the Blackmon in view of Powers and Uludag teachings by enabling many realistic effects to be displayed on the object, increasing the sense of immersion for the user.

Regarding claim 13:

Blackmon in view of Powers and Uludag teaches:

The image processing method according to claim 1 (as shown above), wherein the to-be-rendered data comprises the target object and a material parameter of the target object; and the method further comprises:

Blackmon in view of Powers and Uludag fails to teach: determining the identifier of the target object based on the material parameter of the target object.

Peterson teaches: determining the identifier of the target object based on the material parameter of the target object (Peterson: For example, if the primitive is part of a mirror, then a reflection ray is issued to determine whether light is hitting the intersected point from a luminaire, or in more complicated situations, subsurface reflection, and scattering can be modeled, which may cause issuance of different rays to be intersected tested. By further example, if a surface of an object were rough, not smooth, then a shader for that object may issue rays to model a diffuse reflection on that surface [0007]; see Note 1B and Note 3A above).

Before the effective filing date of the claimed invention, it would be obvious to one of ordinary skill in the art to combine the teachings of Peterson with Blackmon in view of Powers and Uludag. Determining the identifier of the target object based on the material parameter of the target object, as in Peterson, would improve the Blackmon in view of Powers and Uludag teachings by ensuring that only the rays that need to be calculated for a given object are processed.
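Claim 13's limitation, deriving the identifier from a material parameter, reduces in spirit to a lookup from material to the effect the identifier encodes, in line with Peterson's material-driven shaders. A toy Python sketch; the table and field names are invented for illustration:

    # Hypothetical mapping from a material parameter to the rendering effect
    # encoded by the identifier (cf. claim 3's reflection, refraction,
    # shadow, and caustic effects).
    EFFECT_BY_MATERIAL = {
        "mirror": "reflection",
        "glass": "refraction",
        "opaque": "shadow",
        "water": "caustic",
    }

    def identifier_for(obj):
        """Return (object id, effect) when the material warrants ray tracing."""
        effect = EFFECT_BY_MATERIAL.get(obj["material"])
        return None if effect is None else (obj["id"], effect)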
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Blackmon (US 20170169602 A1; from applicant's IDS) in view of Powers (US 20180374276 A1), Uludag (US 20200051311 A1) and Walewski (NPL: Heuristic based real-time hybrid rendering with the use of rasterization and ray tracing method, from applicant's IDS).

Blackmon in view of Powers and Uludag teaches:

The image processing method according to claim 1 (as shown above), wherein performing the rasterization processing on the to-be-rendered data comprises:

Blackmon in view of Powers and Uludag fails to teach: performing illumination-free rendering on the to-be-rendered data to obtain a fourth image; obtaining, based on attribute information of the to-be-rendered data, a geometry buffer corresponding to a pixel in the fourth image, wherein the geometry buffer is used to store an attribute parameter corresponding to the pixel; and performing illumination calculation on the pixel in the fourth image based on the geometry buffer, to obtain the first image.

Walewski teaches: performing illumination-free rendering on the to-be-rendered data to obtain a fourth image (Walewski: To perform calculations based on objects' materials, deferred rendering fills G-Buffer structure with appropriate data, which is then used in the lighting calculation pass, Pg. 6, par. 1; Walewski: Figure 2, see Note 9A); obtaining, based on attribute information of the to-be-rendered data, a geometry buffer corresponding to a pixel in the fourth image, wherein the geometry buffer is used to store an attribute parameter corresponding to the pixel (Walewski: Per-pixel data that is stored in G-Buffer contains: position, normal, emission color, ambient color, diffuse color, specular color, optical properties, heuristic mask, depth buffer, Pg. 6, par. 1); and performing illumination calculation (Walewski: In this stage, ray tracing method is responsible for generating secondary effects for objects that have been selected basing on heuristic decisions. Those effects includes shadows, reflections and refractions, Pg. 10, Section 3.3, par. 1) on the pixel (Walewski: For each pixel for image size, so called "ray generation program" is launched, Pg. 12, par. 1) in the fourth image based on the geometry buffer (Walewski: As an input, ray tracing receives both data accumulated in G-Buffer during previous stages and scene object's data using structures provided by OptiX, Pg. 12, par. 1), to obtain the first image (Walewski: To correctly blend images, four mentioned textures are being passed: lighting (L), shadows (S), reflection (M), refraction (R), and two additional textures from G-Buffer: optical properties and heuristic mask, Pg. 12, Section 3.4, par. 1).

Note 9A: Figure 2 depicts "An example of data generated in G-Bufer" [sic], in which no illumination calculations have been performed (as opposed to the images in Figure 3).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Walewski with Blackmon in view of Powers and Uludag. Performing illumination-free rendering on the to-be-rendered data to obtain a fourth image; obtaining, based on attribute information of the to-be-rendered data, a geometry buffer corresponding to a pixel in the fourth image, wherein the geometry buffer is used to store an attribute parameter corresponding to the pixel; and performing illumination calculation on the pixel in the fourth image based on the geometry buffer, to obtain the first image, as in Walewski, would benefit the Blackmon in view of Powers and Uludag teachings by ensuring that lighting calculations are performed separate from rasterization, allowing the ray tracing system to defer the illumination rendering until the data has been sufficiently processed such that the lighting calculations can be performed efficiently.
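Deferred shading of the kind mapped to Walewski is a two-pass scheme. The following is a schematic Python reading of it, not Walewski's code: raster_attributes and light_pixel are assumed helpers, and the G-buffer is a plain dict of per-pixel attribute records.

    def deferred_render(scene, camera, raster_attributes, light_pixel):
        """Pass 1 renders with no illumination and fills the G-buffer (the
        "fourth image"); pass 2 lights each pixel from the stored attributes
        to produce the lit "first image"."""
        gbuffer = {}                                  # {(x, y): attribute record}
        for xy, attrs in raster_attributes(scene, camera):
            gbuffer[xy] = attrs                       # e.g. position, normal, diffuse color

        fourth_image = {xy: a["diffuse"] for xy, a in gbuffer.items()}   # unlit
        first_image = {xy: light_pixel(a, scene["lights"])               # lit
                       for xy, a in gbuffer.items()}
        return fourth_image, first_image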
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Blackmon (US 20170169602 A1; from applicant's IDS) in view of Powers (US 20180374276 A1), Uludag (US 20200051311 A1), Walewski (NPL: Heuristic based real-time hybrid rendering with the use of rasterization and ray tracing method, from applicant's IDS), and Guenter (NPL: Foveated 3D Graphics).

Blackmon in view of Powers, Uludag and Walewski teaches:

The image processing method according to claim 9 (as shown above), wherein obtaining the geometry buffer corresponding to the pixel in the fourth image comprises: when a to-be-rendered object in the fourth image is the target object (Blackmon: first region (e.g. the foveal region) of an image [0034]; Blackmon: regions corresponding to objects with a particular depth in the image may be identified [0073]), generating, based on attribute information (Walewski: To perform calculations based on objects' materials, deferred rendering fills G-Buffer structure with appropriate data, which is then used in the lighting calculation pass, Pg. 6, par. 1; Walewski: Figure 2, see Note 9A) of the to-be-rendered object and a first resolution (Blackmon: The pixel density of the rendered image may be higher in the region of interest than in other regions, e.g. in the periphery. In examples described herein the rasterisation logic renders the whole of the rasterisation region at the same resolution [0109]), a first geometry buffer corresponding to the to-be-rendered object (Blackmon: The image buffer 814 of the combining logic 814 can store image values for each sample position (or each pixel position) of an image [0091]).

Blackmon in view of Powers, Uludag, and Walewski fails to teach: when the to-be-rendered object in the fourth image is located in a surrounding area of the target object, generating, based on the attribute information of the to-be-rendered object and a second resolution, a second geometry buffer corresponding to the to-be-rendered object; and when the to-be-rendered object in the fourth image is located in a background area, generating, based on the attribute information of the to-be-rendered object and a third resolution, a third geometry buffer corresponding to the to-be-rendered object; wherein the to-be-rendered data comprises the to-be-rendered object, the first resolution is greater than the second resolution, the second resolution is greater than the third resolution, and the first geometry buffer, the second geometry buffer, and the third geometry buffer are used to store a color attribute parameter.

Guenter teaches: when a to-be-rendered object in the fourth image encompasses (Guenter: (red border = inner layer), Pg. 1, Figure 1) the target object (Guenter: the tracked gaze point (pink dot), Pg. 1, Figure 1), generating, based on attribute information of the to-be-rendered object and a first resolution, a first geometry buffer corresponding to the to-be-rendered object (see Note 10A); when the to-be-rendered object in the fourth image is located in a surrounding area (Guenter: green = middle layer, Pg. 1, Figure 1) of the target object, generating, based on the attribute information of the to-be-rendered object and a second resolution, a second geometry buffer corresponding to the to-be-rendered object (see Note 10A); and when the to-be-rendered object in the fourth image is located in a background area (Guenter: blue = outer layer, Pg. 1, Figure 1), generating, based on the attribute information of the to-be-rendered object and a third resolution, a third geometry buffer corresponding to the to-be-rendered object (see Note 10A); wherein the to-be-rendered data comprises the to-be-rendered object (Guenter: Graphical content for the study involved a moving camera through a static 3D scene, composed of a terrain, a grid of various objects positioned above it, Pg. 6, Section 5, par. 6), the first resolution is greater than the second resolution, the second resolution is greater than the third resolution (Guenter: The two peripheral layers cover a progressively larger angular diameter but are rendered at progressively lower resolution and coarser LOD, Pg. 2, par. 1), and the first geometry buffer, the second geometry buffer, and the third geometry buffer are used to store a color attribute parameter (see Note 10B).

Note 10A: Guenter teaches: "Our system exploits foveation on existing graphics hardware by rendering three nested and overlapping render targets". That is, multiple separate geometry buffers (in this case, three, though Guenter later teaches a method that allows for more, Pg. 5, Section 4.3) are created to execute the foveated graphics process. It has previously been shown above (claim 1) that the target object is analogous to the region taught by Blackmon. Similarly, Guenter teaches a set of "eccentricity layers" that define regions surrounding the gaze point (namely, the inner, middle, and outer layers) where the to-be-rendered object is to be rendered. Guenter further teaches: "The objects range from diffuse to glossy and were rendered with various types of procedural shaders, including texture and environment mapping"; i.e., each object has data associated with it that defines the attributes of how a given object is rendered.

Note 10B: Figure 1 shows that each of the three eccentricity layers at least stores a diffuse parameter of the objects in the scene. (There is no ray-traced shading in the scene, akin to the display of the diffuse G-buffer showcased in Walewski, Pg. 5, Fig. 2(e).)

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Guenter with Blackmon in view of Powers, Uludag and Walewski. When a to-be-rendered object in the fourth image encompasses the target object, generating, based on attribute information of the to-be-rendered object and a first resolution, a first geometry buffer corresponding to the to-be-rendered object; when the to-be-rendered object in the fourth image is located in a surrounding area of the target object, generating, based on the attribute information of the to-be-rendered object and a second resolution, a second geometry buffer corresponding to the to-be-rendered object; and when the to-be-rendered object in the fourth image is located in a background area, generating, based on the attribute information of the to-be-rendered object and a third resolution, a third geometry buffer corresponding to the to-be-rendered object; wherein the to-be-rendered data comprises the to-be-rendered object, the first resolution is greater than the second resolution, the second resolution is greater than the third resolution, and the first geometry buffer, the second geometry buffer, and the third geometry buffer are used to store a color attribute parameter, as in Guenter, would benefit the Blackmon in view of Powers, Uludag and Walewski teachings by ensuring that less computing power is spent on regions of the image that are not occupied by a target object.
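The Guenter mapping turns on nested layers rendered at falling resolution around the target. A hedged Python sketch of the layer selection and per-layer geometry buffers; the radii, density values, and fill_gbuffer helper are placeholders, not Guenter's parameters:

    RESOLUTION = {"inner": 1.0, "middle": 0.5, "outer": 0.25}  # relative pixel density

    def layer_for(obj_xy, target_xy, inner_r, middle_r):
        """Classify an object by its distance from the target (gaze) point."""
        dx, dy = obj_xy[0] - target_xy[0], obj_xy[1] - target_xy[1]
        dist = (dx * dx + dy * dy) ** 0.5
        if dist <= inner_r:
            return "inner"                  # first (highest) resolution
        return "middle" if dist <= middle_r else "outer"

    def gbuffers_by_layer(objects, target_xy, inner_r, middle_r, fill_gbuffer):
        """One geometry buffer per layer, each filled at its own resolution;
        all three store the same color attribute (e.g. diffuse)."""
        buffers = {"inner": {}, "middle": {}, "outer": {}}
        for obj in objects:
            layer = layer_for(obj["xy"], target_xy, inner_r, middle_r)
            buffers[layer].update(fill_gbuffer(obj, RESOLUTION[layer]))
        return buffers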
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Blackmon (US 20170169602 A1; from applicant's IDS) in view of Powers (US 20180374276 A1), Uludag (US 20200051311 A1) and Zhang (CN 109510940 A).

Blackmon in view of Powers and Uludag teaches:

The image processing method according to claim 1 (as shown above), wherein obtaining the to-be-rendered data comprises:

Blackmon in view of Powers and Uludag fails to teach: obtaining three-dimensional scene data and a fifth image sent by a server, wherein the fifth image is a rendered background image.

Zhang teaches: obtaining three-dimensional scene data (Zhang: according to the target scene information, Pg. 6, par. 13) and a fifth image sent by a server, wherein the fifth image is a rendered (see Note 12A) background image (Zhang: the terminal device can according to the target scene information, obtaining the matched background image from the cloud server, Pg. 6, par. 13).

Note 12A: Zhang teaches: "the terminal device can advance through three-dimensional reconstruction method acquiring three-dimensional model (three-dimensional model comprises a background model and foreground model), and the three-dimensional model stored in the cloud server (i.e., cloud) together with geographic location information", Pg. 6, par. 5; i.e., the background model is stored in the cloud server for later retrieval of an image rendered from the background model.

Before the effective filing date of the claimed invention, it would be obvious to one of ordinary skill in the art to combine the teachings of Zhang with Blackmon in view of Powers and Uludag. Obtaining three-dimensional scene data and a fifth image sent by a server, wherein the fifth image is a rendered background image, as in Zhang, would improve the Blackmon in view of Powers and Uludag teachings by reducing the workload on the ray tracing system by offloading the work to a separate remote system.

Allowable Subject Matter

Claim 11 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter.

Blackmon in view of Powers, Uludag, Walewski and Guenter teaches: The image processing method according to claim 10 (as shown above), wherein the obtaining the geometry buffer corresponding to the pixel in the fourth image further comprises: generating, based on the attribute information of the to-be-rendered object and a fourth resolution, a fourth geometry buffer corresponding to the to-be-rendered object, and the fourth resolution is less than the first resolution. This is because Guenter teaches that n buffers can be used instead of the 3 shown in Figure 1: "We outline the general procedure for n layers. Denote a layer by L_i, where i = 1 indexes the inner layer and i = n indexes the outermost, with corresponding angular radius e_i, and sampling factor (pixel size) s_i ≥ 1 represented as a multiple of the (unit) pixel size of the native display", Section 4.3, par. 2.

However, Blackmon in view of Powers, Uludag, Walewski and Guenter fails to teach the limitation: "wherein an attribute parameter stored by the fourth geometry buffer is not the color attribute parameter".

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VINCENT ALEXANDER PROVIDENCE, whose telephone number is (571) 270-5765. The examiner can normally be reached Monday-Thursday, 8:30-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, King Poon, can be reached at (571) 270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/VINCENT ALEXANDER PROVIDENCE/
Examiner, Art Unit 2617

/KING Y POON/
Supervisory Patent Examiner, Art Unit 2617

Prosecution Timeline

May 25, 2023 — Application Filed
Jun 30, 2023 — Response after Non-Final Action
Mar 27, 2025 — Non-Final Rejection (§103)
Jun 27, 2025 — Response Filed
Aug 06, 2025 — Final Rejection (§103)
Nov 04, 2025 — Response after Non-Final Action
Dec 09, 2025 — Request for Continued Examination
Jan 06, 2026 — Response after Non-Final Action
Jan 28, 2026 — Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586303 — GEOMETRY-AWARE THREE-DIMENSIONAL SYNTHESIS IN ALL ANGLES
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12530847 — IMAGE GENERATION FROM TEXT AND 3D OBJECT
Granted Jan 20, 2026 (2y 5m to grant)

Patent 12530808 — Predictive Encoding/Decoding Method and Apparatus for Azimuth Information of Point Cloud
Granted Jan 20, 2026 (2y 5m to grant)

Patent 12524946 — METHOD FOR GENERATING FIREWORK VISUAL EFFECT, ELECTRONIC DEVICE, AND STORAGE MEDIUM
Granted Jan 13, 2026 (2y 5m to grant)

Patent 12380621 — COMPUTER-IMPLEMENTED SYSTEMS AND METHODS FOR GENERATING ENHANCED MOTION DATA AND RENDERING OBJECTS
Granted Aug 05, 2025 (2y 5m to grant)
Study what changed in these cases to get past this examiner; the list covers the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 99% (+25.0%)
Median Time to Grant: 2y 5m
PTA Risk: High

Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
