Prosecution Insights
Last updated: April 19, 2026
Application No. 18/840,681

Rendering Method and Apparatus, Device, and Storage Medium

Status: Non-Final OA (§103)
Filed: Aug 22, 2024
Examiner: HSU, JONI
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Honor Device Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Projected Time to Grant: 2y 9m
Grant Probability With Interview: 95%

Examiner Intelligence

Career Allow Rate: 87%, above average (741 granted / 848 resolved; +25.4% vs TC avg)
Interview Lift: +7.2% across resolved cases with interview (a moderate lift of roughly +7%)
Typical Timeline: 2y 9m average prosecution; 34 applications currently pending
Career History: 882 total applications across all art units

Statute-Specific Performance

§101: 8.4% (-31.6% vs TC avg)
§103: 59.7% (+19.7% vs TC avg)
§102: 11.4% (-28.6% vs TC avg)
§112: 3.1% (-36.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 848 resolved cases.

Office Action

Basis: §103

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on October 25, 2024 and May 23, 2025 were filed after the mailing date of the application on August 22, 2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

The disclosure is objected to because of the following informalities: according to MPEP 608.01(m), present Office practice is to insist that each claim be the object of a sentence starting with "I (or we) claim" or "The invention claimed is" (or the equivalent). Thus, a heading simply stating "CLAIMS" is not sufficient. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4, 12, 15-17, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Yu (CN114820910A) and Armsden (US 20130002671A1).

As per Claim 1, Yu teaches a method comprising: obtaining to-be-rendered data that comprises a to-be-rendered model; and performing ray tracing rendering on the to-be-rendered model (ray tracing rendering result of the content to be rendered, p. 16, 1st paragraph). Yu teaches that each pixel's tracking light number refers to the number of rays passing through that pixel (p. 5, 2nd to last paragraph); thus, the tracking light number is the ray emission density. Yu teaches that the content to be rendered is divided into a first to-be-rendered patch set and a second to-be-rendered patch set, where the first to-be-rendered patch set is rendered with a first tracking light number and the second to-be-rendered patch set is rendered with a second tracking light number, the first tracking light number being higher than the second (p. 6, 5th paragraph). The first to-be-rendered patch set is the high-attention patch set: by establishing a connection between a virtual viewpoint and the high-attention region, the imaging region corresponding to the high-attention region on the virtual viewing plane can be determined. The second to-be-rendered patch set is the low-attention patch set: by establishing a connection between the virtual viewpoint and the low-attention region, the imaging region corresponding to the low-attention region on the virtual viewing plane can be determined (p. 14, 3rd-4th paragraphs). The movable-plane rendering result corresponding to the covered pixel can include the low-attention model rendering result corresponding to the covered pixel (p. 12, 5th paragraph). Fig. 1(d) shows a pixel projection area covering a partial area of patch 6 (p. 7, 2nd to last paragraph), and shows that patch 6, which has the covered pixel, is a different size than patches 1-5, which have no covered pixel. Thus, Yu teaches performing ray tracing rendering based on a first ray emission density when the to-be-rendered model meets a first condition, wherein the first condition comprises a visible range proportion of the to-be-rendered model in a to-be-rendered picture being a first visible range proportion; and performing ray tracing rendering on the to-be-rendered model based on a second ray emission density when the to-be-rendered model meets a second condition, wherein the second condition comprises the visible range proportion of the to-be-rendered model in the to-be-rendered picture being a second visible range proportion, wherein the first visible range proportion is different from the second visible range proportion, and/or the first distance is different from the second distance, and wherein the to-be-rendered picture is drawn through the photographing apparatus based on the to-be-rendered data in a rendering process, and the photographing apparatus is a virtual camera in an electronic device (p. 5, 2nd to last paragraph; p. 6, 5th paragraph; p. 14, 3rd-4th paragraphs; p. 12, 5th paragraph; p. 7, 2nd to last paragraph; Fig. 1(d); p. 16, 1st paragraph). However, Yu does not expressly teach wherein the first condition comprises a distance between the to-be-rendered model and a photographing apparatus being a first distance, and wherein the second condition comprises the distance between the to-be-rendered model and the photographing apparatus being a second distance.
However, Armsden teaches performing ray tracing rendering on the to-be-rendered model based on a first ray emission density when the to-be-rendered model meets a first condition, wherein the first condition comprises a distance between the to-be-rendered model and a photographing apparatus being a first distance; and performing ray tracing rendering on the to-be-rendered model based on a second ray emission density when the to-be-rendered model meets a second condition, wherein the second condition comprises the distance between the to-be-rendered model and the photographing apparatus being a second distance (the ray tracing engine may be biased based on the information provided by the importance map, for example, the ray tracing engine may primarily send rays toward an area of interest, [0037]; the number of rays directed to a particular area may be determined based on distance, [0044]), wherein the to-be-rendered picture is drawn through the photographing apparatus based on the to-be-rendered data in a rendering process, and the photographing apparatus is a virtual camera in an electronic device (a snapshot of a scene using a ray tracing approach may be rendered by calculating the path of rays from a virtual camera 130 through pixels in an image plane and then into the virtual world, [0024]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yu so that the first condition comprises a distance between the to-be-rendered model and a photographing apparatus being a first distance, and the second condition comprises that distance being a second distance, because Armsden suggests that, this way, fewer rays can be used when the distance is larger, making rendering less computationally intensive for areas that are farther away, since the user does not need to see as much detail in those areas [0044].

As per Claim 2, Yu does not expressly teach wherein, when the first visible range proportion is the same as the second visible range proportion, the first ray emission density is less than the second ray emission density if the first distance is greater than the second distance. However, Armsden teaches that the number of rays directed to a particular area may be determined based on size, distance, or the like [0044]. Thus, it would have been obvious to one of ordinary skill in the art that if the size of the first visible range proportion is the same as the second visible range proportion, then the number of rays directed to a particular visible range proportion is determined based on distance, and thus the first ray emission density is less than the second ray emission density if the first distance is greater than the second distance [0044]. This would be obvious for the reasons given in the rejection of Claim 1.

As per Claim 3, Yu teaches wherein, when the first distance is the same as the second distance, the first ray emission density is greater than the second ray emission density if the first visible range proportion is greater than the second visible range proportion (the first visible range proportion is greater since there are no covered pixels; the second visible range proportion is less since there are covered pixels) (p. 5, 2nd to last paragraph; p. 6, 5th paragraph; p. 14, 3rd-4th paragraphs; p. 12, 5th paragraph; p. 7, 2nd to last paragraph; Fig. 1(d)).
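To make the claimed relationship concrete, here is a minimal sketch of how a renderer might map camera distance and visible range proportion to a ray emission density, in the spirit of the Yu/Armsden combination. The function name, thresholds, and scaling constants are illustrative assumptions, not taken from either reference or from the application.

```python
def ray_emission_density(distance: float, visible_proportion: float,
                         base_density: float = 4.0) -> float:
    """Hypothetical mapping from distance and visible proportion to
    rays per pixel. Density falls off with camera distance (per
    Armsden [0044]) and rises with the model's visible range
    proportion in the frame (per Yu). All constants are assumptions.
    """
    if distance <= 0:
        raise ValueError("distance must be positive")
    # Fewer rays for farther models: inverse falloff, clamped to [0.1, 1].
    distance_factor = max(0.1, min(1.0, 1.0 / distance))
    # More rays for models occupying a larger share of the picture.
    proportion_factor = max(0.0, min(1.0, visible_proportion))
    return base_density * distance_factor * proportion_factor

# A nearby, mostly visible model gets a higher density than a distant one.
near = ray_emission_density(distance=2.0, visible_proportion=0.8)
far = ray_emission_density(distance=20.0, visible_proportion=0.8)
assert near > far
```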
As per Claim 4, Yu teaches wherein the ray emission density of the to-be-rendered model is based on a condition met by the to-be-rendered model according to the visible range proportion of the to-be-rendered model in the to-be-rendered picture, wherein the ray emission density is the first ray emission density when the condition met by the to-be-rendered model is the first condition, and wherein the ray emission density is the second ray emission density when the condition met by the to-be-rendered model is the second condition (p. 5, 2nd to last paragraph; p. 6, 5th paragraph; p. 14, 3rd-4th paragraphs; p. 12, 5th paragraph; p. 7, 2nd to last paragraph; Fig. 1(d)). However, Yu does not expressly teach wherein the ray emission density of the to-be-rendered model is based on a condition met according to the distance between the to-be-rendered model and the photographing apparatus. However, Armsden teaches this limitation [0044]. This would be obvious for the reasons given in the rejection of Claim 1.

As per Claim 12, Claim 12 is similar in scope to Claim 1, except that Claim 12 is directed to an electronic device comprising one or more processors and a memory coupled to the one or more processors and configured to store instructions that, when executed by the one or more processors, cause the electronic device to perform the method of Claim 1. Yu teaches such an electronic device (the computing device comprises a processor and a memory; the processor of the computing device is used to execute the instructions stored in the memory of the computing device so that the computing device cluster executes any one method, p. 4, 3rd paragraph). Thus, Claim 12 is rejected under the same rationale as Claim 1.

As per Claims 15-17 and 23, these claims are similar in scope to Claims 2-4 and 12, respectively, and are therefore rejected under the same rationale.

Claims 6-8 and 19-21 are rejected under 35 U.S.C. 103 as being unpatentable over Yu (CN114820910A) and Armsden (US 20130002671A1) in view of Chen (US 5613048A) and Chang (US 20210337136A1).

As per Claim 6, Yu and Armsden are relied upon for the teachings discussed above relative to Claim 1. Yu teaches wherein performing ray tracing rendering on the to-be-rendered model based on the ray emission density comprises performing ray tracing rendering on the to-be-rendered model based on the ray emission density to obtain a ray tracing rendering result image of the to-be-rendered model (p. 16, 1st paragraph; p. 5, 2nd to last paragraph; p. 6, 5th paragraph). However, Yu and Armsden do not teach wherein the ray tracing rendering result image carries a hole pixel, and performing color filling on the hole pixel in the ray tracing rendering result image according to a template value of each pixel of the to-be-rendered model during drawcall.

However, Chen teaches wherein the ray tracing rendering result image carries a hole pixel, and performing color filling on the hole pixel in the ray tracing rendering result image (a ray trace diagram depicts the manner in which holes can be formed in an image, col. 2, lines 63-64; one method to fill the holes is to interpolate the colors of adjacent pixels; to identify the holes, the destination image can first be filled with a predetermined background color, and new colors for the hole pixels can be computed by interpolating the colors of non-background pixels adjacent to the hole pixels, col. 5, lines 26-35). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yu and Armsden so that the ray tracing rendering result image carries a hole pixel and color filling is performed on the hole pixel according to a template value of each pixel of the to-be-rendered model during drawcall, because Chen suggests that, this way, hole pixels can be filled in a manner that best approximates what their colors should be (col. 5, lines 26-35).

However, Yu, Armsden, and Chen do not teach a scene semantic image of the to-be-rendered model, wherein the scene semantic image is for identifying a model to which a pixel belongs. However, Chang teaches such a scene semantic image (a target object identification model may identify whether each pixel in the image belongs to the target object, [0030]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yu, Armsden, and Chen to include a scene semantic image of the to-be-rendered model, because Chang suggests that this is an efficient way to identify a target object that needs to be processed [0030].
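As a rough illustration of the hole-filling step discussed for Claim 6 (and the reference-pixel criterion of Claim 8 below), here is a minimal sketch that fills each hole pixel from a reference pixel sharing the same template (stencil) value. The array layout, the `HOLE` sentinel, and the search order are assumptions for illustration only, not the application's actual method.

```python
HOLE = None  # sentinel: pixel not yet shaded by ray tracing (assumption)

def find_reference(colors, templates, template_value):
    """Return the color of any pixel that already has color data and
    carries the given template value, or None if no such pixel exists."""
    for row_c, row_t in zip(colors, templates):
        for color, tval in zip(row_c, row_t):
            if color is not HOLE and tval == template_value:
                return color
    return None

def fill_holes(colors, templates):
    """Fill hole pixels using a reference pixel with the same template
    (stencil) value written during drawcall, as sketched here.

    colors:    2D list of RGB tuples, or HOLE where no ray was traced
    templates: 2D list of per-pixel template values
    """
    for y, row in enumerate(colors):
        for x, color in enumerate(row):
            if color is HOLE:
                ref = find_reference(colors, templates, templates[y][x])
                if ref is not None:
                    colors[y][x] = ref
    return colors
```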
As per Claim 7, Yu teaches wherein a target pixel on which ray tracing rendering is to be performed in the to-be-rendered model is based on the ray emission density and a preset shaded pixel arrangement, wherein the shaded pixel arrangement indicates position arrangement information of a pixel rendered by using ray tracing (p. 5, 2nd to last paragraph; p. 6, 5th paragraph; in ray tracing, the model of the light path is obtained by tracking the light's interaction with optical surfaces; when used for rendering, the light emitted from the eye is tracked, the light tracking method calculates the arrival at a new position, and a new ray is then generated from that position, p. 5, last paragraph), and wherein performing ray tracing rendering on the to-be-rendered model based on the ray emission density to obtain the ray tracing rendering result image of the to-be-rendered model (p. 5, 2nd to last paragraph; p. 6, 5th paragraph) comprises: performing ray tracing rendering on the target pixel to obtain color data of the target pixel; and outputting the color data to the position at which the target pixel is located to obtain the ray tracing rendering result image of the to-be-rendered model (the color of the pixel is calculated from the color of the light passing through the pixel in the light tracking process; in light tracking, each patch's tracking light number can affect the rendering result, and the larger each patch's tracking light number is, the more accurate the calculation of each pixel's color value can be, p. 5, 2nd to last paragraph).
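For intuition on Claim 7's "preset shaded pixel arrangement", here is a minimal sketch of selecting target pixels from a positional pattern at a given density. The checkerboard-style pattern and the reading of density as a pixel fraction are assumptions for illustration, not the arrangement actually claimed.

```python
def target_pixels(width, height, density):
    """Yield (x, y) positions to ray trace, using a preset arrangement.

    density is the fraction of pixels to shade, in (0, 1]; here the
    arrangement is a simple strided/checkerboard pattern (an assumption,
    not the application's actual arrangement).
    """
    stride = max(1, round(1.0 / density))
    for y in range(height):
        for x in range(width):
            if (x + y) % stride == 0:
                yield (x, y)

# At density 0.5, roughly every other pixel is ray traced; the remaining
# hole pixels can later be filled from reference pixels (sketch above).
picked = list(target_pixels(8, 8, 0.5))
```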
As per Claim 8, Yu and Armsden do not teach wherein performing color filling on the hole pixel in the ray tracing rendering result image according to the template value of each pixel of the to-be-rendered model during drawcall comprises performing, when a current pixel that is sampled is a hole pixel, color filling on the hole pixel by using color data of a reference pixel, wherein the reference pixel is based on the template value of each pixel of the to-be-rendered model during drawcall, wherein the reference pixel has the same template value as the hole pixel on the ray tracing rendering result image, and wherein the reference pixel is a pixel that is on the ray tracing rendering result image and already has color data. However, Chen teaches these limitations (col. 2, lines 63-64; col. 5, lines 26-35). This would be obvious for the reasons given in the rejection of Claim 6. However, Yu, Armsden, and Chen do not teach the scene semantic image of the to-be-rendered model. However, Chang teaches the scene semantic image of the to-be-rendered model [0030]. This would be obvious for the reasons given in the rejection of Claim 6.

As per Claims 19-21, these claims are similar in scope to Claims 6-8, respectively, and are therefore rejected under the same rationale.

Claims 10 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Yu (CN114820910A) and Armsden (US 20130002671A1) in view of Pankratz (see the Prior Art of Record below).

As per Claim 10, Yu and Armsden are relied upon for the teachings discussed above relative to Claim 1. However, Yu and Armsden do not teach wherein performing ray tracing rendering on the to-be-rendered model comprises performing ray tracing rendering on the to-be-rendered model by using a bound Vulkan ray tracing acceleration structure. However, Pankratz teaches this limitation (Vulkan ray tracing, p. 138, 2nd to last paragraph; traversal and intersection takes a ray definition and traverses the acceleration structure to determine ray collisions; an object may be defined with an axis-aligned bounding box, p. 139, 2nd paragraph). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yu and Armsden to perform ray tracing rendering by using a bound Vulkan ray tracing acceleration structure, as suggested by Pankratz, because it is well known in the art that Vulkan ray tracing allows real-time rendering of complex scenes with high-quality graphics.

As per Claim 22, Claim 22 is similar in scope to Claim 10 and is therefore rejected under the same rationale.

Allowable Subject Matter

Claims 5, 9, and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter.

The prior art, taken singly or in combination, does not teach or suggest the combination of all the limitations of Claim 5 with base Claim 1 and intervening Claim 4, and in particular does not teach performing, based on a preset first weight coefficient and second weight coefficient, weighted summation on the visible range proportion of the to-be-rendered model in the to-be-rendered picture and the distance between the to-be-rendered model and the photographing apparatus, to obtain a ray emission density coefficient of the to-be-rendered model, wherein the ray emission density of the to-be-rendered model is based on the ray emission density coefficient of the to-be-rendered model and a preset first relationship that indicates a correspondence between the ray emission density coefficient and the ray emission density. Claim 18 is similar in scope to Claim 5 and therefore also contains allowable subject matter.
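The weighted summation identified as allowable for Claim 5 can be sketched as follows. The distance normalization, weight values, and lookup table below are hypothetical stand-ins for the claim's "preset" coefficients and "preset first relationship", chosen only to show the shape of the computation.

```python
def density_coefficient(visible_proportion, distance,
                        w1=0.7, w2=0.3, max_distance=100.0):
    """Weighted summation of the visible range proportion and a
    normalized, inverted camera distance. w1, w2, and max_distance
    are illustrative assumptions, not the application's presets."""
    nearness = 1.0 - min(distance, max_distance) / max_distance
    return w1 * visible_proportion + w2 * nearness

# Hypothetical "preset first relationship": coefficient thresholds
# mapped to a ray emission density (rays per pixel).
PRESET_RELATIONSHIP = [(0.75, 8.0), (0.5, 4.0), (0.25, 2.0), (0.0, 1.0)]

def emission_density(coefficient):
    """Look up the density for the first threshold the coefficient meets."""
    for threshold, density in PRESET_RELATIONSHIP:
        if coefficient >= threshold:
            return density
    return PRESET_RELATIONSHIP[-1][1]

# Coefficient 0.69 for a model at distance 10 covering 60% of the frame.
density = emission_density(density_coefficient(0.6, distance=10.0))
```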
The prior art, taken singly or in combination, does not teach or suggest the combination of all the limitations of Claim 9 with base Claim 1 and intervening Claims 6 and 8, and in particular does not teach obtaining, according to the scene semantic image of the to-be-rendered model, texture coordinates of the hole pixel and texture coordinates of a first pixel that is on the ray tracing rendering result image and already has color data; and querying a template buffering position based on the texture coordinates of the hole pixel and the texture coordinates of the first pixel, wherein the template buffering position is for storing the template value of each pixel of the to-be-rendered model during drawcall, wherein when a template value of the hole pixel is consistent with a template value of the first pixel, the first pixel is the reference pixel of the hole pixel, and wherein when the template value of the hole pixel is inconsistent with the template value of the first pixel, the method further comprises traversing other pixels that are on the ray tracing rendering result image and already have color data until the reference pixel having the same template value as the hole pixel is identified.

Prior Art of Record

Pankratz, David; "Vulkan Vision: Ray Tracing Workload Characterization using Automatic Graphics Instrumentation"; 2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO), March 2021, pp. 137-148; https://ieeexplore.ieee.org/abstract/document/9370320

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONI HSU, whose telephone number is (571) 272-7785. The examiner can normally be reached M-F, 10am-6:30pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JONI HSU/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Aug 22, 2024: Application Filed
Mar 13, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592028: METHODS AND DEVICES FOR IMMERSING A USER IN AN IMMERSIVE SCENE AND FOR PROCESSING 3D OBJECTS. Granted Mar 31, 2026; 2y 5m to grant.
Patent 12586306: METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR MODELING OBJECT. Granted Mar 24, 2026; 2y 5m to grant.
Patent 12586260: CREATING IMAGE ENHANCEMENT TRAINING DATA PAIRS. Granted Mar 24, 2026; 2y 5m to grant.
Patent 12581168: A METHOD FOR A MEDIA FILE GENERATING AND A METHOD FOR A MEDIA FILE PROCESSING. Granted Mar 17, 2026; 2y 5m to grant.
Patent 12561850: IMAGE GENERATION WITH LEGIBLE SCENE TEXT. Granted Feb 24, 2026; 2y 5m to grant.
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview: 95% (+7.2% lift)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 848 resolved cases by this examiner. Grant probability is derived from the career allow rate (741 granted / 848 resolved ≈ 87%).
