Prosecution Insights
Last updated: April 19, 2026
Application No. 18/827,338

RENDERING METHOD AND ELECTRONIC DEVICE

Non-Final OA: §102, §103, §112

Filed: Sep 06, 2024
Examiner: SAJOUS, WESNER
Art Unit: 2612
Tech Center: 2600 — Communications
Assignee: Huawei Technologies Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 92% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 92% (above average; 1099 granted / 1196 resolved; +29.9% vs TC avg)
Interview Lift: +7.6% (moderate), based on resolved cases with interview
Avg Prosecution: 2y 5m (typical timeline)
Currently Pending: 29
Total Applications: 1225 (career, across all art units)

Statute-Specific Performance

§101: 17.0% (-23.0% vs TC avg)
§103: 33.5% (-6.5% vs TC avg)
§102: 19.1% (-20.9% vs TC avg)
§112: 19.6% (-20.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 1196 resolved cases.

Office Action

Rejections: §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. It is responsive to the submission dated 07/14/2025. Claims 1-18 and 20-21 are presented for examination, of which claims 1, 13, and 20 are independent claims.

Information Disclosure Statement

2. The information disclosure statements (IDSs) submitted on 06/02/2025 and 12/30/2024 are in compliance with the provisions of 37 CFR 1.97 and are being considered by the Examiner.

Claim Rejections - 35 USC § 112

3. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

4. Claims 1-18 and 20-21 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
In claim 1, the limitations reciting “determining, …, rendering input information …, wherein complexity of … input information is different; and performing rendering based on the rendering input information …, wherein content complexity of the rendered image is associated with complexity of the rendering input information” render the claim indefinite because the phrases “complexity of input information” and “content complexity of image” are so vague that the Examiner is left in doubt as to the true meaning of the technical features to which they refer. The term “complexity” is a relative term with no well-recognized meaning in the art, and its meaning may change depending on context or on the person elected to make the determination. As such, it is submitted that the limitations in question do not limit the scope of the claim with regard to what the applicant regards as the claimed invention.

In claim 3, the limitation reciting “preset three-dimensional object models correspond to a same object” renders the claim indefinite because the term “a same object” does not make clear the claimed invention or the technical problem being solved by the additional features of the claim. As such, the limitation fails to limit the claim.

In claim 4, the limitation reciting “the preset three-dimensional object model is a face model, and complexity of the preset three-dimensional object model is determined based on a quantity of faces of the preset three-dimensional object model” renders the claim indefinite because it is unclear how the complexity of the preset three-dimensional object model can be determined based on a quantity of faces of said object model when the preset three-dimensional object model is already known to be a single face model. Accordingly, it is submitted that the limitation in question fails to limit the scope of the claim in view of what the applicant regards as the invention.
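The claim 4 dispute turns on what “complexity of the preset three-dimensional object model is determined based on a quantity of faces” means. One plausible reading, assumed here purely for illustration (it is not established by the record), is that “faces” refers to the polygon faces of a mesh model, so that complexity is simply the mesh's face count:

```python
# Illustrative sketch only: a triangle mesh represented as a list of faces,
# each face a tuple of three vertex indices. The representation and the
# face-count metric are assumptions, not taken from the application.
cube_faces = [
    (0, 1, 2), (0, 2, 3),  # bottom
    (4, 5, 6), (4, 6, 7),  # top
    (0, 1, 5), (0, 5, 4),  # side
    (1, 2, 6), (1, 6, 5),  # side
    (2, 3, 7), (2, 7, 6),  # side
    (3, 0, 4), (3, 4, 7),  # side
]

def mesh_complexity(faces) -> int:
    """Face count as a simple, objective complexity metric for a mesh model."""
    return len(faces)

print(mesh_complexity(cube_faces))  # 12
```

Under this reading the metric is an objective integer, which is the kind of clarification an applicant might offer in response to the §112(b) rejection.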
In claim 5, the limitation reciting “the plurality of preset motion models correspond to a same motion process” renders the claim indefinite because the term “a same motion process” does not make clear the claimed invention or the technical problem being solved by the additional features of the claim. As such, the limitation fails to limit the claim.

In claim 7, the limitation reciting “preset particle models correspond to same particle special effect” renders the claim indefinite because it is unclear what is encompassed by “same particle special effect”. As such, the limitation fails to limit the scope of the claim.

In claim 8, the limitation reciting “complexity of the preset particle model is determined based on a particle behavior parameter of the preset particle model” renders the claim indefinite because the phrase “complexity based on a particle behavior parameter” is so vague that it leaves the reader in doubt as to the meaning of the technical features to which it refers. The term “behavior parameter” is a relative term with no well-recognized meaning in the art, and the meaning of “behavior” may change depending on context and/or the mental state of a person at a given moment. Therefore, the limitation in question fails to limit the scope of the claim in view of what the applicant regards as the invention.

In claim 9, the terms “a same” and “texture complexity” render the claim indefinite because their meanings are ambiguous and vague; as such, they fail to limit the scope of the claim in light of what the applicant regards as the invention.

Independent claim 13 is indefinite for the same reasons as claim 1. Claim 15 is indefinite for the same reason as claim 3. Claim 16 is indefinite for the same reason as claim 4. Claim 17 is indefinite for the same reason as claim 5.
Independent claim 20 is indefinite for the same reasons as claim 1. The claims not specifically cited in this rejection are rejected as being dependent upon their rejected base claims.

Claim Rejections - 35 USC § 102

5. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

6. Claims 1-2, 10-16 and 20-21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Dey et al. (US 20130307847).

Considering claim 1, Dey discloses a rendering method … applied to an electronic device (see abstract and para. 20) and comprising: obtaining channel feedback information (e.g., network condition, delay, or constraint) of an encoded transmission channel (e.g., video data packet delivery), wherein the encoded transmission channel is used to transmit an encoded bitstream corresponding to a rendered image (for example, Dey discloses receiving or obtaining information regarding the network condition 10 and the server utilization 12, while also having knowledge of the current adaptation level 14. The network conditions in step 10 can, for example, be determined based upon both network delay and packet loss as indicators to detect a level of network constraint. A decision 16 based upon network delay and packet loss determines … due to the time needed to transmit a packet through the core network and RF link in the case where the end destination of the video being delivered is a wireless client. See para. 60. See also paras.
19-20); determining, from a plurality of groups of preset rendering input information, rendering input information corresponding to the channel feedback information (for example, Dey discloses that at least one rendering parameter used by the graphics rendering engine is set based upon a level of communication constraint or computation constraint, and that encoding adaptation also responds to bit rate constraints such that rendering is optimized based upon a given bit rate. See para. 20. In addition, Dey discloses a 3D rendering adaptation scheme for Cloud Mobile Gaming, which includes an off-line or preliminary step of identifying rendering settings, and preferably also encoding settings, for different adaptation levels, where each adaptation level represents a certain communication and computation cost and is related to specific rendering and encoding levels. The settings for each adaptation level are preferably optimized according to objective and/or subjective video measures. During operation, a run-time level-selection method automatically adapts the adaptation levels, and thereby the rendering settings, such that the rendering cost will satisfy the communication and computation constraints imposed by fluctuating conditions, such as network bandwidth, server available capacity, or feedback relating to previous video, these being the choices that can be used to identify rendering settings. See para. 26, wherein the plurality of groups of preset rendering input information corresponds to the different preset rendering parameters, such as bit rate constraints, and the communication constraint or computation constraint is adaptable based on feedback about video quality.
Paras. 49-54 and 63-64 of Dey also teach that the plurality of rendering parameters and settings are set in advance and can be stored in a look-up table), wherein complexity of the plurality of groups of preset rendering input information is different (e.g., Dey teaches providing a rendering adaptation technique that can reduce the video content complexity by lowering rendering settings, such that the required bitrate for acceptable video quality will be much less than before, and not solely by adjustment of the encoding rate. The rendering adaptation can also be used to address computation constraints or other conditions, such as feedback about video that has been received, e.g., lags in display, pixelation, and poor quality. See para. 23 and also paras. 26, 51-57 and 64); and performing rendering, based on the rendering input information corresponding to the channel feedback information, to obtain the rendered image, wherein content complexity of the rendered image is associated with complexity of the rendering input information corresponding to the channel feedback information (e.g., Dey discloses performing a joint rendering and encoding adaptation technique to satisfy the communication constraints, network constraints, and preset bit rate transmission constraints so as to render video at an acceptable quality level that provides the best video quality, thus leading to an overall acceptable user experience. See paras. 65-70. See also paras. 19-20, 23-27 and claims 1 and 7-10 of Dey).

As per claim 2, Dey discloses that the rendering input information corresponding to the channel feedback information comprises at least one of the following: a three-dimensional object model, a motion model, a particle model, or a texture. See paras. 29-30, 33-37 and 40.
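The adaptation-level scheme the Office Action attributes to Dey (preset groups of rendering settings of differing complexity stored in a look-up table, with a run-time selection driven by channel feedback) can be sketched as follows. All names, levels, and threshold values here are illustrative assumptions, not Dey's actual implementation or numbers:

```python
# Preset "groups of rendering input information", ordered from most to
# least complex (level 0 = richest rendering). Values are invented.
ADAPTATION_LEVELS = [
    {"mesh_faces": 50000, "particles": 2000, "texture_res": 2048},  # level 0
    {"mesh_faces": 20000, "particles": 800,  "texture_res": 1024},  # level 1
    {"mesh_faces": 5000,  "particles": 200,  "texture_res": 512},   # level 2
]

def select_level(bandwidth_kbps: float, packet_loss: float) -> int:
    """Pick an adaptation level from channel feedback (illustrative thresholds)."""
    if bandwidth_kbps > 4000 and packet_loss < 0.01:
        return 0
    if bandwidth_kbps > 1500 and packet_loss < 0.05:
        return 1
    return 2

def rendering_settings(bandwidth_kbps: float, packet_loss: float) -> dict:
    """Look up the preset rendering group matching the current channel state."""
    return ADAPTATION_LEVELS[select_level(bandwidth_kbps, packet_loss)]

# A congested channel selects the least complex preset group, so the
# rendered image's content complexity tracks the channel feedback.
print(rendering_settings(bandwidth_kbps=900, packet_loss=0.08))
# {'mesh_faces': 5000, 'particles': 200, 'texture_res': 512}
```

The point of the sketch is structural: rendering settings are chosen from preset groups rather than computed continuously, which is the "look-up table" characterization the rejection relies on.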
As per claim 10, Dey discloses determining a channel status change degree based on the channel feedback information and, based on the channel status change degree being greater than a preset change threshold, performing the step of determining, from the plurality of groups of preset rendering input information, the rendering input information corresponding to the channel feedback information. See paras. 60-68.

As per claim 11, Dey discloses that the channel feedback information comprises at least one of the following: a network transmission bandwidth; a network transmission delay; a network jitter value; or a network packet loss rate. See para. 26.

As per claim 12, Dey discloses a graphics processing unit configured to perform the rendering method according to claim 1. See para. 20.

Claim 13 recites features that correspond in scope to the limitations recited in claim 1. As the limitations of claim 1 were found to be anticipated by Dey, it is readily apparent that the applied prior art performs the underlying elements, and the limitations of claim 13 are therefore subject to rejection under the same rationale as claim 1. In addition, Dey discloses a rendering apparatus (see fig. 9) comprising a memory coupled to a processor, the memory storing program instructions to be executed by the processor. See paras. 91-93.

Claim 14 is rejected under the same rationale as claim 2. Claim 15 is rejected under the same rationale as claim 4. Claim 16 is rejected under the same rationale as claim 5.

The subject matter of independent claim 20 corresponds, in terms of a non-transitory computer-readable medium, to that of independent claims 1 and 13, and the rationale set forth above to reject the latter also applies, mutatis mutandis, to the former. Claim 21 is rejected under the same rationale as claim 2.

Claim Rejections - 35 USC § 103

7. The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

8. Claims 3-4 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Dey in view of Hickman et al. (US 9183672).

Regarding claim 3, Dey discloses most of the claimed features of the invention as described above with respect to claim 1, except that Dey fails to particularly teach that the rendered image is obtained based on the three-dimensional object model corresponding to the channel feedback information, which is disclosed by Hickman. Particularly, Hickman discloses generating a rendered 3D object data model using a 3D image viewer with 3D rendering techniques that provides an interactive 3D image that can be manipulated by a user, based on associated customization parameters including a command set for interactive 3D object data models received from input source 102 (see col. 3 lines 27-51), wherein the rendering is performed based on the capability level of a client device according to different rendering types meeting a predetermined threshold, and based on feedback (see col. 12 line 60 to col. 13 line 4 and col. 14 lines 10-53).
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Dey to include generating a rendered image based on the three-dimensional object model corresponding to the channel feedback information, in the same conventional manner as taught by Hickman. The motivation to combine would be to generate a 3D image that may be displayed using a number of different types of interfaces with functionality to enable interaction between a user and the 3D models. See col. 1 lines 45-48.

As per claim 4, Dey fails to teach, but Hickman discloses, that the preset three-dimensional object model is a point cloud model and that the complexity of the preset three-dimensional object model is determined based on a quantity of vertices of the preset three-dimensional object model (see col. 1 lines 20-48 of Hickman in view of col. 12 line 60 to col. 13 line 4 and col. 14 lines 10-53). Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Dey with Hickman in order to generate a 3D image that may be displayed using a number of different types of interfaces with functionality to enable interaction between a user and the 3D models. See col. 1 lines 45-48.

Claim 15 is rejected under the same rationale as claim 3. Claim 16 is rejected under the same rationale as claim 4.

Allowable Subject Matter

9. Claims 5-9 and 17-18 are not rejected over the prior art but stand rejected under 35 U.S.C. § 112(b). As the wording of said dependent claims is indefinite for the reasons discussed above, the technical effect of the subject matter of these claims is indeterminate, and thus it cannot be agreed that a problem is solved by each of these diverging claims.
Thus, in the absence of a problem being solved, it is not at present apparent which part of the application could serve as a basis for a new, allowable claim. A final determination of patentability will be made upon resolution of the above 35 U.S.C. § 112(b) rejections.

Conclusion

10. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Najaf et al. (US 20210209809) discloses a method for point cloud decoding that includes receiving a bitstream and decoding the bitstream into multiple frames that include pixels. Certain pixels of the multiple frames correspond to points of a three-dimensional (3D) point cloud. The multiple frames include a first set of frames that represent locations of the points of the 3D point cloud and a second set of frames that represent attribute information for the points of the 3D point cloud. The method further includes reconstructing the 3D point cloud based on the first set of frames. Additionally, the method includes identifying a first portion of the points of the reconstructed 3D point cloud based at least in part on a property associated with the multiple frames, and modifying a portion of the attribute information, where the portion that is modified corresponds to the first portion of the points.

11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to WESNER SAJOUS, whose telephone number is (571) 272-7791. The examiner can normally be reached M-F, 10:00 to 7:30 (ET). Examiner interviews are available via telephone and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice or to email the Examiner directly at wesner.sajous@uspto.gov.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Said Broome, can be reached at 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov.

To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WESNER SAJOUS/
Primary Examiner, Art Unit 2612
WS 03/07/2026

Prosecution Timeline

Sep 06, 2024: Application Filed
Jul 14, 2025: Response after Non-Final Action
Mar 07, 2026: Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597177: Changing Display Rendering Modes based on Multiple Regions (2y 5m to grant; granted Apr 07, 2026)
Patent 12597185: METHOD, APPARATUS, AND DEVICE FOR PROCESSING IMAGE, AND STORAGE MEDIUM (2y 5m to grant; granted Apr 07, 2026)
Patent 12597203: SIMULATED CONSISTENCY CHECK FOR POINTS OF INTEREST ON THREE-DIMENSIONAL MAPS (2y 5m to grant; granted Apr 07, 2026)
Patent 12589303: Computer-Implemented Methods for Generating Level of Detail Assets for Dynamic Rendering During a Videogame Session (2y 5m to grant; granted Mar 31, 2026)
Patent 12592038: EDITABLE SEMANTIC MAP WITH VIRTUAL CAMERA FOR MOBILE ROBOT LEARNING (2y 5m to grant; granted Mar 31, 2026)
Based on this examiner's 5 most recent grants; studying what changed in these cases can show how to get past this examiner.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 92%
With Interview: 99% (+7.6%)
Median Time to Grant: 2y 5m
PTA Risk: Low

Based on 1196 resolved cases by this examiner. Grant probability derived from career allow rate.
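The headline figures above can be reproduced from the raw counts in the examiner statistics. The tool's exact interview-adjustment model is not stated, so a simple additive lift is assumed here purely as a plausibility check:

```python
# Reproduce the dashboard's rounded figures from the raw counts.
granted, resolved = 1099, 1196          # from the examiner's career history
allow_rate = granted / resolved          # 0.9189...
print(f"Career allow rate: {allow_rate:.1%}")   # Career allow rate: 91.9%

# Assumption: the "with interview" figure is the allow rate plus the
# +7.6% interview lift, capped at 100%.
interview_lift = 0.076
with_interview = min(allow_rate + interview_lift, 1.0)
print(f"With interview: {with_interview:.0%}")  # With interview: 99%
```

The 91.9% rate rounds to the displayed 92%, and the additive-lift assumption lands on the displayed 99%, so the dashboard's numbers are at least internally consistent under this simple model.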
