Prosecution Insights
Last updated: April 19, 2026
Application No. 18/577,489

METHOD FOR TRAINING LIGHT FILLING MODEL, IMAGE PROCESSING METHOD, AND RELATED DEVICE THEREOF

Status: Non-Final OA (§102)
Filed: Jan 08, 2024
Examiner: VO, QUANG N
Art Unit: 2683
Tech Center: 2600 (Communications)
Assignee: Honor Device Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 80%

Examiner Intelligence

Career Allow Rate: 72% (439 granted / 612 resolved); +9.7% vs TC average (above average)
Interview Lift: +8.3% among resolved cases with interview (moderate lift)
Avg Prosecution: 2y 9m typical timeline; 23 applications currently pending
Career History: 635 total applications across all art units
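The headline numbers are simple arithmetic over the examiner's resolved cases. A minimal check in Python, using only the figures shown above; treating the interview lift as additive percentage points is an assumption, not something the dashboard states:

```python
# Career allow rate: share of this examiner's resolved cases that granted.
granted, resolved = 439, 612
allow_rate = granted / resolved        # 0.7173... -> displayed as 72%

# Interview lift is reported as +8.3 points; assumed additive to the base rate.
with_interview = allow_rate + 0.083    # 0.8003... -> displayed as 80%

print(f"allow rate: {allow_rate:.1%}, with interview: {with_interview:.1%}")
```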

Statute-Specific Performance

Statute | Rate  | vs TC Avg
§101    | 13.4% | -26.6%
§103    | 52.8% | +12.8%
§102    | 22.1% | -17.9%
§112    | 7.6%  | -32.4%
TC average values are estimates. Based on career data from 612 resolved cases.
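As a sanity check on how to read the "vs TC Avg" column, subtracting each delta from the examiner's rate recovers the baseline it was measured against. This derivation is done here, not reported by the dashboard:

```python
# Implied baseline = examiner rate minus the reported delta (percentage points).
examiner_rate = {"§101": 13.4, "§103": 52.8, "§102": 22.1, "§112": 7.6}
delta_vs_tc   = {"§101": -26.6, "§103": 12.8, "§102": -17.9, "§112": -32.4}

for statute in examiner_rate:
    baseline = examiner_rate[statute] - delta_vs_tc[statute]
    print(f"{statute}: implied TC baseline {baseline:.1f}%")  # 40.0% in each case
```

All four deltas resolve to the same 40.0% baseline, which suggests the comparison is against a single Tech Center figure rather than per-statute averages.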

Office Action

Rejection basis: §102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 07/10/2024 and 10/24/2024 were filed in compliance with the provisions of 37 CFR 1.97 and 1.98. Accordingly, the information disclosure statements are being considered by the examiner. Applicant has not provided an explanation of the relevance of the cited documents discussed below.

Reference US 11,776,095 B2 is a general background reference covering: apparatus and methods related to applying lighting models to images of objects. A neural network can be trained to apply a lighting model to an input image. The training of the neural network can utilize confidence learning that is based on light predictions and prediction confidence values associated with lighting of the input image. A computing device can receive an input image of an object and data about a particular lighting model to be applied to the input image. The computing device can determine an output image of the object by using the trained neural network to apply the particular lighting model to the input image of the object (see abstract).

Reference US 2023/0215132 A1 is a general background reference covering: a method for generating a relighted image that includes obtaining a to-be-processed image and a guidance image corresponding to the to-be-processed image; obtaining a first intermediate image consistent with an illumination condition in the guidance image by performing relighting rendering on the to-be-processed image in a time domain based on the guidance image; obtaining a second intermediate image consistent with the illumination condition in the guidance image by performing relighting rendering on the to-be-processed image in a frequency domain based on the guidance image; and obtaining a target relighted image corresponding to the to-be-processed image based on the first intermediate image and the second intermediate image (see abstract).

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 21, 22, and 38-40 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Legendre et al. (Legendre) (WO 2022231582 A1).

Regarding claim 21, Legendre discloses an image processing method (e.g., FIG. 22 is a flowchart of a method 2200, paragraph 174), comprising: displaying a first interface, wherein the first interface comprises a first control (e.g., receiving, via a computing device, an image comprising a subject, paragraph 174); detecting a first operation performed on the first control (e.g., at block 2220, the method further involves relighting, via a neural network, a foreground of the image, paragraph 175); obtaining an original image in response to the first operation (e.g., a foreground of the image to maintain a consistent lighting of the foreground with a target illumination, such as discussed above at least in the context of FIGS. 2-7; the relighting is based on a per-pixel light representation indicative of a surface geometry of the foreground, paragraph 175); and processing the original image by using a target light filling model to obtain a captured image, wherein the target light filling model is configured to fill light for a person and an environment in the original image (e.g., the light representation includes a specular component and a diffuse component of surface reflection; at block 2230, FIG. 22, the method also involves predicting, via the neural network, an output image comprising the subject in the relit foreground, such as discussed above at least in the context of FIGS. 1-8, 16, 17).

Regarding claim 22, Legendre discloses wherein the target light filling model comprises a target inverse rendering sub-model, a to-be-light-filled position estimation module, and a target light filling control sub-model, wherein the target inverse rendering sub-model is connected to the to-be-light-filled position estimation module and the target light filling control sub-model, and the to-be-light-filled position estimation module is connected to the target light filling control sub-model (e.g., in some embodiments, shading net 260 includes two sequential networks, specular net 420 and neural rendering net 435; as described herein, a surface of an object may be composed of one or more materials; specular net 420 models an uncertainty in the material properties of the one or more materials of an input image; one or more specular light maps 255 represented as S_n and input foreground 205 represented as F, paragraph 59, FIG. 4); and the processing the original image by using a target light filling model to obtain a captured image (e.g., FIG. 2 is a diagram depicting a relighting network for enhancing lighting of images, in accordance with example embodiments, paragraph 15, FIG. 2) comprises: inputting the original image into the target inverse rendering sub-model to obtain an albedo image, a normal image, and an environment image corresponding to the original image, wherein the target inverse rendering sub-model is configured to disassemble the person and the environment in the original image, the albedo image is used to represent an albedo characteristic corresponding to the person in the original image (e.g., predicted albedo 225 represented as A may be input into specular net 420; in some aspects, one or more specular light maps 255 may be generated with a plurality of Phong exponents n, paragraphs 59, 60), the normal image is used to represent a normal characteristic corresponding to the person in the original image (e.g., image 150 depicts a surface normal representation of image 110, paragraph 47, FIG. 1), and the environmental image is used to represent environmental content other than the person in the original image (e.g., images 160 and 170 depict relit foregrounds of input image 110 composited into a number of different target backgrounds, paragraph 47, FIG. 1); inputting the original image and the environmental image into the to-be-light-filled position estimation module to obtain a light-filled environment image, wherein the to-be-light-filled position estimation module is configured to determine a light filling position in the environmental image based on the original image and fill the light filling position with light (e.g., a relighting network can be designed to computationally generate relit images for consumer photography or other applications; as described herein, these methods are applicable to arbitrary omnidirectional input and target lighting environments; also, in addition to delivering realistic results for low-frequency lighting, the relighting network is able to render hard shadows and specular highlights appropriate for lighting with high-frequency detail; in some embodiments, the method involves relighting, via a neural network, a foreground of the image to maintain a consistent lighting of the foreground with a target illumination, paragraph 48); and inputting the albedo image, the normal image, and the light-filled environment image into the target light filling control sub-model to obtain the captured image, wherein the target light filling control sub-model is configured to fill the person with light based on the light-filled environment image by using the albedo image and the normal image (e.g., FIG. 2 is a diagram depicting a relighting network 200 for enhancing lighting of images, in accordance with example embodiments; relighting network 200 regresses from an input foreground F to a geometry image N, encoding a per-pixel surface normal representation, and then to an approximate diffuse albedo image A; input foreground 205 may be utilized by a convolutional neural network, such as geometry net 210, to generate surface normal representation 215 indicative of a surface geometry of input foreground 205; surface normal representation 215 may be utilized by the convolutional neural network, such as albedo net 220, to generate an approximate diffuse albedo image 225, paragraph 49).

Regarding claim 38, Legendre discloses an electronic device, comprising a processor and a memory, wherein the memory is configured to store a computer program executable on the processor, and the processor invokes computer instructions (e.g., FIG. 19 depicts a distributed computing architecture 1900, in accordance with example embodiments; distributed computing architecture 1900 includes server devices 1908, 1910 that are configured to communicate, via network 1906, with programmable devices 1904a, 1904b, 1904c, 1904d, 1904e; network 1906 may correspond to a local area network (LAN), a wide area network (WAN), a WLAN, a WWAN, a corporate intranet, the public Internet, or any other type of network configured to provide a communications path between networked computing devices; network 1906 may also correspond to a combination of one or more LANs, WANs, corporate intranets, and/or the public Internet, paragraph 147). The remaining limitations of claim 38 are similar to the limitations of claim 21 and are therefore rejected as set forth above for claim 21.

Regarding claim 39, claim 39 is directed to the electronic device according to claim 38, with limitations similar to those of claim 22.

Regarding claim 40, Legendre discloses a chip, comprising: a processor configured to invoke a computer program from a memory and run the computer program (e.g., in another aspect, a computing device is provided; the computing device includes one or more processors and data storage; the data storage has stored thereon computer-executable instructions that, when executed by the one or more processors, cause the computing device to carry out functions, paragraph 10), so that a device equipped with the chip is enabled to perform the method. The remaining limitations of claim 40 are similar to the limitations of claim 21 and are therefore rejected as set forth above for claim 21.
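Before the allowable subject matter, it may help to see the structure the §102 rejection maps onto. The three-stage dataflow recited in claim 22 can be sketched as code; the function names, placeholder math, and shapes below are hypothetical illustrations of the claimed connectivity only, not the application's or Legendre's implementation:

```python
import numpy as np

# Hypothetical stand-ins for the three trained components recited in claim 22,
# modeled as plain functions over H x W x 3 float images in [0, 1].

def inverse_rendering_submodel(original):
    """Disassemble person and environment: predict albedo, normal, and
    environment images from the original image (claim 22, step 1)."""
    albedo = np.clip(original * 0.8, 0.0, 1.0)       # placeholder prediction
    normal = np.full_like(original, 0.5)             # placeholder normals
    environment = np.clip(1.0 - original, 0.0, 1.0)  # placeholder environment
    return albedo, normal, environment

def position_estimation_module(original, environment):
    """Pick a light filling position in the environment image and fill it
    with light (claim 22, step 2). Here: brighten the darkest quartile."""
    luma = environment.mean(axis=2)
    mask = (luma < np.percentile(luma, 25))[..., None]  # "to-be-light-filled"
    return np.clip(environment + 0.3 * mask, 0.0, 1.0)

def light_filling_control_submodel(albedo, normal, lit_environment):
    """Fill the person with light using the albedo and normal images,
    conditioned on the light-filled environment (claim 22, step 3)."""
    shading = (normal * lit_environment).mean(axis=2, keepdims=True)
    return np.clip(albedo * (0.5 + shading), 0.0, 1.0)

def target_light_filling_model(original):
    albedo, normal, environment = inverse_rendering_submodel(original)
    lit_environment = position_estimation_module(original, environment)
    return light_filling_control_submodel(albedo, normal, lit_environment)

captured = target_light_filling_model(np.random.rand(256, 256, 3))
print(captured.shape)  # (256, 256, 3)
```

The point is the connectivity claim 22 recites: the environment branch is light-filled first, and that light-filled environment then conditions how the person is lit via the albedo and normal images.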
Allowable Subject Matter

Claim 23 contains allowable subject matter. Claim 23 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Referring to claim 23, the prior art searched and of record neither anticipates nor suggests, in the claimed combination: [The method according to claim 22, further comprising: obtaining a plurality of frames of initial portrait training images and a panoramic environment image; performing first processing on the plurality of frames of initial portrait training images to obtain a refined matte portrait training image and a plurality of frames of OLAT (one light at a time) training images; performing second processing on the refined matte portrait training image and the plurality of frames of OLAT training images to obtain an albedo portrait training image and a normal portrait training image; performing third processing on the refined matte portrait training image, the plurality of frames of OLAT training images, and the panoramic environment image to obtain a to-be-light-filled composite rendered image and a light-filled composite rendered image; and training an initial light filling model by using the albedo portrait training image, the normal portrait training image, the to-be-light-filled composite rendered image, and the light-filled composite rendered image, to obtain the target light filling model, wherein the initial light filling model comprises an initial inverse rendering sub-model, an initial to-be-light-filled position estimation module, and an initial light filling control sub-model, the target inverse rendering sub-model is a trained initial inverse rendering sub-model, the target light filling control sub-model is a trained initial light filling control sub-model, and the to-be-light-filled position estimation module is a trained initial to-be-light-filled position estimation module.]

Claims 24-37 depend on claim 23.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUANG N VO, whose telephone number is (571) 270-1121. The examiner can normally be reached Monday-Friday, 7 AM-4 PM EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abderrahim Merouan, can be reached at 571-270-5254. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/QUANG N VO/
Primary Examiner, Art Unit 2683
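The allowable subject matter in claim 23 is, in substance, an OLAT-based data generation and training pipeline. For readers less familiar with that workflow, a minimal schematic follows; every function body, shape, and operation here is an illustrative placeholder invented for this sketch and labeled after the claimed step, not the applicant's actual processing:

```python
import numpy as np

# Hypothetical placeholders for the claim 23 processing steps. Shapes and
# operations are invented for illustration; the claim does not specify them.

def first_processing(initial_portraits):
    """Refined matte + OLAT (one-light-at-a-time) frames from raw portraits."""
    matte = initial_portraits.mean(axis=0)   # placeholder matte
    olat_frames = initial_portraits          # treat inputs as OLAT frames
    return matte, olat_frames

def second_processing(matte, olat_frames):
    """Albedo and normal portrait training images."""
    albedo = olat_frames.mean(axis=0) * matte    # placeholder albedo
    normal = np.gradient(matte, axis=0)          # placeholder normals
    return albedo, normal

def third_processing(matte, olat_frames, pano_env):
    """To-be-light-filled and light-filled composite rendered images."""
    unlit = olat_frames[0] * matte + (1 - matte) * pano_env
    lit = olat_frames.sum(axis=0) * matte + (1 - matte) * pano_env
    return unlit, np.clip(lit, 0.0, 1.0)

portraits = np.random.rand(8, 128, 128)  # e.g., 8 single-light captures
pano = np.random.rand(128, 128)
matte, olat = first_processing(portraits)
albedo, normal = second_processing(matte, olat)
unlit, lit = third_processing(matte, olat, pano)
# These four arrays (albedo, normal, unlit, lit) stand in for the supervision
# images claim 23 recites for training the initial light filling model.
```

The shape of the flow is what matters: three processing stages yield the four supervision images that, per claim 23, train the initial light filling model into the target model.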

Prosecution Timeline

Jan 08, 2024
Application Filed
Jan 23, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications with similar technology granted by the same examiner

Patent 12592002: COLOR CONVERSION SYSTEM, COLOR CONVERSION METHOD, AND INFORMATION PROCESSING APPARATUS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12577842: METHOD AND SYSTEM FOR MEASURING VOLUME OF A DRILL CORE SAMPLE (granted Mar 17, 2026; 2y 5m to grant)
Patent 12581023: GREYSCALE IMAGES (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572996: FRACTIONALIZED TRANSFERS OF SENSOR DATA FOR STREAMING AND LATENCY-SENSITIVE APPLICATIONS (granted Mar 10, 2026; 2y 5m to grant)
Patent 12573172: IMAGE OUTPUTTING DEVICE AND IMAGE OUTPUTTING METHOD (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
With Interview: 80% (+8.3%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 612 resolved cases by this examiner. Grant probability derived from career allow rate.
