Prosecution Insights
Last updated: April 19, 2026
Application No. 18/523,563

STRONG IMAGE STYLIZATION EFFECTS

Final Rejection — §102, §112
Filed
Nov 29, 2023
Examiner
TSENG, CHENG YUAN
Art Unit
2615
Tech Center
2600 — Communications
Assignee
Snap Inc.
OA Round
2 (Final)
84%
Grant Probability
Favorable
3-4
OA Rounds
2y 6m
To Grant
99%
With Interview

Examiner Intelligence

Grants 84% — above average
84%
Career Allow Rate
703 granted / 835 resolved
+22.2% vs TC avg
Strong +16% interview lift
+15.7%
Interview Lift
(resolved cases with vs. without an interview)
Typical timeline
2y 6m
Avg Prosecution
30 currently pending
Career history
865
Total Applications
across all art units
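The headline numbers in this panel follow from simple arithmetic on the raw counts. A minimal sketch of that arithmetic (the Tech Center average is inferred here from the "+22.2% vs TC avg" delta; it is not reported directly, and the tool's exact rounding is an assumption):

```python
# Career allow rate from the raw counts shown above: 703 granted of 835 resolved.
granted, resolved = 703, 835
allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")   # 84.2%, displayed as 84%

# "+22.2% vs TC avg" implies the Tech Center average sits near 62%.
tc_avg = allow_rate - 22.2
print(f"Implied TC average: {tc_avg:.1f}%")      # ~62.0%
```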

Statute-Specific Performance

§101
5.4%
-34.6% vs TC avg
§103
28.1%
-11.9% vs TC avg
§102
39.1%
-0.9% vs TC avg
§112
15.4%
-24.6% vs TC avg
Black line = Tech Center average estimate • Based on career data from 835 resolved cases

Office Action

§102 §112
DETAILED ACTION

Specification

The specification is objected to as failing to provide proper antecedent basis for the claimed subject matter. See 37 CFR 1.75(d)(1) and MPEP § 608.01(o). Correction of the following is required: the claim terms "visual characteristics" and "throughout an entire portion" do not have proper antecedent basis in the specification. In particular, neither term appears anywhere in the specification.

Drawings

The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the claim feature of "the image generation neural network transforms visual characteristics of the accessed image throughout an entire portion of the accessed image from the source image domain to the target image domain" must be shown or the feature canceled from the claims. For example, in fig. 6, the transformations appear to be applied only partially to the source image, rather than to an entire portion. No new matter should be entered.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, at the time the application was filed, had possession of the claimed invention.
In claim 1, the claim limitation "transforms visual characteristics of the accessed image throughout an entire portion of the accessed image" does not have sufficient support in the original specification. The specification never states "throughout an entire portion." Claims 9 and 17 have the same issue. Dependent claims 2-8, 10-16 and 18-20 are rejected for the same reason.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Gebre (US 12,033,254).

Referring to claims 1, 9 and 17, Gebre discloses a system comprising: a processor (fig. 8, processor 802); a memory component (fig. 8, memory 806) storing instructions that cause the processor to perform operations comprising: accessing an image (fig. 3, original face image 302) representing a source image domain (fig. 3, camera-captured face images; 5:44-46); generating a training dataset (fig. 3, computer generated image 332) representing a target image domain (fig. 2, image with various hair styles); training a base generative neural network (fig. 3, first trained neural network 306; fig. 1, input to neural network model 102) to generate images representing the source image domain (fig. 1, generate face layer 104 images) and images representing adjacent source image domains (fig. 1, generate hair layer 104 images); training a final generative neural network (fig. 3, discriminative networks 330/336; 9:4-7, neural network), using the base generative neural network (fig. 3, neural network 306) and the training dataset (fig. 3, computer generated image 332); generating a paired image dataset (9:7-24, acceptable generated face layer and hair layer images with corresponding actual image) using the final generative neural network; training an image generation neural network (fig. 4, second trained neural network 402; fig. 1, neural network model 106), using the paired image dataset (fig. 1, such as hair layer; 6:40-59), to generate a modified image (fig. 1, generate modified hair layer 108) for an input image (fig. 1, input image 102; fig. 3, input image 302); and generating a modified image by applying the image generation neural network to the accessed image (fig. 1, generate modified image 110; 6:60-7:29, combine modified hair layer with face layer), the modified image representing the target image domain (fig. 2, one of the faces with hair), wherein the image generation neural network (fig. 4, neural network 402) transforms visual characteristics (fig. 4, hair layer 314) of the accessed image (fig. 3, original face image 302) throughout an entire portion of the accessed image (fig. 3, entire portion of original face image 302 is transformed through neural network 306/402 and discriminative networks 330/336/326 to generate modified image 110) from the source image domain to the target image domain.

As to claims 2 and 10, Gebre discloses the system of claim 1, wherein the training dataset comprises textual data and image data describing the target domain (11:49-53, training dataset images; fig. 3, annotation 320).
As to claims 3 and 11, Gebre discloses the system of claim 1, wherein the paired image dataset comprises a plurality of image pairs, each image pair in the plurality of image pairs comprising an original image corresponding to the source image domain and a stylized image corresponding to the target image domain (9:7-24, acceptable generated face layer and hair layer images to actual image).

As to claims 4 and 12, Gebre discloses the system of claim 1, wherein the base generative neural network is used as initialization for training of the final generative neural network (fig. 1, first trained neural network 102 initializes training of the second trained neural network 106, etc.).

As to claims 5, 13 and 18, Gebre discloses the system of claim 1, wherein training the base generative neural network comprises: training the base generative neural network on an image dataset (fig. 2, face images with different hair layers), wherein each image in the image dataset has a condition (6:48-59, hairstyles) representing an adjacent source domain of the image.

As to claims 6, 14 and 19, Gebre discloses the system of claim 1, wherein neural network layers of the base generative neural network can accept a set of conditions (fig. 3, expressions 328/334) associated with the adjacent source domains.

As to claims 7, 15 and 20, Gebre discloses the system of claim 6, comprising: applying one-hot conditioning (fig. 3, expressions 328/334 each represent a one-hot condition) to the base generative neural network; wherein each condition in the set of conditions is represented as a vector (fig. 3, expressions 328/334); and supplementing random Gaussian noise (10:39-40, random noise) associated with the neural network layers of the base generative neural network with the set of conditions.
As to claims 8 and 16, Gebre discloses the system of claim 6, comprising: modifying a vector (10:39-47, noise vector) representation of each neural network layer of the base generative neural network to incorporate data representing a respective condition in the set of conditions (10:39-47, modifying the noise vector introduces variety into the modified hair layer).

Response to Arguments

Applicant's arguments have been fully considered, but they are not deemed persuasive. Applicant argues that the cited prior art does not disclose the claim limitation "wherein the image generation neural network transforms visual characteristics of the accessed image throughout an entire portion of the accessed image from the source image domain to the target image domain," because Gebre discloses transformation with layers and masks (pp. 7-8). In fig. 1, however, Gebre provides the entire portion of the original image to the neural network and generates a modified image for the entire portion of the original image using layers and masks. Although the output image may appear only partially modified from the original image, modified image 110 is the result of transforming the entire original image 102. Further, the claim scope does not exclude transformation using layers and/or masks.

Conclusion

This action is made final. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire three months from the mailing date of this action. In the event a first reply is filed within two months of the mailing date of this final action and the advisory action is not mailed until after the end of the three-month shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than six months from the mailing date of this final action.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Examiner Cheng-Yuan Tseng, whose telephone number is (571) 272-9772 and whose fax number is (571) 273-9772. The examiner can normally be reached Monday through Friday from 09:00 to 17:30 Eastern Time. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call (800) 786-9199 (in USA or Canada) or (571) 272-1000.

/CHENG YUAN TSENG/
Primary Examiner, Art Unit 2615

Prosecution Timeline

Nov 29, 2023
Application Filed
Nov 05, 2025
Non-Final Rejection — §102, §112
Mar 09, 2026
Response Filed
Mar 15, 2026
Final Rejection — §102, §112 (current)

Precedent Cases

Applications with similar technology granted by the same examiner

Patent 12602844
Graphics Processor
2y 5m to grant · Granted Apr 14, 2026
Patent 12586285
METHODS AND SYSTEMS FOR MARKERLESS FACIAL MOTION CAPTURE
2y 5m to grant · Granted Mar 24, 2026
Patent 12579415
Area-Efficient Convolutional Block
2y 5m to grant · Granted Mar 17, 2026
Patent 12572355
MODULAR ADDITION INSTRUCTION
2y 5m to grant · Granted Mar 10, 2026
Patent 12567173
Infant 2D Pose Estimation and Posture Detection System
2y 5m to grant · Granted Mar 03, 2026
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
84%
Grant Probability
99%
With Interview (+15.7%)
2y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 835 resolved cases by this examiner. Grant probability derived from career allow rate.
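One plausible reconstruction of the 99% "with interview" figure, assuming the tool adds the +15.7 percentage-point lift to the career allow rate and truncates the result. This is a sketch of that assumption only; the actual model and rounding behind these projections are not documented here.

```python
# Hypothetical reconstruction of the projection figures above.
granted, resolved = 703, 835      # examiner's career counts
interview_lift = 15.7             # percentage-point lift shown above

base = 100 * granted / resolved   # ~84.2% career allow rate
with_interview = base + interview_lift

# Truncating both values matches the displayed "84%" and "99%";
# the rounding mode is a guess.
print(int(base), int(with_interview))
```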
