Prosecution Insights
Last updated: April 19, 2026
Application No. 18/478,783

GENERALIZING IMAGE STYLIZATION EFFECTS

Final Rejection — §102, §112, §DP
Filed: Sep 29, 2023
Examiner: HARRISON, CHANTE E
Art Unit: 2615
Tech Center: 2600 — Communications
Assignee: Snap Inc.
OA Round: 2 (Final)

Grant Probability: 69% (Favorable)
OA Rounds: 3-4
To Grant: 3y 4m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 69% — above average (497 granted / 725 resolved; +6.6% vs TC avg)
Interview Lift: +28.8% — strong (allow rate among resolved cases with an interview vs. without)
Avg Prosecution: 3y 4m typical timeline (30 applications currently pending)
Total Applications: 755 across all art units (career history)
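
The headline numbers above are internally consistent, as a quick check shows. Note that the additive combination of baseline and interview lift is an assumption about how the dashboard derives its 97% figure, not a documented formula:

```python
# Quick check of the dashboard's headline figures. All inputs come from
# the card above; the additive combination of baseline and interview
# lift is an assumption about how the 97% figure is derived.
granted = 497
resolved = 725
pending = 30

allow_rate = granted / resolved          # 0.6855... -> displayed as 69%
total_apps = resolved + pending          # 755, matching "Total Applications"

interview_lift = 0.288                   # "+28.8% Interview Lift"
with_interview = allow_rate + interview_lift  # ~0.974 -> displayed as 97%

print(f"career allow rate:  {allow_rate:.1%}")
print(f"total applications: {total_apps}")
print(f"with interview (assumed additive): {with_interview:.1%}")
```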

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 40.3% (+0.3% vs TC avg)
§102: 31.8% (-8.2% vs TC avg)
§112: 15.2% (-24.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 725 resolved cases.
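
The per-statute deltas all point to a single Tech Center reference value; a small consistency check, using the figures listed above:

```python
# Consistency check: recover the Tech Center baseline implied by each
# statute's delta. Figures are read directly from the list above.
examiner_rate = {"101": 8.9, "103": 40.3, "102": 31.8, "112": 15.2}
delta_vs_tc   = {"101": -31.1, "103": 0.3, "102": -8.2, "112": -24.8}

for statute, rate in examiner_rate.items():
    implied_tc_avg = rate - delta_vs_tc[statute]
    print(f"§{statute}: implied TC average = {implied_tc_avg:.1f}%")

# All four statutes imply the same 40.0% baseline, so the reference
# value appears to be a single TC-wide estimate.
```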

Office Action

§102, §112, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

1. This action is responsive to communications: Amendment, filed on 08/29/2025. This action is made FINAL.

2. Claims 1-20 are pending in the case. Claims 1, 8 and 15 are independent claims. Claims 1, 8 and 15 have been amended.

Response to Arguments

Applicant's arguments filed August 29, 2025 have been fully considered but they are not persuasive.

Applicant requests the double patenting rejection be held in abeyance until the pending claims are in condition for allowance. In response, Applicant's request is acknowledged. Accordingly, the double patenting rejection is maintained.

Applicant submits the amended claims address the rejection of claims 8-20 under §112. In response, the rejection of claims 8-20 is withdrawn. Additionally, Applicant's amendment raises new issues under §112, which are addressed in the following rejection.

Applicant argues (claims 1, 8 and 15) that Davies fails to disclose "generating a stylized target image based on the input image by applying the stylization effect on the object and the background of the input image, the stylized target image generated using a second neural network trained on the paired image dataset." In response, the Examiner notes that Davies (Para 42) discloses cropping portions of an image. Davies (Fig. 5) discloses a depicted face including an eye object and a face background. Additionally, Davies (Para 27) discloses applying correction to facial features including eyes and skin. Therefore, Davies discloses the disputed limitation.

Applicant argues (claims 1, 8 and 15) that Noh fails to cure the deficiencies of Davies. In response, Noh discloses the amended claim elements as discussed in the rejection that follows.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites: "accessing an input image, the input image comprising an object and a background."

Claim 6 recites: "The system of claim 1, wherein generating the stylized target image further comprises: generating a first image by applying the stylization effect on a portion of the input image comprising a main object using the second neural network; generating a second image by applying the stylization effect on an entire portion of the input image using the second neural network."

It is unclear if "the object" (of claim 1) corresponds to or differs from the "main object" (of claim 6). Additionally, it is unclear whether the "background" (of claim 1) corresponds to the background of only a portion of the image where the object is located. Correction is required.

Claims 8 and 15, similar in scope to claim 1, are similarly rejected. Accordingly, dependent claims 2-7, 9-14 and 16-20 are rejected based on dependency from a rejected base claim.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-2, 4, 7-9, 11, 14-16, 18, 19 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 7, 9, 10, 15, 17, 19, 20 of copending Application No. 17/804,268 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because each processes paired images to generate stylized images corresponding to a target domain style using machine learning. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Current Application 18/478,783, claim 1: "A system comprising: at least one processor; and at least one memory component storing instructions that, when executed by the at least one processor, cause the at least one processor to perform operations comprising: accessing an input image; generating a paired image dataset using a first neural network, each pair of images in the paired image dataset comprising a source image and a target image, wherein an entire portion of the target image has a stylization effect; generating a stylized target image based on the input image by applying the stylization effect on an entire portion of the input image, the stylized target image generated using a second neural network trained on the paired image dataset; and causing display of the stylized target image on a graphical user interface of a computing device."

Prior Application 17/804,268, claim 10: "A system comprising: a processor; and a memory storing instructions that, when executed by the processor, configure the system to perform operations comprising: generating a set of paired images using a first machine learning model, the set of paired images comprising a first image corresponding to a source image and a second image corresponding to [[the]]a target domain style; analyzing the generated set of paired images using a second machine learning model trained to analyze the generated set of paired images based on a plurality of protected feature criteria; based on the analyzing, determining a set of image transformations for the generated set of paired images; generating a transformed set of paired images, the generating comprising performing the set of image transformations on the generated set of paired images; and generating one or more stylized images corresponding to the target domain style, the one or more stylized images generated using a supervised image translation model trained on the transformed set of paired images."

Current Application claim 2: "wherein the first neural network is trained on a dataset representing the stylization effect."

Current Application claim 4: "generating an augmented training dataset by applying image transformations on the paired image dataset; and supplementing the paired image dataset with the augmented training dataset."

*Emboldened text above indicates claim features that differ in the independent claims but are taught by dependent claims.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-5, 8-12, and 15-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Thomas Davies et al., CA 3,152,644.

Regarding independent claim 1, Davies discloses a system comprising: at least one processor (i.e. processor - Fig. 1 "14"); and at least one memory component storing instructions that, when executed by the at least one processor (i.e. memory, including instructions, coupled to processor - Fig. 1 "16, 72"), cause the at least one processor to perform operations comprising: accessing an input image (i.e. shots received from a client - Fig. 4A; Fig. 5 "604"), the input image comprising an object and a background (i.e. a depicted face including an eye object and a face background - Fig. 5); generating a paired image dataset using a first neural network, each pair of images in the paired image dataset comprising a source image and a target image, wherein an entire portion of the target image has a stylization effect (i.e. crop the plurality of images for the region of interest and create cropped first training image pairs (X, Y); with crops of the plurality of images, pre-train a first autoencoder using image pairs (X, X) to learn an identity function; train the first autoencoder using the cropped first training image pairs (X, Y) - Para 42; each training sample consists of three inputs and one output, in some embodiments; this approach, or a similar one, may be able to copy the style change from the reference pair and apply it to the input image - Para 343); generating a stylized target image based on the input image by applying the stylization effect on the object and the background of the input image, the stylized target image generated using a second neural network trained on the paired image dataset (i.e. generate image masks (mask_X) for second training image pairs (X, mask_X); train a second autoencoder for image segmentation using training image pairs (X, mask_X); segment a target region of modification and generate a second output image; and add the first output image to the target region identified by the second output image - Para 41; apply correction to facial features including eyes and skin - Para 27); and causing display of the stylized target image on a graphical user interface of a computing device (i.e. display data on an output interface - Para 145; Fig. 5, 20).

Regarding claim 2, Davies discloses the system of claim 1, wherein the first neural network is trained on a dataset representing the stylization effect (i.e. learnable parameters using supervised data of paired images are used to train the network - Para 339, 341).

Regarding claim 3, Davies discloses the system of claim 1, wherein the first neural network is a generative model (i.e. generative adversarial network for generating an image - Para 142).

Regarding claim 4, Davies discloses the system of claim 1, further comprising: generating an augmented training dataset by applying image transformations on the paired image dataset; and supplementing the paired image dataset with the augmented training dataset
(i.e. each image in the training set, both original images and images edited or annotated by a visual effects artist, is subjected to the same automated transformation or combination of transformations to generate a new image that together compose an augmented training set that is many times larger than the original - Para 23).

Regarding claim 5, Davies discloses the system of claim 4, wherein the image transformations comprise at least one of: image rotations or image distortions (i.e. scales target images - Fig. 20).

Regarding independent claim 8, the claim is similar in scope to claim 1. Therefore, similar rationale as applied in the rejection of claim 1 applies herein.

Regarding claims 9-12, the corresponding rationale as applied in the rejection of claims 2-5 applies herein.

Regarding independent claim 15, the claim is similar in scope to claim 1. Therefore, similar rationale as applied in the rejection of claim 1 applies herein.

Regarding claims 16-18, the corresponding rationale as applied in the rejection of claims 2-4 applies herein.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Junyong Noh et al., US 2023/0140249 A1.

Regarding independent claim 1, Noh discloses a system comprising: at least one processor (Fig. 9 "901"); and at least one memory (Fig. 9 "903") component storing instructions that, when executed by the at least one processor (i.e. software instructions - Para 133), cause the at least one processor to perform operations comprising: accessing an input image (i.e. user input image - Fig. 1 "121"), the input image comprising an object and a background (i.e. the input image includes a dog's facial features, e.g. an eye or nose, and the background portion including the dog's face/body surrounding the eye/nose - Fig. 1 "121"); generating a paired image dataset using a first neural network, each pair of images in the paired image dataset comprising a source image and a target image, wherein an entire portion of the target image has a stylization effect (i.e. a pair of images - Fig. 1 "101, 102" - and a target intermediate image - Fig. 1 "103" - are used by a teacher generative model to morph an image - Para 62; the style of an input image is extracted - Para 63); generating a stylized target image based on the input image by applying the stylization effect on the object and the background of the input image, the stylized target image generated using a second neural network trained on the paired image dataset (i.e. apply style to the input image - Fig. 1 "130" - using a student generator - Para 56; obtaining a first output morphing image including a content of the second image and a style of the first image - Para 27; Fig. 1 "130"; the style applied to the dog's nose and the background face/body region near the nose - Fig. 1 "130"); and causing display of the stylized target image on a graphical user interface of a computing device (i.e. the user device - Para 127 - outputs the morphed image - Para 3; Fig. 1 "130").

Regarding claim 2, Noh discloses the system of claim 1, wherein the first neural network is trained on a dataset representing the stylization effect (i.e. the training phase, e.g. the teacher model, learns to generate morphed images having content and style extracted - Fig. 1; Para 62, 63).
Regarding claim 3, Noh discloses the system of claim 1, wherein the first neural network is a generative model (i.e. the teacher is a generative adversarial network - Para 56).

Regarding claim 4, Noh discloses the system of claim 1, further comprising: generating an augmented training dataset by applying image transformations on the paired image dataset (i.e. the paired image dataset is transformed - Fig. 1 "130"); and supplementing the paired image dataset with the augmented training dataset (i.e. a feed-forward neural network outputs a morphing image from an input image to another input image - Para 55).

Regarding claim 5, Noh discloses the system of claim 4, wherein the image transformations comprise at least one of: image rotations or image distortions (i.e. image morphing refers to linear transformation - Para 101).

Regarding claim 6, Noh discloses the system of claim 1, wherein generating the stylized target image further comprises: generating a first image by applying the stylization effect on a portion of the input image comprising a main object using the second neural network (i.e. the obtaining of the output morphing image may include obtaining a first output morphing image including a content of the second image and a style of the first image - Para 27; Fig. 1 "130"); generating a second image by applying the stylization effect on an entire portion of the input image using the second neural network (i.e. obtaining a second output morphing image including a content of the first image and a style of the second image - Para 27); generating a combined image by combining the first image with the second image and a soft mask layer (i.e. interpolate a latent code extracted from each of two encoders before feeding the latent code to a decoder F; by interpolating content and the style features at a deep bottleneck position of a network layer - Para 76; Fig. 4, 8); and generating the stylized target image based on the combined image (Fig. 8).

Regarding claim 7, Noh discloses the system of claim 6, further comprising: generating a new target image dataset using the second neural network (i.e. a neural network for image morphing may be a feed-forward neural network that learns a semantic change between input images in a latent space, which may correspond to a network that outputs a morphing image from an input image to another input image; the neural network may also be referred to herein as a morphing generator, a generator, or G - Para 55); training a third neural network using the new target image dataset (i.e. the morphing generator 410, which is a student network, may learn a basic morphing effect of a teacher network by adjusting the latent code; the morphing generator 410 may also learn disentangled morphing that the teacher network may not generate by individually adjusting latent codes in separate content and style spaces - Para 78); and generating the second image using the third neural network (i.e. the decoder F may output the morphing image y_aa based on the processed style feature vector and the content feature vector - Para 86).

Regarding independent claim 8, the claim is similar in scope to claim 1. Therefore, similar rationale as applied in the rejection of claim 1 applies herein.

Regarding claims 9-14, the corresponding rationale as applied in the rejection of claims 2-7 applies herein.

Regarding independent claim 15, the claim is similar in scope to claim 1. Therefore, similar rationale as applied in the rejection of claim 1 applies herein.

Regarding claims 16-20, the corresponding rationale as applied in the rejection of claims 2-7 applies herein.
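
For readers unfamiliar with the claim language at issue, the following minimal Python sketch illustrates the operations recited in claims 1 and 6: pair generation by a first network, object-focused and whole-image stylization by a second, and soft-mask compositing. It is an editor's illustration only, not code from the application or either cited reference; `first_network`, `second_network`, and the mask conventions are all hypothetical.

```python
# Editor's illustration of the claimed pipeline; every name here is
# hypothetical and nothing is taken from Davies, Noh, or the application.
import numpy as np

def build_paired_dataset(sources, first_network):
    """Claim 1: the first network pairs each source image with a target
    image whose entire portion carries the stylization effect."""
    return [(src, first_network(src)) for src in sources]

def stylize(input_image, object_mask, second_network):
    """Claim 6: blend an object-focused stylization with a whole-image
    stylization through a soft (continuous-valued) mask layer."""
    first = second_network(input_image * object_mask)   # portion with the main object
    second = second_network(input_image)                # entire portion of the input
    # Soft mask in [0, 1], shape (H, W, 1) so it broadcasts over channels:
    # 1.0 keeps the object-focused result, 0.0 the whole-image result.
    return object_mask * first + (1.0 - object_mask) * second

if __name__ == "__main__":
    def identity(img):                  # stand-in for a trained network
        return img
    image = np.random.rand(64, 64, 3)
    mask = np.zeros((64, 64, 1))
    mask[16:48, 16:48] = 1.0            # hard square here; a real mask is soft
    assert stylize(image, mask, identity).shape == image.shape
```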
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHANTE HARRISON, whose telephone number is (571) 272-7659. The examiner can normally be reached Monday - Friday, 8:00 am to 5:00 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at 571-272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHANTE E HARRISON/
Primary Examiner, Art Unit 2615
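
The reply windows above translate into concrete dates. A minimal sketch, assuming the Oct 30, 2025 entry on the prosecution timeline below is this action's mailing date; extensions under 37 CFR 1.136(a) can move the first date, but never past the six-month statutory cap:

```python
# Reply-window dates implied by the final-action paragraph above,
# assuming an Oct 30, 2025 mailing date (from the prosecution timeline).
from datetime import date

def add_months(d: date, months: int) -> date:
    """Advance a date by whole months, clamping to the month's last day."""
    m = d.month - 1 + months
    year, month = d.year + m // 12, m % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    last = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, last))

mailed = date(2025, 10, 30)
print("Shortened statutory period ends:", add_months(mailed, 3))  # 2026-01-30
print("Absolute statutory deadline:    ", add_months(mailed, 6))  # 2026-04-30
```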

Prosecution Timeline

Sep 29, 2023 · Application Filed
May 12, 2025 · Non-Final Rejection — §102, §112, §DP
Aug 29, 2025 · Response Filed
Oct 30, 2025 · Final Rejection — §102, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597213
GESTURE BASED TACTILE INTERACTION IN EXTENDED REALITY USING FORM FACTOR OF A PHYSICAL OBJECT
2y 5m to grant · Granted Apr 07, 2026
Patent 12592043
Systems, Methods, and Graphical User Interfaces for Displaying and Manipulating Virtual Objects in Augmented Reality Environments
2y 5m to grant · Granted Mar 31, 2026
Patent 12592045
AUGMENTED REALITY SYSTEM AND METHOD
2y 5m to grant · Granted Mar 31, 2026
Patent 12586322
OPTICAL DEVICE FOR AUGMENTED REALITY HAVING GHOST IMAGE PREVENTION FUNCTION
2y 5m to grant · Granted Mar 24, 2026
Patent 12561891
GRAPHICS PROCESSORS
2y 5m to grant · Granted Feb 24, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 69%
With Interview: 97% (+28.8%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate
Based on 725 resolved cases by this examiner. Grant probability derived from career allow rate.
