Prosecution Insights
Last updated: April 19, 2026
Application No. 17/652,390

GENERATING ARTISTIC CONTENT FROM A TEXT PROMPT OR A STYLE IMAGE UTILIZING A NEURAL NETWORK MODEL

Status: Non-Final OA (§103)
Filed: Feb 24, 2022
Examiner: RICHER, AARON M
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Adobe Inc.
OA Round: 5 (Non-Final)

Grant Probability: 51% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 4y 0m
With Interview: 70%

Examiner Intelligence

Career Allow Rate: 51% (236 granted / 465 resolved; -11.2% vs. Tech Center average)
Interview Lift: +19.5% among resolved cases with an interview (a strong lift)
Avg Prosecution: 4y 0m (28 applications currently pending)
Career History: 493 total applications across all art units

Statute-Specific Performance

§101: 9.4% (-30.6% vs TC avg)
§103: 54.7% (+14.7% vs TC avg)
§102: 13.1% (-26.9% vs TC avg)
§112: 19.9% (-20.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 465 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 24 November 2025 have been fully considered but they are not persuasive. Applicant's arguments with respect to the prior art have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. More specifically, while the Hu reference was previously cited, the reference has now been cited much more extensively to teach large portions of the amended claims, while the Tran and Liu references argued by applicant are no longer used to reject the independent claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 3, 21, 22, 24, and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Hu (U.S. Publication 2019/0026870) in view of Divakaran (U.S. Publication 2021/0297498).

As to claim 1, Hu discloses a non-transitory computer-readable medium storing instructions thereon that, when executed by at least one processor (p. 5, sections 0049-0053), cause a computing device to perform operations comprising:

receiving a digital image having content to stylize and one or more style parameters for stylizing the digital image, the one or more style parameters including at least one of a style digital image or a style text prompt (fig. 3, elements 322 and 324; the content image is the image to be stylized while the style image is the style parameter);

determining, using a multi-domain style encoder of an artistic image neural network, one or more style encodings for the one or more style parameters within a multi-domain encoding space, wherein the multi-domain style encoder comprises a first image encoder for encoding style digital images (fig. 3, elements 324 and 332; p. 2, section 0028-p. 3, section 0029; p. 3, section 0034; p. 4, section 0041; the style image is fed to an encoder network, a portion of which receives the image and encodes features of the image; the input and processing path taken by the style image through the encoder network would read on a first image encoder; the encoder reads on a multi-domain encoder since it can encode from different domains such as an uploaded image or a preset style);

determining, from the digital image utilizing a second image encoder of the artistic image neural network, parameters for a learnable tensor (fig. 3; fig. 6; p. 3, section 0033-p. 4, section 0039; a portion of the encoder network is used to generate parameters of a content feature vector, which is a type of tensor; the vector/tensor is repeatedly optimized and fed back through a network, reading on a learnable tensor; the input and processing path taken by the content image through the encoder network would read on a second image encoder);

determining updated parameters for the learnable tensor using a plurality of iterations of an optimization loop of the artistic image neural network by, for each iteration of the plurality of iterations (fig. 3; fig. 6; p. 3, section 0033-p. 4, section 0039; the vector/tensor is repeatedly optimized and iteratively looped back through the network): generating, based on the parameters of the learnable tensor using a decoder of the artistic image neural network, an intermediate artistic digital image (fig. 3, element 326; fig. 6, element 635; p. 3-4, section 0038; based on refined tensor/vector parameters, a tentative/intermediate image is decoded, inherently by some sort of module reading on a decoder); generating, using a third image encoder portion of the artistic image neural network, artistic encodings from the intermediate artistic digital image (fig. 3, elements 325 and 332; p. 3, section 0036; a portion of the encoder network is used to generate encodings for the tentative/intermediate image; the input and processing path taken by the tentative/intermediate image through the encoder network would read on a third image encoder); determining a loss of the artistic image neural network by comparing the artistic encodings to the one or more style encodings (fig. 6, elements 630, 631, and 632; p. 3, sections 0034-0037; p. 4-5, section 0047; the tentative style feature vector, which is a representation of the initial style encoding, is compared to the refined style feature vector, which is one of the artistic encodings of the intermediate/tentative image, to determine a style loss parameter); and updating the parameters of the learnable tensor using the loss (fig. 6, elements 630, 631, and 632; p. 3, sections 0034-0037; p. 4-5, section 0047; the learnable/optimizable image vector/tensor is refined/updated using the loss); and

generating, utilizing the decoder of the artistic image neural network, an artistic digital image based on the learnable tensor with the updated parameters (fig. 3, element 326; fig. 6, element 635; an updated/refined image is decoded based on the learnable/optimizable image vector/tensor as the output of the system).
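For orientation, the loop the examiner maps onto Hu can be pictured as follows. This is a minimal sketch assuming stand-in PyTorch modules; every module, size, and parameter here is a hypothetical illustration, not Hu's disclosed network or the applicant's implementation.

```python
# Minimal sketch of the claim 1 optimization loop, with hypothetical
# stand-in encoders/decoder (NOT Hu's network or the applicant's system).
import torch
import torch.nn as nn

style_encoder = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.Flatten())    # "first image encoder"
content_encoder = nn.Conv2d(3, 8, 3, padding=1)                                # "second image encoder"
artistic_encoder = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.Flatten())  # "third image encoder"
decoder = nn.Sequential(nn.Conv2d(8, 3, 3, padding=1), nn.Sigmoid())

content_img = torch.rand(1, 3, 64, 64)   # digital image having content to stylize
style_img = torch.rand(1, 3, 64, 64)     # style parameter (style digital image)

style_enc = style_encoder(style_img).detach()                   # style encoding(s)
z = content_encoder(content_img).detach().requires_grad_(True)  # learnable tensor
opt = torch.optim.Adam([z], lr=0.05)

for _ in range(200):                                        # plurality of iterations
    intermediate = decoder(z)                               # intermediate artistic digital image
    artistic_enc = artistic_encoder(intermediate)           # artistic encodings
    loss = nn.functional.mse_loss(artistic_enc, style_enc)  # compare to style encodings
    opt.zero_grad()
    loss.backward()                                         # gradient w.r.t. the learnable tensor
    opt.step()                                              # update its parameters

artistic_img = decoder(z)                                   # final artistic digital image
```

The point of contention in a mapping like this is typically whether Hu's iteratively refined feature vector is fairly read as a "learnable tensor" distinct from the network weights; the sketch makes that distinction concrete, since only z is optimized.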
Hu does not disclose, but Divakaran discloses, that the multi-domain style encoder comprises both a first image encoder for encoding style digital images and a text encoder for encoding style text prompts (fig. 1, elements 120 and 130; fig. 3; p. 3, section 0032; p. 4, section 0042; p. 5, section 0056; p. 6, section 0065; p. 7, sections 0070-0074; using the embedding/encoding modules of fig. 1, vector representations are created/encoded for words/text and images; the text and images can correspond to image styles such as clothing, sports, music, etc.; the text is from a prompt such as a social media entry form). The motivation for this is to map different content modalities into the same embedding space and link the different types of content to a user (p. 1, section 0005; p. 2, section 0030). It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Hu to have a multi-domain style encoder comprise a first image encoder for encoding style digital images and a text encoder for encoding style text prompts in order to map different content modalities into the same embedding space and link the different types of content to a user as taught by Divakaran.
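The combination turns on Divakaran's shared embedding space for text and images. A hedged sketch of what a "multi-domain style encoder" with both branches could look like; the class name, dimensions, and tokenization are illustrative assumptions, not Divakaran's actual architecture:

```python
# Illustrative-only sketch of a multi-domain style encoder: an image branch
# and a text branch projecting into one shared embedding space (CLIP-style).
# All modules here are hypothetical stand-ins.
import torch
import torch.nn as nn

class MultiDomainStyleEncoder(nn.Module):
    def __init__(self, dim=128, vocab=1000):
        super().__init__()
        self.image_branch = nn.Sequential(              # "first image encoder"
            nn.Conv2d(3, 8, 3, padding=1), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(8 * 4 * 4, dim))
        self.text_branch = nn.Sequential(               # "text encoder"
            nn.EmbeddingBag(vocab, dim), nn.Linear(dim, dim))

    def forward(self, style_image=None, style_tokens=None):
        # Either modality lands in the same multi-domain encoding space.
        if style_image is not None:
            return self.image_branch(style_image)
        return self.text_branch(style_tokens)

enc = MultiDomainStyleEncoder()
img_emb = enc(style_image=torch.rand(1, 3, 64, 64))
txt_emb = enc(style_tokens=torch.randint(0, 1000, (1, 5)))
assert img_emb.shape == txt_emb.shape                   # shared 128-d encoding space
```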
As to claim 3, Hu discloses wherein determining, from the digital image utilizing the second image encoder of the artistic image neural network, the parameters for the learnable tensor comprises generating, utilizing the second image encoder of the artistic image neural network, one or more initialized encodings of the digital image within an encoding space (fig. 3; fig. 6; p. 3, section 0033; using the encoding path taken by the content image, which reads on a second image encoder as noted in the rejection to claim 1, an initial feature vector encoding is first generated; the vector space this exists in would read on an encoding space).

As to claim 21, see the rejection to claim 1. Further, Hu discloses a system comprising a memory component and one or more processing devices coupled to the memory component, the one or more processing devices to perform operations (p. 5, sections 0049-0053).

As to claim 22, see the rejection to claim 3.

As to claim 24, Hu discloses wherein determining the loss by comparing the artistic encodings to the one or more style encodings comprises determining the loss by comparing the artistic encodings to the one or more style encodings utilizing a style loss and a pixel loss corresponding to the digital image (fig. 3; p. 3, section 0033-p. 4, section 0038; p. 4, section 0043; a style loss, and a content loss with regard to the image pixels, reading on pixel loss, are used in the comparison).

As to claim 29, see the rejection to claim 1.

Claims 2 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Hu and Divakaran, and further in view of Sun (U.S. Publication 2020/0082610).

As to claim 2, Hu does not disclose, but Sun discloses, instructions that, when executed by the at least one processor, cause the computing device to perform operations comprising modifying the artistic digital image to include one or more art details associated with a physical visual medium (p. 3, section 0033-p. 4, section 0035; a modification of a user painting is performed to simulate a physical medium such as a canvas with particular materials applied, as part of neural network operations) utilizing an artistic superzoom neural network (fig. 4; p. 2, section 0020; p. 7, sections 0066-0067; a "superzoom" network can be any network which increases resolution; resolution increase is performed as part of the neural network operations after a feature map is derived). The motivation for this is to model complex lighting effects accurately (p. 1, section 0003). It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Hu and Divakaran to cause the computing device to perform operations comprising modifying the artistic digital image to include one or more art details associated with a physical visual medium utilizing an artistic superzoom neural network in order to model complex lighting effects accurately as taught by Sun.

As to claim 30, see the rejection to claim 2.

Claims 4, 26, and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Hu and Divakaran, and further in view of Li (U.S. Publication 2018/0240257).

As to claim 4, Hu teaches iterations of an optimization loop for an intermediate artistic digital image as noted in the rejection to claim 1. Hu does not teach, but Li does teach, wherein generating for each iteration comprises: generating, via a first set of iterations of an optimization loop, a first set of intermediate artistic digital images corresponding to a first image resolution; and generating, via a second set of iterations of the optimization loop, a second set of intermediate artistic digital images corresponding to a second image resolution (fig. 3; p. 1, sections 0002-0003; p. 5, section 0052; a set of iterations using a first set of layers is used to generate images of a first resolution and a set of iterations using a second set of layers is used to generate images of a second resolution). The motivation for this is to reproduce both global and local artistic style features (p. 2, section 0018). It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Hu and Divakaran to generate, via a first set of iterations of an optimization loop, a first set of intermediate artistic digital images corresponding to a first image resolution and generate, via a second set of iterations of the optimization loop, a second set of intermediate artistic digital images corresponding to a second image resolution in order to reproduce both global and local artistic style features as taught by Li.

As to claim 26, see the rejection to claim 4. Further, Li discloses the second image resolution being higher than the first image resolution (fig. 3; p. 5, section 0052; a medium resolution corresponding to a second resolution is higher than a first low resolution; alternatively, a high resolution corresponding to a second resolution is higher than a first medium or low resolution). Motivation for the combination is given in the rejection to claim 4.

As to claim 31, Hu does not disclose, but Li discloses, a method further comprising, for the plurality of iterations: refining, using the artistic image neural network, the intermediate artistic digital image at a first resolution over a first set of iterations (fig. 3; p. 5, section 0050-p. 6, section 0056; the image is refined through a number of iterations at a lower resolution in addition to higher ones); modifying, using a resize block of the artistic image neural network, a resolution of the intermediate artistic digital image from the first resolution to a second resolution (fig. 3; p. 5, section 0050-p. 6, section 0056; the upsampling layers read on a resize block since they are modifying the image to be higher resolution); and using the artistic image neural network to process the intermediate artistic digital image at the second resolution over a second set of iterations (fig. 3; p. 5, section 0050-p. 6, section 0056; a set of iterations processing an intermediate medium or high resolution image using the upsampling layers is performed). Motivation for the combination is given in the rejection to claim 4.
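The Li mapping amounts to a coarse-to-fine schedule: optimize at a first resolution, pass through a resize block, and continue at a second, higher resolution. A sketch under simplifying assumptions; the per-channel "style" target is a toy stand-in chosen only so the loss stays resolution-independent:

```python
# Illustrative-only sketch of the coarse-to-fine loop in claims 4, 26, and 31.
# The channel-mean "style" target and all sizes are hypothetical.
import torch
import torch.nn.functional as F

style_stats = torch.tensor([0.2, 0.5, 0.7])            # toy per-channel style target

img = torch.rand(1, 3, 64, 64, requires_grad=True)     # first (lower) resolution
opt = torch.optim.Adam([img], lr=0.05)
for _ in range(100):                                    # first set of iterations
    loss = F.mse_loss(img.mean(dim=(0, 2, 3)), style_stats)
    opt.zero_grad(); loss.backward(); opt.step()

# "Resize block": upsample the intermediate image to the second resolution.
img = F.interpolate(img.detach(), scale_factor=2, mode="bilinear",
                    align_corners=False).requires_grad_(True)
opt = torch.optim.Adam([img], lr=0.05)
for _ in range(100):                                    # second set of iterations (128x128)
    loss = F.mse_loss(img.mean(dim=(0, 2, 3)), style_stats)
    opt.zero_grad(); loss.backward(); opt.step()
```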
Claims 5, 6, 8, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Hu and Divakaran, and further in view of Mounsaveng (U.S. Publication 2021/0241041).

As to claim 5, Hu discloses wherein operations further comprise modifying an artistic digital image utilizing an augmentation chain of transformation operations, and comparing the artistic encodings to the one or more style encodings comprises comparing the artistic encodings generated from the modified intermediate artistic digital image to the one or more style encodings (p. 3, sections 0029-0030; p. 3, sections 0033-0037; an image is augmented via a number of operations including whitening and coloring; a tentative style vector encoding is compared to the refined/modified intermediate image style vector encoding). Hu does not disclose, but Mounsaveng discloses, augmentation for each iteration of the plurality of iterations (p. 10, sections 0155-0169). The motivation for this is that optimizing based on augmented new data can avoid overfitting (p. 1, sections 0010-0011). It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Hu and Divakaran to augment for each iteration of the plurality of iterations in order to avoid overfitting as taught by Mounsaveng.

As to claim 6, Mounsaveng discloses wherein modifying the intermediate artistic digital image utilizing the augmentation chain of transformation operations comprises modifying the intermediate artistic digital image via a sequence of transformation operations that includes a resize operation, a crop operation, a perspective operation, an image flip operation, and a noise operation (p. 2, sections 0022-0023; p. 7, sections 0110-0111; p. 10, sections 0169-0172; p. 13, section 0241; a sequence/chain of augmentation transformations includes zooming/resizing, cropping, rotation/translation which would change perspective, image flipping, and adding noise; since the sequence is applied to images in each iteration, the images, except for the image in the first iteration, are intermediate artistic digital images). Motivation for the combination is given in the rejection to claim 5.

As to claim 8, Mounsaveng discloses wherein modifying the intermediate artistic digital image via the sequence of transformation operations that includes the image flip operation comprises modifying the intermediate artistic digital image by flipping the digital image horizontally (p. 2, sections 0022-0023; p. 7, sections 0110-0111; p. 10, sections 0169-0172; p. 13, section 0241; a sequence/chain of augmentation transformations includes zooming/resizing, cropping, rotation/translation which would change perspective, image flipping, and adding noise; the flip can be a horizontal flip). Motivation for the combination is given in the rejection to claim 5.

As to claim 25, see the rejection to claim 5. Further, Mounsaveng discloses an augmentation chain of transformation operations comprising at least one of a resize operation, a crop operation, a perspective operation, an image flip operation, or a noise operation (p. 2, sections 0022-0023; p. 7, sections 0110-0111; p. 10, sections 0169-0172; p. 13, section 0241; a sequence/chain of augmentation transformations includes zooming/resizing, cropping, rotation/translation which would change perspective, image flipping, and adding noise).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Hu, Divakaran, and Mounsaveng, and further in view of Pranskevichus (U.S. Publication 2023/0267583).

As to claim 7, Hu discloses wherein the operations further comprise modifying the digital image; and determining, from the digital image utilizing the second image encoder of the artistic image neural network, the parameters for the learnable tensor comprises determining the parameters for the learnable tensor based on the digital image with the modification utilizing the second image encoder of the artistic image neural network (p. 3, sections 0029-0030; p. 3, sections 0033-0037; a content image is augmented via a number of operations including whitening and coloring; parameters for the vector/tensor for the content image are determined based on the second encoder as noted in the rejection to claim 1). Hu does not disclose, but Pranskevichus does disclose, that the modification applied is fractal noise (p. 5, section 0052; fractal noise is used to train the generative network). The motivation for this is to teach the generative network to reproduce small details more accurately. It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Hu, Divakaran, and Mounsaveng to use fractal noise in order to teach the generative network to reproduce small details more accurately as taught by Pranskevichus.

Claims 9 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Hu, Divakaran, and Mounsaveng, and further in view of Li '757 (U.S. Publication 2020/0175757).

As to claim 9, Hu discloses wherein modifying the intermediate artistic digital image via the sequence of transformation operations that includes the noise operation comprises modifying the intermediate artistic digital image by an augmentation (see the rejection to claim 5). Hu does not disclose, but Li '757 does disclose, that this augmentation is adding Gaussian noise (p. 6, section 0053). The motivation for this is to make an embedding network more robust. It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Hu, Divakaran, and Mounsaveng to use an augmentation of adding Gaussian noise in order to make an embedding network more robust as taught by Li '757.

As to claim 27, see the rejection to claim 9.
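Taken together, claims 6, 8, and 9 describe an augmentation chain applied to the intermediate image before it is re-encoded each iteration. A sketch with torchvision; the specific sizes, probabilities, and noise scale are illustrative assumptions:

```python
# Hedged sketch of an augmentation chain of the kind mapped to claims 6, 8,
# and 9: resize, crop, perspective, horizontal flip, then Gaussian noise.
import torch
import torchvision.transforms as T

augment = T.Compose([
    T.Resize(72),                                         # resize operation
    T.RandomCrop(64),                                     # crop operation
    T.RandomPerspective(p=1.0),                           # perspective operation
    T.RandomHorizontalFlip(p=0.5),                        # horizontal image-flip operation
    T.Lambda(lambda x: x + 0.02 * torch.randn_like(x)),   # Gaussian noise operation
])

intermediate = torch.rand(1, 3, 64, 64)   # stand-in intermediate artistic image
augmented = augment(intermediate)         # what the artistic encoder would see
```

In the claimed loop, the augmented image is what the third image encoder encodes each iteration, which is how Mounsaveng's per-iteration augmentation slots into Hu's comparison step.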
Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Hu and Divakaran, and further in view of Liao (U.S. Publication 2022/0044352).

As to claim 23, Hu discloses wherein determining the loss by comparing the artistic encodings to the one or more style encodings comprises determining the loss by comparing the artistic encodings to the one or more style encodings utilizing a style loss and a loss corresponding to the digital image (fig. 3; p. 3, section 0033-p. 4, section 0038; p. 4, section 0043; a style loss, and a content loss with regard to the image pixels, are used in the comparison). Hu does not disclose, but Liao discloses, that the loss corresponding to the digital image is a perceptual loss (p. 10, section 0095-p. 11, section 0096; p. 11, section 0102-p. 12, section 0104; perceptual loss is used to compare an intermediate image having an applied style). The motivation for this is to explicitly constrain the input image of the constraint encoder and the output image of the decoder to remain unchanged in content. It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Hu and Divakaran to use a perceptual loss in order to explicitly constrain the input image of the constraint encoder and the output image of the decoder to remain unchanged in content as taught by Liao.

Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over Hu, Divakaran, and Li, and further in view of Kearney (U.S. Publication 2021/0118129) and Pranskevichus.

As to claim 28, Hu does not disclose, but Kearney discloses, wherein the operations further comprise modifying the digital image for use in the first set of iterations utilizing noise; and modifying the intermediate artistic digital image from the first set of intermediate artistic digital images for use in the second set of iterations utilizing additional noise (fig. 14a; fig. 14b; p. 16, section 0234-p. 17, section 0238; each iteration adds noise to the generated images). The motivation for this is to train prediction in a neural network. It would have been obvious to one skilled in the art before the effective filing date to modify Hu, Divakaran, and Li to modify the digital image for use in the first set of iterations utilizing noise, and to modify the intermediate artistic digital image from the first set of intermediate artistic digital images for use in the second set of iterations utilizing additional noise, in order to train prediction in a neural network as taught by Kearney. Kearney does not disclose, but Pranskevichus discloses, that the noise is fractal (p. 5, section 0052; fractal noise is used to train the generative network). Motivation for the combination of references is similar to that given in the rejection to claim 7.
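Before the conclusion, it may help to make the loss composition at issue in claims 24 and 23 concrete: a style term, a pixel term against the input digital image, and (via Liao) a perceptual term computed in a feature space. A minimal sketch using Gram-matrix style statistics, a common formulation assumed here for illustration rather than taken from the record:

```python
# Hedged sketch of a combined style + pixel + perceptual loss.
# The feature network is a hypothetical stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())  # stand-in feature net

def gram(x):
    b, c, h, w = x.shape
    f = x.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)     # Gram matrix of feature correlations

def total_loss(output, content_img, style_img, w_style=1.0, w_pixel=1.0, w_perc=1.0):
    style_loss = F.mse_loss(gram(feat(output)), gram(feat(style_img)))      # style term
    pixel_loss = F.mse_loss(output, content_img)                            # pixel term (claim 24)
    perceptual_loss = F.mse_loss(feat(output), feat(content_img))           # perceptual term (claim 23)
    return w_style * style_loss + w_pixel * pixel_loss + w_perc * perceptual_loss
```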
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON M RICHER, whose telephone number is (571) 272-7790. The examiner can normally be reached 9 AM-5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, King Poon, can be reached at (571) 272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/AARON M RICHER/
Primary Examiner, Art Unit 2617

Prosecution Timeline

Feb 24, 2022: Application Filed
May 04, 2024: Non-Final Rejection — §103
Jun 18, 2024: Interview Requested
Jun 25, 2024: Applicant Interview (Telephonic)
Jun 25, 2024: Examiner Interview Summary
Jun 28, 2024: Response Filed
Oct 19, 2024: Final Rejection — §103
Jan 08, 2025: Interview Requested
Jan 15, 2025: Applicant Interview (Telephonic)
Jan 16, 2025: Request for Continued Examination
Jan 16, 2025: Examiner Interview Summary
Jan 21, 2025: Response after Non-Final Action
Jan 26, 2025: Non-Final Rejection — §103
Mar 25, 2025: Interview Requested
Apr 03, 2025: Applicant Interview (Telephonic)
Apr 04, 2025: Response Filed
Apr 06, 2025: Examiner Interview Summary
Aug 26, 2025: Final Rejection — §103
Nov 14, 2025: Interview Requested
Nov 21, 2025: Applicant Interview (Telephonic)
Nov 21, 2025: Examiner Interview Summary
Nov 24, 2025: Request for Continued Examination
Dec 02, 2025: Response after Non-Final Action
Mar 21, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586151: Frame Rate Extrapolation (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579600: SEAMLESS VIDEO IN HETEROGENEOUS CORE INFORMATION HANDLING SYSTEM (granted Mar 17, 2026; 2y 5m to grant)
Patent 12571669: DETECTING AND GENERATING A RENDERING OF FILL LEVEL AND DISTRIBUTION OF MATERIAL IN RECEIVING VEHICLE(S) (granted Mar 10, 2026; 2y 5m to grant)
Patent 12555305: Systems And Methods For Generating And/Or Using 3-Dimensional Information With Camera Arrays (granted Feb 17, 2026; 2y 5m to grant)
Patent 12548233: 3D TEXTURING VIA A RENDERING LOSS (granted Feb 10, 2026; 2y 5m to grant)

Based on this examiner's 5 most recent grants. Study what changed in each application to get past this examiner.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 51%
With Interview: 70% (+19.5%)
Median Time to Grant: 4y 0m
PTA Risk: High

Based on 465 resolved cases by this examiner. Grant probability is derived from the career allow rate.
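The projection figures appear to follow from simple arithmetic on the career statistics above; treating the interview lift as additive is an assumption about this dashboard's methodology:

```python
# Assumed derivation of the headline projections (not a documented formula).
granted, resolved = 236, 465
base = granted / resolved            # 0.5075 -> reported as "51% Grant Probability"
with_interview = base + 0.195        # +19.5% interview lift -> 0.7025, shown as "70%"
print(f"base {base:.0%}, with interview {with_interview:.0%}")
```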
