Prosecution Insights
Last updated: April 19, 2026
Application No. 18/482,599

COLORIZATION OF IMAGES AND VECTOR GRAPHICS

Status: Non-Final OA (§103)
Filed: Oct 06, 2023
Examiner: COCHRAN, BRIANNA RENAE
Art Unit: 2615
Tech Center: 2600 (Communications)
Assignee: Adobe Inc.
OA Round: 3 (Non-Final)

Grant Probability: 40% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 2y 3m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 40% of resolved cases (2 granted / 5 resolved; -22.0% vs TC avg)
Interview Lift: -40.0% (minimal lift, based on resolved cases with interview)
Avg Prosecution: 2y 3m (typical timeline)
Career History: 34 total applications across all art units, 29 currently pending

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 62.7% (+22.7% vs TC avg)
§102: 13.3% (-26.7% vs TC avg)
§112: 20.9% (-19.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 5 resolved cases.
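
For readers checking the numbers, the cards above reduce to simple arithmetic over the stated counts. A minimal Python sketch follows; note that the 62.0% Tech Center baseline is back-derived from the -22.0% delta and is therefore an assumption, not a published figure.

```python
# Sketch of the arithmetic behind the examiner cards, using only counts
# stated in this report. The 62.0% Tech Center baseline is back-derived
# from the -22.0% delta and is an assumption, not a published figure.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(granted=2, resolved=5)   # 40.0 -> "40% Career Allow Rate"
tc_delta = career - 62.0                     # -22.0 -> "-22.0% vs TC avg"

with_interview = 0.0                         # "0% With Interview"
interview_lift = with_interview - career     # -40.0 -> "-40.0% Interview Lift"

print(f"allow rate {career:.1f}%, vs TC avg {tc_delta:+.1f}%, "
      f"interview lift {interview_lift:+.1f}%")
```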

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Response to Arguments

This is in response to applicant's amendment/response filed on 01/20/2026, which has been entered and made of record. Applicant's arguments regarding claim rejections under 35 U.S.C. 103 have been fully considered, but they are not persuasive.

Applicant argues that the Office Action rejects independent claim 1 under 35 U.S.C. § 103 as allegedly being unpatentable over Menges and Cao. Without conceding the merits of the rejection, Applicant submits that Menges and Cao, individually or in any permissible combination, cannot be relied on to teach or suggest: "obtaining an outline image and a color hint, wherein the outline image depicts an outline of a target shape, and wherein the color hint comprises a brush tool input that applies a color to a region of the outline image."

The Examiner respectfully disagrees. Menges teaches obtaining an outline image through multiple methods, such as uploading an outline image or drawing an outline using the pen tool (Para. 0033, Fig. 5, and Fig. 6). Menges also teaches a color hint: the color hint can be selecting colors from a color palette to color the outline image, describing the color and the portion of the outline to be colored in the text prompt, or utilizing an outline image that is already partially colored in (Para. 0073, 0094, and 0100). Users can create or modify images using Menges' interactive infinite canvas (Para. 0030). An outline image is an image that contains various shapes that make up the outline; any of the shapes, or the whole outline image, can be the target shape. Thus, Menges teaches obtaining an outline image and a color hint, wherein the outline image depicts an outline of a target shape, and wherein the color hint indicates a color and a region of the outline image for the color.

Cao teaches utilizing outline images (Fig. 4 or Fig. 5) in the form of line drawings, a colored reference image (Fig. 4 or Fig. 5), and a user-hint colorization method where users can use brushes to color specific regions of the line drawing (Section 2.1 Line Drawing Colorization, Page 2, Para. 1). Thus, Cao teaches obtaining an outline image and a color hint, wherein the outline image depicts an outline of a target shape, and wherein the color hint comprises a brush tool input that applies a color to a region of the outline image. The user-hint colorization method uses deep learning technology specifically to control the color; it can be encoded as input into a neural network with the line drawing to create colored line drawings based on the user hints (Section 2.1 Line Drawing Colorization, Page 2, Para. 1). Thus, Cao teaches that the control guidance represents the target shape having the color within the region, and that the control guidance is in an input space of the image generator.

The remaining arguments, directed to the amended claim language, are fully addressed in the prior art rejections set forth below.
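
The user-hint colorization characterized above follows a pattern that is easy to state concretely: a line drawing and sparse brush-hint strokes are stacked channel-wise and encoded into guidance features for a generator. The PyTorch sketch below is a generic illustration of that pattern only; the HintEncoder module, its layer sizes, and all shapes are assumptions for illustration, not taken from Menges, Cao, or the claims.

```python
# Generic sketch of sparse "user-hint" conditioning: the line drawing and
# brush-hint strokes are stacked channel-wise and encoded into guidance
# features. HintEncoder, its layer sizes, and all shapes are illustrative
# assumptions; nothing here is taken from Menges, Cao, or the claims.
import torch
import torch.nn as nn

class HintEncoder(nn.Module):
    def __init__(self, out_channels: int = 64):
        super().__init__()
        # 1 line-drawing channel + 3 RGB hint channels + 1 hint-mask channel
        self.net = nn.Sequential(
            nn.Conv2d(5, 32, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Conv2d(32, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, line, hint_rgb, hint_mask):
        # The mask is 1 where the user brushed a color and 0 elsewhere, so the
        # encoder can tell "user chose this color" apart from "no hint here".
        return self.net(torch.cat([line, hint_rgb, hint_mask], dim=1))

line = torch.rand(1, 1, 256, 256)        # outline image
hint_rgb = torch.zeros(1, 3, 256, 256)   # sparse brush strokes
hint_mask = torch.zeros(1, 1, 256, 256)
hint_rgb[:, :, 100:110, 100:110] = torch.tensor([0.9, 0.2, 0.2]).view(1, 3, 1, 1)
hint_mask[:, :, 100:110, 100:110] = 1.0  # one red brush stroke

guidance = HintEncoder()(line, hint_rgb, hint_mask)  # -> (1, 64, 256, 256)
```

The separate mask channel is the crux of sparse-hint conditioning: it lets the encoder distinguish an intentional dark brush stroke from the absence of any hint.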
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 7-9, 15-16, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Menges et al., U.S. Patent Application Publication 20250086865 A1 (hereinafter Menges), in view of the NPL "AnimeDiffusion: Anime Face Line Drawing Colorization via Diffusion Models" by Yu Cao, Xiangqiao Meng, P.Y. Mok, Xueting Liu, Tong-Yee Lee, and Ping Li (hereinafter Cao).

Regarding claim 1, Menges teaches a method comprising: obtaining an outline image (image uploader or sketch drawn on the infinite canvas with the pen tool, Para. 0033, Fig. 6) and a color hint (colors/shades from the color palette, text prompt, or user selection, Para. 0094 and 0100), wherein the outline image depicts an outline of a target shape (outline images are made up of various shapes, Fig. 5 and Fig. 16), and wherein the color hint comprises (color palette or colored outline image, Para. 0073, 0094, and 0100) an input that applies a color (a user could specify to color a specific portion of an outline image and to prioritize certain parts of the outline image shape, Fig. 5 and Fig. 16) to a region of the outline image (a user can choose to upload or draw an outline image; the color of the image can then be determined using the color palette selection interface, a color described in a text prompt, the color present in the sketch/uploaded image, or inpainting, Para. 0030, 0033, 0073, 0094, and 0100; as discussed in the Response to Arguments above, Menges thus teaches obtaining an outline image and a color hint, wherein the outline image depicts an outline of a target shape, and wherein the color hint indicates a color and a region of the outline image for the color); and generating (image-to-image generation, Para. 0046), using the image generator (Generative Design System 300, Para. 0041), a synthesized image (set of images, Para. 0046) by denoising (stable diffusion model, Para. 0032) a noise input based on the control guidance (outline image and color, Para. 0030), wherein the synthesized image depicts an object having the target shape (Fig. 5 and Fig. 16) based on the outline image and the color in the region indicated by the color hint (Para. 0094 and 0100). One of ordinary skill in the art would recognize that the stable diffusion model used in Menges would utilize a denoiser to denoise the input image. Menges teaches designing to promote creation and creativity through flexibility and user interaction (Menges, Para. 0029-0030).

However, Menges fails to explicitly teach: wherein the color hint comprises a brush tool input that applies a color to a region of the outline image; and encoding, using an outline encoder, the outline image and the color hint to obtain control guidance for an image generator, wherein the control guidance represents the target shape having the color within the region, and wherein the control guidance is in an input space of the image generator.

Menges and Cao are analogous to the claimed invention because both are in the same field of image generation, specifically utilizing diffusion models to perform image-to-image generation based on outline images. Cao teaches: wherein the color hint (user-hint colorization method) comprises a brush tool input that applies a color to a region of the outline image (line drawing, Fig. 4 or Fig. 5); Cao teaches utilizing outline images (Fig. 4 or Fig. 5) in the form of line drawings, a colored reference image (Fig. 4 or Fig. 5), and a user-hint colorization method where users use brushes to color specific regions of the line drawing (Section 2.1 Line Drawing Colorization, Page 2, Para. 1). Cao also teaches: encoding, using an outline encoder (Fig. 2), the outline image (line drawing, Fig. 4 or Fig. 5) and the color hint (colored reference image, Fig. 4 or Fig. 5) to obtain control guidance (line drawing and colored reference image, Fig. 4 and Fig. 5) for an image generator, wherein the control guidance represents the target shape (line drawing shapes, Fig. 2, Fig. 4, and Fig. 5) having the color within the region (colored line drawing or colored reference image), and wherein the control guidance is in an input space of the image generator (Section 4 AnimeDiffusion); the user-hint colorization method uses deep learning technology specifically to control the color and can be encoded as input into a neural network with the line drawing (Section 2.1 Line Drawing Colorization, Page 2, Para. 1); and denoising a noise input (Fig. 2; Section 1 Introduction, Page 2, Col. 1).

Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Menges' Generative Design System to include Cao's outline encoder and user-hint colorization method, since doing so would provide the benefit of enhancing the efficiency of the diffusion model performing the image-to-image generation and increasing the flexibility of the model by allowing users to control the color. Menges utilizes a generic stable diffusion model (Menges, Para. 0030) for a wide range of tasks, while Cao's diffusion model focuses specifically on generating images from outline images and color references (Cao, Abstract).

Regarding claim 2, Menges teaches the method of claim 1, wherein generating the synthesized image (image-to-image generation, Para. 0046) comprises: providing the control guidance (outline image and color, Para. 0046 and 0094) as an input to a decoder layer of the image generator (a diffusion model can be used to generate images in Menges' invention, Para. 0046; the attributes a user wants can be expressed through text, color, shape, and other aspects, and these attributes would be passed to the encoder and then respectively decoded).

Regarding claim 3, Menges teaches the method of claim 1, further comprising: obtaining a text prompt (Text Prompt Input 102, Para. 0032); and encoding the text prompt to obtain a text encoding, wherein the synthesized image is generated based on the text encoding (Fig. 1).

Regarding claim 7, Menges teaches the method of claim 1, wherein: a single image includes the outline image (user's sketch, Para. 0043) and the color hint (color palette selection, text, or color in the image, Para. 0094).

Regarding claim 8, Menges teaches the method of claim 1, wherein: the color hint is included in a color hint image (First Image Set 708) that is separate from the outline image (Sketch 704, Para. 0070, Fig. 7). The user can create the first set of images as outline images, with the sketch being a colored version of the outline images; the user could also create the first set of images as outline images and then use the color palette tool as the color hint to generate the colored image (Para. 0096).

Regarding claim 9, Menges teaches the method of claim 1, further comprising: generating a plurality of synthesized images (set of seed images) based on the outline image and a plurality of different random seeds, respectively (Para. 0104 and 0116; the user can select the set of seeds they want, and the generated images in the collage can be randomly selected, Para. 0111).

Regarding claim 15, Menges teaches an apparatus comprising: at least one processor (Para. 0040); at least one memory storing instructions executable by the at least one processor (Para. 0040); the apparatus further comprising components (Storage, Para. 0040) trained to encode input data to obtain control guidance (outline image and color, Para. 0030), wherein the input data includes an outline image (image uploader or sketch drawn on the infinite canvas with the pen tool, Para. 0033, Fig. 6) and a color hint (colors/shades from the color palette, text prompt, or user selection, Para. 0094 and 0100), wherein the outline image depicts an outline of a target shape (Fig. 5 and Fig. 16), and wherein the color hint comprises (Para. 0073, 0094, and 0100) an input that applies a color to a region of the outline image (a user can choose to upload or draw an outline image, and the color can then be determined using the color palette selection interface, a text prompt, the color present in the sketch/uploaded image, or inpainting; Para. 0030, 0033, 0073, 0094, and 0100).
As with claim 1, any of the shapes in the outline image, or the whole outline image, can be the target shape, so Menges teaches the claimed outline image and a color hint indicating a color and a region of the outline image. Menges further teaches an image generator (Generative Design System 300, Para. 0041), including parameters stored in the at least one memory (Storage, Para. 0040) and trained to generate a synthesized image (set of images, Para. 0046) by denoising (stable diffusion model, Para. 0032) a noise input based on the control guidance (outline image and color, Para. 0030), wherein the control guidance represents the target shape (shape of the outline) having the color within the region (a user could specify to color a specific portion of an outline image and to prioritize certain parts of the outline image shape, Fig. 5 and Fig. 16), wherein the control guidance is in an input space of the image generator (Generative Design System 300, Para. 0041), and wherein the synthesized image depicts an object having the target shape (Fig. 5, Fig. 6, and Fig. 16) based on the outline image and the color in the region indicated by the color hint (Para. 0094 and 0100). One of ordinary skill in the art would recognize that the stable diffusion model used in Menges would utilize a denoiser to denoise the input image.

However, Menges fails to teach: an outline encoder including parameters stored in the at least one memory and trained to encode input data to obtain control guidance, wherein the input data includes an outline image and a color hint, wherein the color hint comprises a brush tool input that applies a color to a region of the outline image.

Cao teaches: an outline encoder (Fig. 2) including parameters (Section 1 Introduction, Page 2, Col. 1) stored in the at least one memory (Fig. 2; Section 5 Experiments) and trained to encode input data to obtain control guidance (line drawing and colored reference image, Fig. 4 and Fig. 5), wherein the input data includes an outline image (line drawing, Fig. 4 or Fig. 5) and a color hint (colored reference image, Fig. 4 or Fig. 5), and wherein the color hint (user-hint colorization method) comprises a brush tool input that applies a color to a region of the outline image (line drawing, Fig. 4 or Fig. 5); Cao teaches outline images (Fig. 4 or Fig. 5) in the form of line drawings, a colored reference image (Fig. 4 or Fig. 5), and a user-hint colorization method where users use brushes to color specific regions of the line drawing (Section 2.1 Line Drawing Colorization, Page 2, Para. 1). Cao further teaches that the control guidance represents the target shape (line drawing shapes, Fig. 2, Fig. 4, and Fig. 5) having the color within the region (colored line drawing or colored reference image), and that the control guidance (line drawing and colored reference image, Fig. 4 and Fig. 5) is in an input space of the image generator (Section 4 AnimeDiffusion); the user-hint colorization method uses deep learning technology specifically to control the color and can be encoded as input into a neural network with the line drawing (Section 2.1 Line Drawing Colorization, Page 2, Para. 1); and denoising a noise input (Fig. 2; Section 1 Introduction, Page 2, Col. 1).

Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Menges' Generative Design System to include Cao's outline encoder and user-hint colorization method, since doing so would provide the benefit of enhancing the efficiency of the diffusion model performing the image-to-image generation and increasing the flexibility of the model by allowing users to control the color. Menges utilizes a generic stable diffusion model (Menges, Para. 0030) for a wide range of tasks, while Cao's diffusion model focuses specifically on generating images from outline images and color references (Cao, Abstract).

Regarding claim 16, Menges fails to teach the apparatus of claim 15, wherein: the image generator comprises a U-Net architecture. However, Cao teaches the apparatus of claim 15, wherein the image generator (Section 1 Introduction, Page 2, Col. 1) comprises a U-Net architecture (Section 4 AnimeDiffusion). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Menges' Generative Design System to incorporate Cao's U-Net architecture. In one embodiment, Menges uses a diffusion model to generate images, and a U-Net architecture is a common architecture for a diffusion model. Since U-Net architectures are designed to maximize available data, choosing one for a diffusion model that requires large amounts of training data increases efficiency.

Regarding claim 18, Menges teaches the apparatus of claim 15 (Generative Design System 300, Para. 0041; the image generator in Menges can be used to generate a first set of images, such as outline images, and a second set of images can then be generated based on the first set, Para. 0043; one could treat the first-set generation as an outline encoder and the second-set generation as the image generator). However, Menges does not specifically teach an outline encoder. Cao teaches an outline encoder (Fig. 2; Section 4 AnimeDiffusion). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Menges' Generative Design System to include Cao's outline encoder, since doing so would provide the benefit of enhancing the efficiency of the diffusion model performing the image-to-image generation. Menges utilizes a generic stable diffusion model (Menges, Para. 0030) for a wide range of tasks, while Cao's diffusion model focuses specifically on generating images from outline images and color references (Cao, Abstract).

Regarding claim 20, Menges teaches the apparatus of claim 15, further comprising: a text encoder (Generative Design System 300) configured to encode a text prompt (Text Prompt Input 102) to obtain a text encoding (Para. 0032, Fig. 1 and Fig. 5).

Claims 4-6 and 10-14 are rejected under 35 U.S.C. 103 as being unpatentable over Menges in view of Cao, and further in view of Aggarwal et al., U.S. Patent Application Publication 20240404144 A1 (hereinafter Aggarwal).
Regarding claim 4, Menges and Cao fail to teach the method of claim 1, wherein generating the synthesized image comprises: performing a reverse diffusion process. Menges, Cao, and Aggarwal are analogous to the claimed invention because all of them are in the same field of image generation, specifically utilizing diffusion models to perform image generation based on outline images. Aggarwal teaches the method of claim 1, wherein generating the synthesized image (Output Image 355) comprises: performing a reverse diffusion process (Reverse Diffusion Process 340, Para. 0065). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Menges' diffusion process, as altered by Cao, to incorporate Aggarwal's reverse diffusion process, since doing so would provide the benefit of reducing noise during the diffusion process (Aggarwal, Para. 0027).

Regarding claim 5, Menges and Cao fail to teach the method of claim 4, wherein processing the outline image comprises: obtaining a noisy input image for the image generator, wherein the control guidance is based on the noisy input image. However, Aggarwal teaches the method of claim 4, wherein processing the outline image (Original Image 305, Para. 0065) comprises: obtaining a noisy input image (Noisy Features 335) for the image generator, wherein the control guidance (Guidance Features 370) is based on the noisy input image (noisy features, Para. 0069). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Menges' image generation, as altered by Cao, to incorporate Aggarwal's noisy input image. Since Menges uses a diffusion model in one embodiment, one would expect noise to be present in image generation; hence, one would want to reduce noise, and that is accomplished by incorporating Aggarwal's reverse diffusion process and noisy images (Aggarwal, Para. 0027).

Regarding claim 6, Menges and Cao fail to teach the method of claim 1, wherein generating the synthesized image comprises: identifying a diffusion timestep; and encoding the diffusion timestep to obtain a timestep encoding, wherein the synthesized image is generated based on the timestep encoding. However, Aggarwal teaches identifying a diffusion timestep (Para. 0027) and encoding the diffusion timestep to obtain a timestep encoding, wherein the synthesized image is generated based on the timestep encoding (CLIP Image Embedding, Para. 0027). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Menges' image generation, as altered by Cao, to incorporate Aggarwal's diffusion timestep. Since Menges' invention can incorporate a diffusion model, one of ordinary skill would know that a diffusion timestep is a core component of the model; a timestep is used to determine and track how many times the process has been repeated to reduce noise (Aggarwal, Para. 0027).

Regarding claim 10, Menges teaches an outline image (image uploader or sketch drawn on the infinite canvas with the pen tool, Para. 0033, Fig. 6), a color hint (colors/shades from the color palette, text prompt, or user selection, Para. 0094 and 0100), and an image generator (image-to-image generation, Para. 0046).
However, Menges fails to teach a method comprising: obtaining training data including a training outline image, a training color hint, and a ground-truth colored image corresponding to the training outline image and the training color hint, wherein the training outline image depicts an outline of a target shape, and wherein the training color hint comprises a brush tool input that applies a color to a region of the training outline image; initializing an outline encoder using parameters of an image generator; and training the outline encoder, using the training outline image and the training color hint, to generate control guidance for the image generator for generating colored images, wherein the control guidance represents the target shape having the color within the region, wherein the control guidance is in an input space of the image generator, and wherein the colored images depict an object having the target shape based on the training outline image and the color indicated in the region indicated by the training color hint.

Cao teaches: initializing an outline encoder (Fig. 2) using parameters (Section 1 Introduction, Page 2, Col. 1) of an image generator (Section 4 AnimeDiffusion), wherein the color hint (user-hint colorization method) comprises a brush tool input that applies a color to a region of the outline image (line drawing, Fig. 4 or Fig. 5); Cao teaches outline images (Fig. 4 or Fig. 5) in the form of line drawings, a colored reference image (Fig. 4 or Fig. 5), and a user-hint colorization method where users use brushes to color specific regions of the line drawing (Section 2.1 Line Drawing Colorization, Page 2, Para. 1). Cao further teaches that the control guidance represents the target shape (line drawing shapes, Fig. 2, Fig. 4, and Fig. 5) having the color within the region (colored line drawing or colored reference image), and that the control guidance (line drawing and colored reference image, Fig. 4 and Fig. 5) is in an input space of the image generator (Section 4 AnimeDiffusion); the user-hint colorization method uses deep learning technology specifically to control the color and can be encoded as input into a neural network with the line drawing (Section 2.1 Line Drawing Colorization, Page 2, Para. 1).

Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Menges' Generative Design System to include Cao's outline encoder and user-hint colorization method, since doing so would provide the benefit of enhancing the efficiency of the diffusion model performing the image-to-image generation and increasing the flexibility of the model by allowing users to control the color. Menges utilizes a generic stable diffusion model (Menges, Para. 0030) for a wide range of tasks, while Cao's diffusion model focuses specifically on generating images from outline images and color references (Cao, Abstract).
However, Menges and Cao fail to teach the remaining limitations of claim 10: obtaining the training data, including the ground-truth colored image, and training the outline encoder to generate control guidance as recited above. Aggarwal teaches a method comprising: obtaining training data (Training Component 215) including a training outline image (image embedding), a training color hint (image embedding or color embedding), and a ground-truth colored image (ground-truth image embedding) corresponding to the training outline image and the training color hint (predicted image embedding, Para. 0052 and 0127), wherein the training outline image depicts an outline of a target shape (image embedding), and wherein the training color hint (image embedding, color embedding, or text embedding) comprises a brush tool input that applies a color to a region of the training outline image (predicted image embedding, Para. 0052 and 0127); and training the outline encoder (Training Component 215), using the training outline image (image embedding) and the training color hint (image embedding or color embedding), to generate control guidance for the image generator for generating colored images (predicted image embedding, Para. 0052 and 0127), wherein the control guidance represents the target shape having the color within the region, wherein the control guidance is in an input space of the image generator, and wherein the colored images depict an object having the target shape (image embedding) based on the training outline image and the color indicated in the region indicated by the training color hint (image embedding, color embedding, or text embedding).

Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Menges' Generative Design System image generation and Cao's outline encoder with user-hint colorization method to incorporate Aggarwal's training of the image generation model components and outline encoder. In one embodiment, Menges uses a diffusion model to generate images, and machine learning models are trained to ensure the outcomes of the models are correct. Hence, if one has a machine learning model using an outline image, a color hint, and an outline encoder, those components would be trained against a ground-truth image to ensure the resulting image is correct.

Regarding claim 11, Menges fails to teach the method of claim 10, wherein: the outline encoder is trained using a fixed copy of the image generator. However, Cao teaches an outline encoder (Fig. 2). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Menges' Generative Design System to include Cao's outline encoder, since doing so would provide the benefit of enhancing the efficiency of the diffusion model performing the image-to-image generation. Menges utilizes a generic stable diffusion model (Menges, Para. 0030) for a wide range of tasks, while Cao's diffusion model focuses specifically on generating images from outline images and color references (Cao, Abstract). Menges and Cao still fail to teach that the outline encoder is trained using a fixed copy of the image generator. However, Aggarwal teaches the method of claim 10, wherein: the outline encoder (image embedding) is trained using a fixed copy of the image generator (same image embedding context, Para. 0027 and 0149). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Menges' Generative Design System, as altered by Cao's outline encoder, to incorporate Aggarwal's training of the outline encoder using a fixed copy of the image generator, since training the outline encoder using the same copy of the image generator ensures the outline encoder is trained consistently and efficiently.

Regarding claim 12, Menges teaches an outline image (Para. 0033, Fig. 6) and a color hint (Para. 0094 and 0100). However, Menges and Cao fail to explicitly teach the method of claim 10, wherein: the training outline image and the training color hint are generated based on the ground-truth image. Aggarwal teaches the method of claim 10, wherein: the training outline image (image embedding) and the training color hint (image embedding) are generated based on the ground-truth image (ground-truth image embedding, Para. 0052 and 0127). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Menges' Generative Design System, as altered by Cao, to incorporate Aggarwal's training of the outline and color hint. In one embodiment, Menges uses a diffusion model to generate images, and machine learning models are trained to ensure the outcomes of the models are correct. Hence, if one has a machine learning model using an outline and a color hint, these components would be trained with a ground-truth image to ensure the resulting image is correct.

Regarding claim 13, Menges fails to teach the method of claim 10, wherein the training comprises: providing the training outline image and the training color hint to the outline encoder; providing an output of the outline encoder to the image generator; and comparing an output of the image generator to the ground-truth colored image. Cao teaches an outline encoder (Fig. 2). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Menges' Generative Design System to include Cao's outline encoder, since doing so would provide the benefit of enhancing the efficiency of the diffusion model performing the image-to-image generation; Menges utilizes a generic stable diffusion model (Menges, Para. 0030) for a wide range of tasks, while Cao's diffusion model focuses specifically on generating images from outline images and color references (Cao, Abstract). However, Menges and Cao fail to teach the claimed training steps. Aggarwal teaches the method of claim 10, wherein the training (Training Component 215) comprises: providing the training outline image (image embedding) and the training color hint (image embedding) to the outline encoder (Para. 0052 and 0127); providing an output (image embedding) of the outline encoder to the image generator (Para. 0021 and 0052); and comparing an output (predicted image embedding) of the image generator to the ground-truth colored image (ground-truth image embedding, Para. 0052 and 0127). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Menges' Generative Design System and Cao's outline encoder to incorporate Aggarwal's training of the image generation model components and outline encoder. In one embodiment, Menges uses a diffusion model to generate images, and machine learning models are trained to ensure the outcomes of the models are correct. Hence, if one has a machine learning model using an outline image, a color hint, and an outline encoder, these components would be trained against a ground-truth image to ensure the resulting image is correct.

Regarding claim 14, Menges teaches an embodiment using a diffusion model (Para. 0032). However, Menges fails to teach the method of claim 10, wherein: training the outline encoder comprises a diffusion-based training process. Cao teaches an outline encoder (Fig. 2). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Menges' Generative Design System to include Cao's outline encoder, since doing so would provide the benefit of enhancing the efficiency of the diffusion model performing the image-to-image generation; Menges utilizes a generic stable diffusion model (Menges, Para. 0030) for a wide range of tasks, while Cao's diffusion model focuses specifically on generating images from outline images and color references (Cao, Abstract). However, Cao fails to teach that training the outline encoder comprises a diffusion-based training process. Aggarwal teaches the method of claim 10, wherein: training (Training Component 215) the outline encoder (image embedding) comprises a diffusion-based training process (Para. 0051). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Menges' diffusion model, as altered by Cao's outline encoder, to incorporate Aggarwal's training of the outline encoder using a diffusion-based training process. In one embodiment, Menges uses a diffusion model to generate images, and machine learning models are trained to ensure the outcomes of the models are correct. Hence, if one has a diffusion model using an outline encoder, the outline encoder would be trained using diffusion-based training to ensure the resulting image created from the outline encoder is correct.
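
Claims 4-6 above turn on a reverse diffusion process and a timestep encoding. As a generic, hedged sketch of what those terms conventionally mean (a DDPM-style sampling loop with a sinusoidal timestep embedding), not a rendering of Aggarwal's model:

```python
# DDPM-style reverse diffusion with a sinusoidal timestep embedding, as a
# generic illustration of the terms in claims 4-6. The denoiser is a
# placeholder (a real U-Net would consume the timestep embedding); the
# schedule and shapes are illustrative assumptions, not Aggarwal's model.
import math
import torch
import torch.nn as nn

def timestep_embedding(t: torch.Tensor, dim: int = 128) -> torch.Tensor:
    """Encode integer timesteps as standard sinusoidal features."""
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half) / half)
    args = t.float()[:, None] * freqs[None, :]
    return torch.cat([torch.sin(args), torch.cos(args)], dim=-1)

denoiser = nn.Identity()                      # stand-in epsilon-predictor
alphas = torch.linspace(0.9999, 0.98, 1000)   # toy noise schedule
alpha_bar = torch.cumprod(alphas, dim=0)

x = torch.randn(1, 3, 64, 64)                 # start from pure noise
for t in reversed(range(1000)):
    emb = timestep_embedding(torch.tensor([t]))  # the "timestep encoding"
    eps = denoiser(x)                            # real models use (x, emb)
    a_t, ab_t = alphas[t], alpha_bar[t]
    x = (x - (1 - a_t) / torch.sqrt(1 - ab_t) * eps) / torch.sqrt(a_t)
    if t > 0:                                    # add noise except at t = 0
        x = x + torch.sqrt(1 - a_t) * torch.randn_like(x)
```

The timestep embedding is what tells the denoiser how far along the loop is, which is why the process "tracks how many times it has been repeated to reduce noise."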
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Menges in view of Cao, and further in view of Babanin et al., U.S. Patent Application Publication 20250054210 A1 (hereinafter Babanin).

Regarding claim 17, Menges teaches an embodiment using a diffusion model (Para. 0032). However, Menges fails to teach the apparatus of claim 15, wherein: the outline encoder comprises a ControlNet architecture. Cao teaches an outline encoder (Fig. 2). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Menges' Generative Design System to include Cao's outline encoder, since doing so would provide the benefit of enhancing the efficiency of the diffusion model performing the image-to-image generation. Menges utilizes a generic stable diffusion model (Menges, Para. 0030) for a wide range of tasks, while Cao's diffusion model focuses specifically on generating images from outline images and color references (Cao, Abstract). However, Cao also fails to teach the apparatus of claim 15, wherein: the outline encoder comprises a ControlNet architecture. Menges, Cao, and Babanin are analogous to the claimed invention because all of them are in the same field of image generation utilizing diffusion models. Babanin teaches the apparatus of claim 15, wherein: the outline encoder (generate synthesized images) comprises a ControlNet architecture (Para. 0023; Babanin's image generator can be used to create outline images and uses a stable diffusion model). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Menges' Generative Design System, as altered by Cao's outline encoder, to incorporate a ControlNet architecture. ControlNet architectures are often used with diffusion models and provide more flexibility over the model, and an important aspect of Menges' invention is to promote creation and creativity through flexibility and user interaction (Menges, Para. 0029-0030); it therefore would have been obvious to incorporate a ControlNet architecture.

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Menges in view of Cao, and further in view of the NPL "Accuracy and fidelity comparison of Luna and DALL-E 2 diffusion-based image generation systems" by Michael Cahyadi, Muhammad Rafi, William Shan, Henry Lucky, and Jurike V. Moniaga (hereinafter Cahyadi).

Regarding claim 19, Menges fails to teach the apparatus of claim 15, wherein: the outline encoder comprises an image adapter network. Cao teaches an outline encoder (Fig. 2). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Menges' Generative Design System to include Cao's outline encoder, since doing so would provide the benefit of enhancing the efficiency of the diffusion model performing the image-to-image generation. Menges utilizes a generic stable diffusion model (Menges, Para. 0030) for a wide range of tasks, while Cao's diffusion model focuses specifically on generating images from outline images and color references (Cao, Abstract). However, Cao fails to teach the apparatus of claim 15, wherein: the outline encoder comprises an image adapter network. Menges, Cao, and Cahyadi are analogous to the claimed invention because all of them are in the same field of image generation using diffusion models. Cahyadi teaches the apparatus of claim 15, wherein: the outline encoder (DALL-E 2) comprises an image adapter network (ViT, or Vision Transformer, Col. 2, Para. 4). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Menges' Generative Design System, as altered by Cao's outline encoder, to incorporate an image adapter network. DALL-E 2 is a well-known machine learning image generator that uses a ViT (Cahyadi, Col. 2, Para. 4) and can be used to generate outline images. ViTs are an alternative to CNNs when it comes to training models, and choosing between them depends on the requirements of the task one wishes to accomplish. In the case of Menges' invention, one could use either; however, ViTs are often used for larger datasets, such as when using a diffusion model. Hence, one would benefit from using an image adapter network (ViT) for the outline encoder to increase performance.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: NPL "User-Guided Deep Anime Line Art Colorization with Conditional Adversarial Networks" by Yuanzheng Ci, Xinzhu Ma, Zhihui Wang, and Zhongxuan Luo, Dalian University of Technology (hereinafter Ci). Ci teaches utilizing a cGAN to generate images from outlines that can be colored by a user, where the user's colored portions are used to generate a colored outline image (Section 1 Introduction).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIANNA R COCHRAN, whose telephone number is (571) 272-4671. The examiner can normally be reached Mon-Fri, 7:30am-5:00pm. Examiner interviews are available via telephone, in person, and by video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRIANNA RENAE COCHRAN/
Examiner, Art Unit 2615

/ALICIA M HARRINGTON/
Supervisory Patent Examiner, Art Unit 2615
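
Claims 11 and 17 in the action above reference training an outline encoder against a fixed copy of the image generator and a ControlNet architecture. The sketch below shows the ControlNet-style arrangement as it is conventionally practiced, with stand-in modules; none of this code comes from the cited references, and the module names are assumptions for illustration.

```python
# Minimal sketch of the ControlNet-style "fixed copy" arrangement named in
# claims 11 and 17: the outline encoder is initialized as a copy of a
# generator block, the generator stays frozen, and the encoder's output
# enters through a zero-initialized projection. All modules are stand-ins;
# none of this code comes from the cited references.
import copy
import torch
import torch.nn as nn

generator_block = nn.Conv2d(4, 64, 3, padding=1)   # stand-in generator layer
outline_encoder = copy.deepcopy(generator_block)   # init from generator weights
zero_proj = nn.Conv2d(64, 64, 1)
nn.init.zeros_(zero_proj.weight)                   # zero-init: no effect at step 0
nn.init.zeros_(zero_proj.bias)

for p in generator_block.parameters():             # "fixed copy": frozen generator
    p.requires_grad_(False)

cond = torch.randn(1, 4, 32, 32)                   # encoded outline + color hint
x = torch.randn(1, 4, 32, 32)                      # noisy generator input
guidance = zero_proj(outline_encoder(cond))        # control guidance
features = generator_block(x) + guidance           # injected into the frozen path
```

Because the projection starts at zero, training begins exactly at the frozen generator's behavior, which is the usual rationale for training against a fixed copy.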

Prosecution Timeline

Oct 06, 2023: Application Filed
Jun 11, 2025: Non-Final Rejection (§103)
Jul 30, 2025: Interview Requested
Aug 07, 2025: Applicant Interview (Telephonic)
Aug 07, 2025: Examiner Interview Summary
Sep 02, 2025: Response Filed
Nov 14, 2025: Final Rejection (§103)
Jan 07, 2026: Interview Requested
Jan 20, 2026: Request for Continued Examination
Jan 27, 2026: Response after Non-Final Action
Mar 05, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12541922: METHOD FOR GENERATING A MODEL FOR REPRESENTING RELIEF BY PHOTOGRAMMETRY (granted Feb 03, 2026; 2y 5m to grant)
Patent 12482144: METHOD AND APPARATUS OF ENCODING/DECODING POINT CLOUD GEOMETRY DATA USING AZIMUTHAL CODING MODE (granted Nov 25, 2025; 2y 5m to grant)
Patent 12417567: METHOD FOR GENERATING SIGNED DISTANCE FIELD IMAGE, METHOD FOR GENERATING TEXT EFFECT IMAGE, DEVICE AND MEDIUM (granted Sep 16, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 40%
With Interview: 0% (-40.0%)
Median Time to Grant: 2y 3m
PTA Risk: High

Based on 5 resolved cases by this examiner. Grant probability derived from career allow rate.
