Prosecution Insights
Last updated: April 19, 2026
Application No. 18/511,899

Automated Digital Tool Identification from a Rasterized Image

Status: Non-Final OA (§103)
Filed: Nov 16, 2023
Examiner: TSWEI, YU-JANG
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Adobe Inc.
OA Round: 1 (Non-Final)

Grant probability: 84% (Favorable); 99% with an examiner interview
Expected OA rounds: 1-2
Estimated time to grant: 2y 5m

Examiner Intelligence

Career allowance rate: 84% (376 granted / 447 resolved), +22.1% vs TC average (above average)
Interview lift: +17.0% on resolved cases with an interview
Typical timeline: 2y 5m average prosecution; 44 applications currently pending
Career history: 491 total applications across all art units
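The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch (variable names are ours, not the dashboard's) reproduces the career allowance rate from the raw counts and the Tech Center baseline implied by the reported delta:

```python
# Reproduce the examiner's career allowance rate from the counts shown
# above: 376 granted out of 447 resolved applications.
granted = 376
resolved = 447

allow_rate = granted / resolved * 100  # percent
print(f"Career allowance rate: {allow_rate:.1f}%")  # 84.1%, displayed as 84%

# The dashboard reports +22.1 points vs the Tech Center average, which
# implies an estimated TC 2600 baseline of about 62%.
tc_average = allow_rate - 22.1
print(f"Implied TC average: {tc_average:.1f}%")
```

The displayed "84%" is simply the one-decimal figure rounded to a whole percentage.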

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 66.4% (+26.4% vs TC avg)
§102: 5.6% (-34.4% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)

Deltas are measured against an estimated Tech Center average (the chart's black line); based on career data from 447 resolved cases.
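One way to sanity-check the statute figures above: subtracting each reported delta from its statute-specific rate should recover the Tech Center baseline. A small sketch (names are ours) confirms the four rate/delta pairs are mutually consistent:

```python
# Statute-specific rates and reported deltas vs the TC average,
# copied from the section above: statute -> (rate %, delta %).
stats = {
    "101": (5.5, -34.5),
    "103": (66.4, +26.4),
    "102": (5.6, -34.4),
    "112": (7.1, -32.9),
}

# rate - delta should yield the same TC-average estimate for every statute.
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(baselines)  # every statute implies the same 40.0% TC baseline
```

All four pairs imply a single 40.0% Tech Center baseline, consistent with the single "black line" the chart note describes.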

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-12, 15-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tremblay et al. (US 20190251397 A1, hereinafter Tremblay) in view of Gupta et al. (US 20190158112 A1, hereinafter Gupta).

Regarding Claim 1, Tremblay teaches a system comprising: a memory component; and a processing device coupled to the memory component, the processing device to perform operations including: (Tremblay, Paragraph [0021], "The labeled training data generation system 200 includes a graphics processing unit (GPU) 110, the task-specific training data computation unit 115, and an input image generator 220."; Fig. 3, Element 304 Memory; [0020], "the generated training data may be used to train neural networks").
receiving a labeled [[ vector ]] image that specifies use of a digital tool; (Tremblay, Paragraph [0020], "Training deep neural networks requires a large amount of labeled training data" [0030], "one or more of the 3D synthetic objects are selected by the labeled training data generation system 100" [0034], "At step 135, the GPU 110 renders a 3D object of interest to produce a rendered image of the object of interest ... The 3D object of interest is rendered according to the rendering parameters <read on digital tool>").

identifying one or more parameter configurations for the digital tool identified in the labeled [[vector]] image (Tremblay, Paragraph [0022], "The rendering parameters may specify a position and/or orientation of the object of interest in a 3D scene, a position and/or orientation of a virtual camera, one or more texture maps, one or more lights including color, type, intensity, position and/or orientation, and the like.");

generating a trained segmentation network and a trained classification network using the labeled [[ vector ]] image and the one or more parameter configurations; (Tremblay, Paragraph [0004], "The generated training data may be used to train neural networks for object detection and segmentation (labelling) tasks. In an embodiment, the generated training data includes synthetic input images generated by rendering three-dimensional (3D) objects" [0024], "In an embodiment, the task is segmentation and the training data computation unit 115 determines an object identifier ... and computes the task-specific training data as a segmentation map corresponding to the input image ... where each pixel ... is colored according to the object identifier ..."; [0127], "a deep learning or neural learning system needs to be trained in object recognition and classification for it get smarter and more efficient at identifying basic objects").
outputting the trained segmentation network [[ and the trained classification network, ]] the trained segmentation network [[ and the trained classification network, ]] configured to automatically identify the digital tool used to achieve a visual appearance in an input image. (Tremblay, Paragraph [0020], "A domain randomization technique to generate training data that is automatically labeled is described. The generated training data may be used to train neural networks for object detection and segmentation tasks" [0032], "During supervised training of a neural network model, the task-specific training data is ground truth labels ... compared with an output generated by the neural network model when the input image is processed by the neural network model."). But Tremblay does not explicitly disclose receiving [[ a labeled ]] vector [[ image that specifies use of a digital tool]], generating a trained classification network, [[ outputting the trained segmentation network and ]] the trained classification network configured to automatically identify the digital tool used to achieve a visual appearance in an input image. However, Gupta teaches a memory component; and a processing device coupled to the memory component (Gupta, Paragraph [0091], "includes a processor 802 communicatively coupled to one or more memory devices 804."; Paragraph [0092], "The memory device 804 includes any suitable non-transitory computer-readable medium for storing data, program code, or both."). generating a trained classification network ... using the labeled vector image…and outputting the trained ... classification network (Gupta, Paragraph [0020], "a deep-learning network or other neural network model classifies an input image or other raster graphic into a set of classes ... 
In various embodiments, using a trained neural network to select multiple vectorization operations specific to the characteristics of the input graphic can provide improved raster-to-vector conversions without requiring manual input to modify the vectorization process" [0035], "executes the customization training module 228 to generate, train, or otherwise develop a customization-identification network 208 <read on classification network>"; Paragraph [0037], "outputs the customization-identification network 208"). configured to automatically identify the digital tool used to achieve a visual appearance in an input image (Gupta, Paragraph [0037], "allows a graphic manipulation application 206 to use the visual characteristics <read on digital tool> of an input raster graphic 204"; [0011], "FIG. 3 depicts an example of a process for using a customization-identification network to automatically select and apply one or more custom vectorization operations").

Gupta and Tremblay are analogous since both of them are dealing with generating labeled training data and training neural network models for image-related processing workflows. Tremblay provided a way of generating task-specific training data (including segmentation maps) paired with input images for training a neural network model. Gupta provided a way of implementing the training and deployment of a trained network in a computing system (processor + memory) and outputting the trained network for downstream use. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to incorporate processor/memory implementation and trained-network output taught by Gupta into modified invention of Tremblay such that the trained model(s) are stored/output for later use by an image processing application.

Regarding Claim 2, the combination of Tremblay and Gupta teaches the invention in Claim 1.
The combination further teaches wherein the trained segmentation network and the trained classification network are further configured to automatically identify a parameter configuration for the digital tool used to achieve the visual appearance (Tremblay, Paragraph [0020], "A domain randomization technique to generate training data that is automatically labeled is described. The generated training data may be used to train neural networks for object detection and segmentation tasks" [0032], "During supervised training of a neural network model, the task specific training data is ground truth labels ... compared with an output generated by the neural network model when the input image is processed by the neural network model"). Tremblay does not explicitly disclose but Gupta teaches configured to automatically identify a parameter configuration for the digital tool used to achieve the visual appearance (Gupta, Paragraph [0037], "allows a graphic manipulation application 206 to use the visual characteristics <read on digital tool> of an input raster graphic 204"; [0011], "FIG. 3 depicts an example of a process for using a customization-identification network to automatically select and apply one or more custom vectorization operations"). As explained in rejection of Claim 1, the obviousness for combining of outputting the trained network for downstream use of Gupta into Tremblay is provided above.

Regarding Claim 3, the combination of Tremblay and Gupta teaches the invention of generating the trained segmentation network and the trained classification network in Claim 1. The combination further teaches generating a rasterized image (Tremblay, Paragraph [0034], "At step 135, the GPU 110 renders a 3D object of interest to produce a rendered image of the object of interest; [0114], “the graphics processing pipeline 600 comprises a pipeline architecture that includes a number of stages.
The stages include…a rasterization stage 660; [0112], “After the processed vertex data is rasterized <read on rasterized image>"). and generating a training sample mask for the rasterized image (Tremblay, Paragraph [0035], "for segmentation, the task-specific training data is the rendered objects of interest having each pixel within a rendered object of interest replaced with an object identifier <read on training sample mask>"). However, Tremblay does not explicitly disclose: [[from the labeled vector image]]. But Gupta teaches generating a rasterized image from the labeled vector image (Gupta, Paragraph [0051], "At block 312, the process 300 involves causing a computing device (e.g., a user device 227) to display or otherwise provide an output vector graphic 226 <read on labeled vector image>"). As explained in rejection of Claim 1, the obviousness for combining Gupta's output vector graphic into Tremblay's rasterization/training pipeline is provided above.

Regarding Claim 4, the combination of Tremblay and Gupta teaches the invention in Claim 3. The combination further teaches wherein the processing device is configured to generate the rasterized image (Tremblay, Paragraph [0034], "At step 135, the GPU 110 renders a 3D object of interest to produce a rendered image of the object of interest <read on rasterized image>"). But Tremblay does not explicitly disclose: generating an augmented vector image from the labeled vector image and generate the rasterized image from the augmented vector image. However, Gupta teaches generating an augmented vector image from the labeled vector image (Gupta, Paragraph [0047], "the vectorization algorithm applies a set of customization operations <read on augmented vector image>").
and wherein the processing device is configured to generate the rasterized image from the augmented vector image (Gupta, Paragraph [0047], "Executing the graphic manipulation application 206 causes the processing device to perform a vectorization algorithm <read on processing device configured to generate rasterized image from augmented vector image>"; Paragraph [0051], "At block 312, the process 300 involves causing a computing device (e.g., a user device 227) to display or otherwise provide an output vector graphic 226 <read on rasterized image>"). Gupta and Tremblay are analogous since both are directed to image-processing pipelines that transform source representations into rasterized images for downstream processing and analysis. Tremblay provides a way of generating rasterized images for training neural network models by rendering source data using configurable parameters. Gupta provides a way of modifying and outputting vector graphics via customization operations executed by a processing device. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to incorporate the vector-image customization operations taught by Gupta into the modified invention of Tremblay, such that an augmented vector image is first generated and then rasterized for use in training or processing, as opposed to rasterizing an unmodified source. The motivation for such a combination is to increase variation and diversity in generated training data by modifying vector representations prior to rasterization, as discussed by Gupta in Paragraph [0047], which explains applying customization operations during vectorization, and consistent with Tremblay's objective of generating diverse rendered images for training purposes.

Regarding Claim 5, the combination of Tremblay and Gupta teaches the invention in Claim 4, wherein the processing device is configured to generate the augmented vector image.
The combination further teaches varying a position of a stylized geometry element in the labeled [[ vector ]] image (Tremblay, Paragraph [0042], “one or more of the 3D geometric shape are selected by the labeled training data generation system 100” [0043], "positioning the rendered object(s) of interest and the rendered geometric shapes at various positions within the input image <read on varying a position>"). varying a size of the stylized geometry element in the labeled [[ vector ]] image (Tremblay, Paragraph [0043], "Each the rendered geometric shape may be scaled in size and/or rotated <read on varying a size>"). varying a color of the stylized geometry element in the labeled [[ vector ]] image (Tremblay, Paragraph [0042], "A color of and/or the orientation of each 3D geometric shape may vary <read on varying a color>"). altering a parameter configuration of the digital tool applied to the stylized geometry element in the labeled [[ vector ]] image (Tremblay, Paragraph [0042], "Note that at least a portion of the rendering parameters vary for each one of the rendered images. <read on altering a parameter configuration>"). adjusting a hierarchical placement of the stylized geometry element in the labeled [[ vector ]] image (Tremblay, Paragraph [0044], "A particular rendered object of interest may be occluded by one or more other rendered objects of interest and/or rendered geometric shapes <read on adjusting a hierarchical placement>"). inserting a stylized geometry element from a different labeled [[ vector ]] image into the labeled vector image (Tremblay, Paragraph [0037], "a random number of rendered 3D geometric shapes may be inserted into the input image <read on inserting a stylized geometry element>"; [0042], “the object of interest may be rendered according to different rendering parameters to produce additional rendered images of the object of interest.”).
But, Tremblay does not explicitly disclose that [[ the above variations are performed in the labeled ]] vector [[ image and/or that the inserted stylized geometry element is from a different labeled ]] vector [[ image]]. However, Gupta teaches varying/altering aspects of a vector image (Gupta, Paragraph [0047], "The graphic manipulation application 206 transforms the accessed raster graphic into a vector graphic…the vectorization algorithm applies a set of customization operations <read on varying position/size/color and inserting elements in a vector image>"). altering a parameter configuration of the digital tool (Gupta, Paragraph [0081], "sets the edge-sensitivity value to the user-specified slide value V for that pixel in the edge-sensitivity map <read on parameter configuration>"). Gupta and Tremblay are analogous since both are directed to systematic variation of visual characteristics in images using parameterized operations executed by a processing device. Tremblay provides a way of varying position, size, color, rendering parameters, relative placement, and insertion of visual elements during generation of rendered images for training purposes. Gupta provides a way of applying customization operations and parameter configurations directly to vector graphics during vectorization. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to incorporate vector-image customization operations and parameter configurations taught by Gupta into the modified invention of Tremblay, such that the variations of stylized geometry elements (including position, size, color, parameter configuration, hierarchical placement, and insertion) are performed at the vector-image level prior to rasterization, rather than being limited to variations applied only during rendering. 
The motivation is to enable controlled, repeatable, and parameter-driven variation of stylized geometry elements using vector-level representations, as discussed by Gupta in Paragraphs [0047] and [0081], which improves flexibility and efficiency in generating diverse training data consistent with Tremblay's image-generation objectives.

Regarding Claim 6, the combination of Tremblay and Gupta teaches the invention in Claim 1. The combination further teaches generating, using the trained segmentation network and the trained classification network, an interactive image based on the input image (Tremblay, Paragraph [0049], "the training data computation unit 115 compares the output generated by the neural network model 260 when the input image is processed to the ground truth labels. <read on using trained network to process an input image and generate an output>"). But Tremblay does not explicitly disclose interactive image configured to display an image tool description; the image tool description that provides an indication of the digital tool; and a parameter configuration for the digital tool. However, Gupta teaches an interactive image ... configured to display an image tool description (Gupta, Paragraph [0084], "the interface displays a tag 702 that provides feedback data 240. <read on image tool description>"). the image tool description that provides an indication of the digital tool (Gupta, Paragraph [0084], "the "sketch" label in the tag 702 indicates a set of customization operations performed by the vectorization algorithm 400. <read on indication of the digital tool>"). and a parameter configuration for the digital tool used to achieve the visual appearance (Gupta, Paragraph [0084], "The interface 700 also depicts an edge-sensitivity box 708 and a corresponding slide tool 710. <read on parameter configuration>").
Gupta and Tremblay are analogous since both of them are dealing with image-processing workflows that use trained-model outputs and present results to a user via a user interface. Tremblay provided a way of generating labeled training data (including segmentation maps) for training a neural network model that processes an input image. Gupta provided a way of displaying, in a user interface, an indication of identified operations and an associated adjustable parameter value (e.g., tag 702 and edge sensitivity 708). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to incorporate user-interface presentation of identified operations and parameter information taught by Gupta into modified invention of Tremblay such that the trained segmentation network and trained classification network outputs are used to generate an interactive image configured to display an image tool description providing an indication of a digital tool and a parameter configuration for the digital tool used to achieve the visual appearance. The motivation is to provide user-visible feedback and controls corresponding to automatically determined operations/parameters discussed by Gupta in Paragraph [0084].

Regarding Claim 7, the combination of Tremblay and Gupta teaches the invention in Claim 6. The combination further teaches wherein the processing device is configured to apply the parameter configuration for the digital tool to an additional input image displayed by the processing device responsive to receiving input at the image tool description.
(Gupta, Paragraph [0049] “At block 308, the process 300 involves selecting the first customization operation as the customization specific to the input raster graphic” [0045] “At block 302, the process 300 involves accessing an input raster graphic 204, such as a scan of a drawing, a scan of an oil painting, an image, etc <read on apply the parameter configuration>”; Paragraph [0084], "The interface 700 also provides an option (i.e., a "default" button) to revert the classification decision <read on responsive to receiving input at the image tool description>"). Gupta and Tremblay are analogous since both of them are dealing with image-processing workflows in which identified operations/parameters can be applied and adjusted by a user. Tremblay provided a way of producing trained-model outputs from an input image. Gupta provided a way of selecting operations to be performed and providing user interface controls (e.g., a default button) to apply or revert the selected operations. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to incorporate user-driven application/reversion of selected operations and associated parameter configurations taught by Gupta into modified invention of Tremblay such that, responsive to receiving input at the image tool description, the processing device applies the parameter configuration for the digital tool to an additional input image displayed by the processing device. The motivation is to allow a user to apply, adjust, or revert identified settings via interface interaction discussed by Gupta in Paragraphs [0049] and [0084].

Regarding Claim 8, the combination of Tremblay and Gupta teaches the invention in Claim 1.
The combination further teaches wherein the trained segmentation network is configured to output a binary mask that indicates whether each pixel of the input image was generated using the digital tool (Tremblay, Paragraph [0035], "for segmentation, the task-specific training data is the rendered objects of interest having each pixel within a rendered object of interest replaced with an object identifier"; Paragraph [0024], "the task is segmentation and the training data computation unit 115 determines an object identifier for each rendered image of an object of interest and computes the task-specific training data as a segmentation map corresponding to the input image"; it is noted that, since rendering applies to each pixel and each pixel either receives an object identifier or does not (two states), this reads on a binary mask output).

Regarding Claim 9, it recites limitations similar in scope to the limitations of Claim 1 but as a method, and the combination of Tremblay and Gupta teaches all the limitations as of Claim 1. Therefore, it is rejected under the same rationale.

Regarding Claim 10, the combination of Tremblay and Gupta teaches the invention in Claim 9. The combination further teaches receiving a vector image (Gupta, Paragraph [0051], "At block 312, the process 300 involves causing a computing device (e.g., a user device 227) to display or otherwise provide an output vector graphic 226."); and wherein the labeled vector image is generated automatically and without user intervention (Gupta, Paragraph [0020], "using a trained neural network to select multiple vectorization operations specific to the characteristics of the input graphic can provide improved raster-to-vector conversions without requiring manual input <read on without user intervention> to modify the vectorization process <read on generated automatically>").
based on metadata associated with the digital tool and the vector image (Gupta, Paragraph [0037], "allows a graphic manipulation application 206 to use the visual characteristics <read on metadata> of an input raster graphic 204 to select customization operations <read on digital tool>"; Paragraph [0051], "At block 312, the process 300 involves causing a computing device (e.g., a user device 227) to display or otherwise provide an output vector graphic 226."). As explained in rejection of Claim 9, the obviousness for combining Gupta with Tremblay is provided above.

Regarding Claim 11, the combination of Tremblay and Gupta teaches the invention in Claim 10. The combination further teaches generate a rasterized image (Tremblay, Paragraph [0034], "At step 135, the GPU 110 renders a 3D object of interest to produce a rendered image of the object of interest. <read on rasterized image>"). and generate the training sample mask from the rasterized image (Tremblay, Paragraph [0035], "for segmentation, the task-specific training data is the rendered objects of interest having each pixel within a rendered object of interest replaced with an object identifier. <read on generating the training sample mask from the rasterized image>"). But Tremblay does not explicitly disclose: generating an augmented vector image from the labeled vector image and generating the rasterized image from the augmented vector image.
However, Gupta teaches generating an augmented vector image from the labeled vector image (Gupta, Paragraph [0047], "the vectorization algorithm applies a set of customization operations <read on augmented vector image>") wherein the processing device is configured to generate the rasterized image from the augmented vector image (Gupta, Paragraph [0047], "Executing the graphic manipulation application 206 causes the processing device to perform a vectorization algorithm <read on generating the rasterized image from the augmented vector image>"; Paragraph [0051], "At block 312, the process 300 involves causing a computing device (e.g., a user device 227) to display or otherwise provide an output vector graphic 226. <read on labeled vector image used as source for rasterization>"). Gupta and Tremblay are analogous since both of them are dealing with generating image data for downstream processing/training by applying configurable operations to source representations. Tremblay provided a way of generating rasterized images (rendered images) and corresponding segmentation training data. Gupta provided a way of generating an augmented vector image by applying customization operations and executing the application to perform vectorization. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to incorporate vector-image customization operations taught by Gupta into modified invention of Tremblay such that the system generates augmented vector images prior to rasterization for use in generating rasterized images and training sample masks. The motivation is to improve diversity and controllability of generated training data discussed by Gupta in Paragraph [0047].

Regarding Claim 12, the combination of Tremblay and Gupta teaches the invention in Claim 11.
The combination further teaches altering a parameter configuration of the digital tool applied to a stylized geometry element in the labeled [[vector]] image. (Tremblay, Paragraph [0042], "Note that at least a portion of the rendering parameters vary for each one of the rendered images <read on altering a parameter configuration>”); adjusting a hierarchical placement of the stylized geometry element in the labeled [[vector]] image. (Tremblay, Paragraph [0044], "A particular rendered object of interest may be occluded by one or more other rendered objects of interest and/or rendered geometric shapes <read on adjusting a hierarchical placement>") or inserting a stylized geometry element from a different labeled [[vector]] image into the labeled vector image. (Tremblay, Paragraph [0037], "a random number of rendered 3D geometric shapes may be inserted into the input image <read on inserting a stylized geometry element>") But Tremblay does not explicitly disclose: variations are performed in the labeled vector image and/or that the inserted stylized geometry element is from a different labeled vector image. However, Gupta teaches the above variations are performed in the labeled vector image and/or that the inserted stylized geometry element is from a different labeled vector image (Gupta, Paragraph [0047], "the vectorization algorithm applies a set of customization operations <read on performing the above variations in a vector image and inserting elements from other vector images>") Gupta and Tremblay are analogous since both are directed to systematic variation of visual characteristics in images using parameterized operations executed by a processing device. Tremblay provides a way of varying position, size, color, rendering parameters, relative placement, and insertion of visual elements during generation of rendered images for training purposes. 
Gupta provides a way of applying customization operations and parameter configurations directly to vector graphics during vectorization. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to incorporate vector-image customization operations and parameter configurations taught by Gupta into the modified invention of Tremblay, such that the variations of stylized geometry elements (including position, size, color, parameter configuration, hierarchical placement, and insertion) are performed at the vector-image level prior to rasterization, rather than being limited to variations applied only during rendering. The motivation is to enable controlled, repeatable, and parameter-driven variation of stylized geometry elements using vector-level representations, as discussed by Gupta in Paragraphs [0047] and [0081], which improves flexibility and efficiency in generating diverse training data consistent with Tremblay's image-generation objectives.

Regarding Claim 15, the combination of Tremblay and Gupta teaches the invention in Claim 9. The combination further teaches receiving, by the processing device, a classification network (Tremblay, Paragraph [0032], "During supervised training of a neural network model <read on classification network>, the task-specific training data is ground truth labels ... compared with an output generated by the neural network model when the input image is processed by the neural network model."); and generating an interactive image based on the input image ... using the segmentation network and the classification network (Tremblay, Paragraph [0049], "the training data computation unit 115 compares the output generated by the neural network model 260 when the input image is processed to the ground truth labels.
<read on generating an output image based on processing the input image using the network>"); using the segmentation network (Tremblay, Paragraph [0024], "the task specific training data computation unit computes the task-specific training data as a segmentation map corresponding to the input image. <read on segmentation network>"). But Tremblay does not explicitly disclose generating an interactive image ... configured to display an image tool description that provides an indication of the digital tool and a parameter configuration for the digital tool used to achieve the visual appearance. However, Gupta teaches generating an interactive image ... configured to display an image tool description that provides an indication of the digital tool (Gupta, Paragraph [0084], "the interface displays a tag 702 that provides feedback data 240 ... the "sketch" label in the tag 702 indicates a set of customization operations <read on digital tool> performed by the vectorization algorithm 400. <read on image tool description indicating the digital tool>"). and a parameter configuration for the digital tool used to achieve the visual appearance (Gupta, Paragraph [0084], "The interface 700 also depicts an edge-sensitivity box 708 and a corresponding slide tool 710 <read on parameter configuration>."). Gupta and Tremblay are analogous since both of them are dealing with image-processing workflows that apply (and/or infer) operations/parameters and present results to a user. Tremblay provided a way of processing an input image using a neural network model (including segmentation-map related processing). Gupta provided a way of displaying feedback data in a user interface, including an indication of an operation (tag 702) and an adjustable parameter (edge-sensitivity 708).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate user-interface display of tool indication and parameter controls taught by Gupta into the modified invention of Tremblay, such that the interactive image generated from the segmentation/classification networks is configured to display an image tool description indicating the digital tool and a parameter configuration. The motivation is to provide feedback data and adjustable controls, as discussed by Gupta in Paragraph [0084]. Regarding Claim 16, the combination of Tremblay and Gupta teaches the invention in Claim 9. The combination further teaches wherein the training the segmentation network includes predicting whether the digital tool was applied to each pixel of the training sample mask using the labeled vector image as ground truth data (Tremblay, Paragraph [0032], "During supervised training of a neural network model, the task-specific training data is ground truth labels ... compared with an output generated by the neural network model when the input image is processed by the neural network model <read on predicting whether applied per pixel using ground truth>"); using the labeled vector image as ground truth data (Tremblay, Paragraph [0035], "each pixel within a rendered object of interest replaced with an object identifier <read on ground truth labels per pixel>"). Regarding Claim 17, it recites limitations similar in scope to the limitations of Claim 1, and the combination of Tremblay and Gupta teaches all the limitations of Claim 1. 
And Tremblay discloses that these features can be implemented on a computer-readable storage medium (Tremblay, Paragraph [0005], “A method, computer readable medium, and system are disclosed for generating synthetic images for training a neural network model”; [0061], “a program executed by the host processor encodes a command stream in a buffer that provides workloads to the PPU 300 for processing. A workload may comprise several instructions and data to be processed by those instructions”). Regarding Claim 18, it recites limitations similar in scope to the limitations of Claim 3 and therefore is rejected under the same rationale. Regarding Claim 19, the combination of Tremblay and Gupta teaches the invention in Claim 17. The combination further teaches wherein the labeled vector image further specifies a parameter configuration for the digital tool used to stylize a geometry element in the labeled vector image (Gupta, Paragraph [0084], "The interface 700 also depicts an edge-sensitivity box 708 and a corresponding slide tool 710 <read on parameter configuration>"; Paragraph [0051], "provide an output vector graphic 226 <read on labeled vector image>."). Gupta and Tremblay are analogous since both of them are dealing with digital graphics tools/operations that are parameterized and whose parameter values may be associated with a graphics representation for downstream use. Tremblay provided a way of training and outputting network(s) for use with images. Gupta provided a way of presenting and selecting a parameter value (edge-sensitivity) associated with a customization operation and providing an output vector graphic. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate associating a parameter configuration with a vector graphic taught by Gupta into the modified invention of Tremblay, such that the labeled vector image specifies the parameter configuration used to stylize a geometry element. The motivation is to enable subsequent reuse and consistent reproduction of the selected parameter setting, as discussed by Gupta in Paragraph [0084]. Regarding Claim 20, the combination of Tremblay and Gupta teaches the invention in Claim 17. The combination further teaches receiving a segmentation network (Tremblay, Paragraph [0024], "the task-specific training data computation unit computes the task-specific training data as a segmentation map corresponding to the input image <read on segmentation network>."); and generating an interactive image based on the input image (Tremblay, Paragraph [0049], "the training data computation unit 115 compares the output generated by the neural network model 260 when the input image is processed to the ground truth labels <read on generating an output based on the input image>."). But, Tremblay does not explicitly disclose: configured to display an image tool description that provides an indication of the digital tool and a parameter configuration for the digital tool used to achieve the visual appearance using the segmentation network and the classification network. However, Gupta teaches configured to display an image tool description that provides an indication of the digital tool and a parameter configuration for the digital tool used to achieve the visual appearance (Gupta, Paragraph [0084], "the interface displays a tag 702 that provides feedback data 240 ... the "sketch" label in the tag 702 indicates a set of customization operations <read on digital tool> ... 
The interface 700 also depicts an edge-sensitivity box 708 and a corresponding slide tool 710 <read on parameter configuration>."). Gupta and Tremblay are analogous since both of them are dealing with using trained models in image-processing applications and presenting results to a user via a user interface. Tremblay provided a way of processing an input image using a neural network model and producing outputs (including segmentation-map related outputs). Gupta provided a way of displaying feedback data in a user interface, including an indication of an operation (tag 702) and an adjustable parameter (edge-sensitivity 708). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate user-interface feedback of identified operations/parameters taught by Gupta into the modified invention of Tremblay such that an interactive image is generated using the segmentation network and the classification network and is configured to display an image tool description indicating the digital tool and parameter configuration. The motivation is to provide feedback data to a user about identified operations and adjustable parameters, as discussed by Gupta in Paragraph [0084]. Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tremblay et al. (US 20190251397 A1, hereinafter Tremblay) in view of Gupta et al. (US 20190158112 A1, hereinafter Gupta) as applied to Claim 9 above, and further in view of Carbonneau et al. (US 20130328879 A1, hereinafter Carbonneau). Regarding Claim 13, the combination of Tremblay and Gupta teaches the invention in Claim 9. The combination further teaches generating a [[ cropped ]] labeled [[ vector ]] image (Tremblay, Paragraph [0004], "The generated training data may be used to train neural networks for object detection and segmentation (labelling) tasks."), 
wherein the segmentation network is trained using the [[ cropped ]] labeled [[ vector ]] image and the [[ cropped ]] rasterized image (Tremblay, Paragraph [0020], "A domain randomization technique to generate training data that is automatically labeled is described. The generated training data may be used to train neural networks for object detection and segmentation tasks"; [0112], “After the processed vertex data is rasterized…to produce fragment data”). But, Tremblay does not explicitly disclose the [[ labelled ]] vector [[ image ]]. However, Gupta teaches generating a labeled vector image and a rasterized image (Gupta, Paragraph [0051], "At block 312, the process 300 involves causing a computing device (e.g., a user device 227) to display or otherwise provide an output vector graphic 226 <read on labeled vector image>"; [0005], “The content-creation computing system provides the input raster graphic to a customization-identification network having a multi-label classifier”). As explained in the rejection of Claim 9, the obviousness rationale for combining Gupta's output vector graphic into Tremblay's rasterization/training pipeline is provided above. But the combination does not explicitly disclose generating a cropped labeled vector image and a cropped rasterized image. However, Carbonneau teaches generating a cropped labeled vector image (Carbonneau, Paragraph [0050], "Graphical images in a tile <read on cropped> are built up from individual pixels ... reduces the amount of data for a tile that could not be displayed separately on the scale of that tile"; [0051], "receiving at 310 a set of road vectors for a set of map tiles") and a cropped rasterized image (Carbonneau, Paragraph [0051], "The process 300 plots the vectors on the map tiles at a particular zoom level and rasterizes (at 320) the vectors. 
The rasterization identifies a set of pixels in the map tiles that the road vectors pass through") that depict a stylized geometry element of the labeled vector image (Carbonneau, Paragraph [0050], "detail smaller than 100 meters wide ... multiple roads in the same direction"), and wherein the segmentation network is trained using the cropped labeled vector image and the cropped rasterized image (Carbonneau, Paragraph [0005], "generates an equivalent of the road data by rasterizing the vectors representing road segments lying within a tile"). Carbonneau and Tremblay (as modified by Gupta) are analogous since both of them are dealing with processing vector data and rasterizing it for downstream use. Tremblay provided a way of rendering 3D vector objects into 2D raster images for training neural networks. Carbonneau provided a way of handling large vector datasets (map data) by dividing them into tiles (crops) and rasterizing vectors within those tiles to manage data volume and complexity. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the tiling and cropping approach for vector data taught by Carbonneau into the modified invention of Tremblay such that the training data generation process can handle large, complex vector scenes more efficiently by processing them in smaller, manageable cropped sections (tiles). The motivation is to reduce the computational load and memory usage when processing large vector images by breaking them down into smaller tiles, which allows for more efficient rasterization and mask generation (Carbonneau, Paragraph [0050], "reduces the amount of data for a tile"). Regarding Claim 14, the combination of Tremblay, Gupta, and Carbonneau teaches the invention in Claim 13. 
The combination further teaches wherein the generating the training sample mask includes generating a mask including the stylized geometry element (Tremblay, Paragraph [0024], "computes the task-specific training data as a segmentation map ... where each pixel that is covered by a rendered image is colored according to the object identifier determined for the instance"), [[ dilating ]] the stylized geometry element (Tremblay, Paragraph [0024], "segmentation map comprises the input image, where each pixel that is covered by a rendered image is colored"), and rasterizing the [[ dilated ]] stylized geometry element against a [[ black backdrop ]] (Tremblay, Paragraph [0022], "rendered image of the 3D object"). But Tremblay does not explicitly disclose dilating the stylized geometry element. However, Gupta teaches wherein the generating the training sample mask includes generating a mask including the stylized geometry element (Gupta, Paragraph [0005], "a content-creation computing system transforms an input raster graphic into an output vector graphic by applying a customization specific to visual characteristics of the input raster graphic"), [[ dilating ]] the stylized geometry element (Gupta, Paragraph [0003], “designer may want to convert raster graphics depicting hand-drawn sketches, hazy images, or oil paintings into vector graphics in order to apply more robust editing techniques available to vector graphics”), and rasterizing the [[ dilated ]] stylized geometry element against a [[ black backdrop ]] (Gupta, Paragraph [0022], "The graphic manipulation application outputs a vector graphic generated by the vectorization pipelines"). Gupta and Tremblay are analogous since both of them are dealing with generating synthetic training data for neural networks. Tremblay provided a way of generating segmentation maps for training. Gupta provided a way of generating synthetic images with diverse visual properties. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the synthetic data generation techniques taught by Gupta into the modified invention of Tremblay such that the system produces high-quality training data. The motivation is to enhance the robustness of the trained models. But the combination does not explicitly disclose dilating the stylized geometry element. However, Carbonneau teaches wherein the generating the training sample mask includes generating a mask including the stylized geometry element (Carbonneau, Paragraph [0005], "generating a connectivity mask for the tile(s)."), dilating the stylized geometry element (Carbonneau, Paragraph [0005], "The connectivity mask keeps track of which pixels are connected to which other pixels along the vectors"; [0050], “Similar problems arise for other relatively small features… Sending such redundant information would use up potentially expensive bandwidth… process that reduces the amount of data for a tile”; [0052], “By converting the vectors to pixels in a connectivity mask, the process 300 weeds out redundant vectors and vector segments”), and rasterizing the dilated stylized geometry element against a black backdrop (Carbonneau, Paragraph [0051], “The rasterization identifies a set of pixels in the map tiles that the road vectors pass through”; [0058], “Because in the tile, each pixel is the smallest measure of detail, each pixel is either on or off. When no vectors (roads) pass through the pixel, the pixel is off”). Carbonneau and Tremblay (as modified by Gupta) are analogous since both of them are dealing with converting vector data into rasterized formats for processing. Tremblay provided a way of creating segmentation masks for rendered objects. Carbonneau provided a way of creating connectivity masks from rasterized vectors to ensure data continuity and handle close vector segments. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the mask generation and pixel connectivity processing (which effectively dilates the element to ensure vector coverage) taught by Carbonneau into the modified invention of Tremblay such that the generated training masks robustly capture the footprint of the vector elements, especially thin or complex ones, ensuring accurate training data. The motivation is to ensure that the rasterized representation accurately reflects the vector geometry and maintains connectivity between pixels representing the same element.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

US 20220114698 A1 IMAGE GENERATION USING ONE OR MORE NEURAL NETWORKS
US 20220108417 A1 IMAGE GENERATION USING ONE OR MORE NEURAL NETWORKS
US 20210192740 A1 COUPLED MULTI-TASK FULLY CONVOLUTIONAL NETWORKS USING MULTI-SCALE CONTEXTUAL INFORMATION AND HIERARCHICAL HYPER-FEATURES FOR SEMANTIC IMAGE SEGMENTATION
US 20210097691 A1 IMAGE GENERATION USING ONE OR MORE NEURAL NETWORKS
US 20200161083 A1 PARAMETER ESTIMATION FOR METROLOGY OF FEATURES IN AN IMAGE
US 20150170381 A1 TOOL LOCALIZATION SYSTEM WITH IMAGE ENHANCEMENT AND METHOD OF OPERATION THEREOF

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YUJANG TSWEI, whose telephone number is (571) 272-6669. The examiner can normally be reached 8:30am-5:30pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached on (571) 272-7667. 
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/YuJang Tswei/
Primary Examiner, Art Unit 2614
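Editor's note on the Claim 14 mapping above: the mask-generation step it describes (dilate a stylized geometry element, then rasterize it against a black backdrop) can be sketched in a few lines of NumPy. This is a minimal illustration only; the function name, the 4-connected dilation, and the sample stroke are hypothetical and are not drawn from Tremblay, Gupta, or Carbonneau.

```python
import numpy as np

def make_training_mask(element: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Dilate a binary geometry element, then composite it onto a black
    backdrop as a white (255) mask. Illustrative only."""
    mask = element.astype(bool)
    for _ in range(iterations):
        grown = mask.copy()
        # 4-connected dilation via shifted in-place ORs
        grown[1:, :] |= mask[:-1, :]   # grow downward
        grown[:-1, :] |= mask[1:, :]   # grow upward
        grown[:, 1:] |= mask[:, :-1]   # grow rightward
        grown[:, :-1] |= mask[:, 1:]   # grow leftward
        mask = grown
    backdrop = np.zeros(mask.shape, dtype=np.uint8)  # black backdrop
    backdrop[mask] = 255                             # rasterized element
    return backdrop

# A one-pixel-thick horizontal stroke on a 5x5 canvas.
stroke = np.zeros((5, 5), dtype=np.uint8)
stroke[2, 1:4] = 1
mask = make_training_mask(stroke)
```

After one dilation pass the thin stroke widens into a plus-shaped band, the kind of footprint-widening the rejection attributes to connectivity-mask processing of thin vector elements.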

Prosecution Timeline

Nov 16, 2023
Application Filed
Jan 05, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579805
AUGMENTED, VIRTUAL AND MIXED-REALITY CONTENT SELECTION & DISPLAY FOR TRAVEL
2y 5m to grant Granted Mar 17, 2026
Patent 12579838
Perspective Distortion Correction on Faces
2y 5m to grant Granted Mar 17, 2026
Patent 12567213
COMPUTER VISION AND ARTIFICIAL INTELLIGENCE METHOD TO OPTIMIZE OVERLAY PLACEMENT IN EXTENDED REALITY
2y 5m to grant Granted Mar 03, 2026
Patent 12567189
RELATIONAL LOSS FOR ENHANCING TEXT-BASED STYLE TRANSFER
2y 5m to grant Granted Mar 03, 2026
Patent 12561930
PARAMETRIC EYEBROW REPRESENTATION AND ENROLLMENT FROM IMAGE INPUT
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
84%
Grant Probability
99%
With Interview (+17.0%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 447 resolved cases by this examiner. Grant probability derived from career allow rate.
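The projection figures above appear to combine by simple percentage-point arithmetic. The sketch below is a hypothetical reconstruction, assuming the interview lift is added to the base grant probability in percentage points and the displayed value is capped at 99%; the function name and the cap are assumptions, not the tool's documented method.

```python
def interview_adjusted_probability(base_pct: float, lift_pct: float,
                                   cap_pct: float = 99.0) -> float:
    """Add the interview lift (in percentage points) to the base grant
    probability and cap the result at a display ceiling.
    Hypothetical reconstruction of the dashboard arithmetic."""
    return min(base_pct + lift_pct, cap_pct)

# 84% career allow rate + 17.0-point interview lift, capped at 99%
with_interview = interview_adjusted_probability(84.0, 17.0)
```

Under these assumptions, 84% + 17.0 points exceeds the cap, reproducing the displayed 99% "With Interview" figure.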
