Prosecution Insights
Last updated: April 19, 2026
Application No. 18/205,279

MARKING-BASED PORTRAIT RELIGHTING

Final Rejection (§103) | Filed: Jun 02, 2023 | Examiner: AHN, CHRISTINE YERA | Art Unit: 2615 | Tech Center: 2600 (Communications) | Assignee: Adobe Inc. | OA Round: 4 (Final)

Predictions: 69% grant probability (Favorable) | 5-6 OA rounds expected | 2y 7m to grant | 99% grant probability with interview

Examiner Intelligence

- Career allow rate: 69% (11 granted / 16 resolved), +6.8% vs Tech Center average (above average)
- Interview lift: +37.5% allowance lift among resolved cases with an interview (strong)
- Typical timeline: 2y 7m average prosecution; 34 applications currently pending
- Career history: 50 total applications across all art units

Statute-Specific Performance

- §101: 5.2% (-34.8% vs TC avg)
- §103: 49.6% (+9.6% vs TC avg)
- §102: 21.9% (-18.1% vs TC avg)
- §112: 20.1% (-19.9% vs TC avg)

Tech Center averages are estimates; figures are based on career data from 16 resolved cases.
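The headline figures above reduce to simple ratios over resolved cases. A minimal sketch of how such metrics could be derived, assuming hypothetical per-case records (the field names below are illustrative, not from any USPTO schema):

```python
# Hypothetical derivation of the dashboard metrics above from per-case records.
from dataclasses import dataclass

@dataclass
class Case:
    granted: bool       # resolved as allowed
    resolved: bool      # no longer pending
    had_interview: bool

def allow_rate(cases):
    """Share of resolved cases that were granted, e.g. 11/16 = 69%."""
    resolved = [c for c in cases if c.resolved]
    return sum(c.granted for c in resolved) / len(resolved) if resolved else 0.0

def interview_lift(cases):
    """Allowance-rate gap between resolved cases with and without an interview."""
    rate = lambda cs: sum(c.granted for c in cs) / len(cs) if cs else 0.0
    with_iv = [c for c in cases if c.resolved and c.had_interview]
    without_iv = [c for c in cases if c.resolved and not c.had_interview]
    return rate(with_iv) - rate(without_iv)
```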

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

2. The amendment filed December 11, 2025 has been entered. Claims 1-3, 7-9, 13, 17-20, 22-26, and 28-31 remain pending in the application.

Response to Arguments

3. Applicant's arguments filed December 11, 2025 have been fully considered but they are not persuasive.

4. Applicant argues that the prior art Pandey et al. ("Total Relighting: Learning to Relight Portraits for Background Replacement" -- IDS), hereinafter referred to as Pandey, and Vicente et al. ("Single Image Shadow Removal via Neighbor-Based Region Relighting"), hereinafter referred to as Vicente, fail to teach "modifying the training portrait image by drawing the subset of the superpixels from the superpixel representation on the training portrait image, the subset of the superpixels representing one or more computer-generated markings drawn on the training portrait image to be interpreted as a lighting condition to be applied to the training portrait image." Examiner replies that although Pandey does not teach the above limitation, after further consideration it has been determined that Vicente does teach the newly amended limitation. Vicente teaches drawing/superimposing the merged superpixels onto the input image in Section 3.1 and Figure 3. Vicente Section 3.1 teaches that after the superpixel representation is generated, as seen in Figure 3a, a mask shown in Figure 3c is created. The mask can be considered a selected subset of superpixels. In Figure 3d the mask is then overlaid on the original image. The overlaying of the mask teaches drawing the merged superpixels onto the input image, which is the original image. Vicente Figure 5 also teaches in step (a) that the shadow mask, or selected superpixels, are overlaid on the original image, which indicates a lighting condition to be applied. The lighting condition is removing the shadow in step (d).

5. Applicant argues that the selected superpixels in Vicente are portions of the input image itself, rather than superpixels derived from a separate image signal. Examiner replies that the superpixel segmentation representation taught in Vicente can be considered the separate image signal, since it is the output of running a superpixel segmentation process on the input image. Thus, the output, or superpixel segmentation representation, is a separate image signal from which superpixels are selected. It can also be considered a shading map, since it consists of a shadow. Furthermore, Vicente does not teach away, because the superpixel representation can be considered a different image signal: it is not the original image, but the original image modified with a superpixel segmentation, which makes it a different image signal under the broadest reasonable interpretation. The Examiner advises Applicant to further define "shading map" and "a different image signal."

6. Applicant argues that there are advantages of modifying the training portrait image in the manner claimed which are not disclosed by the prior art. Examiner replies that, in response to Applicant's argument that there are advantages not taught in the prior art, such as having the simulated markings "accurately …", the fact that the inventor has recognized another advantage which would flow naturally from following the suggestion of the prior art cannot be the basis for patentability when the differences would otherwise be obvious.
See Ex parte Obiaya, 227 USPQ 58, 60 (Bd. Pat. App. & Inter. 1985). Furthermore, MPEP 2144 IV asserts that having a rationale for implementing a method different from Applicant's is permissible. Specifically, MPEP 2144 IV states that "the reason or motivation to modify the reference may often suggest what the inventor has done, but for a different purpose or to solve a different problem. It is not necessary that the prior art suggest the combination to achieve the same advantage or result discovered by applicant." In addition, in response to Applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which Applicant relies (i.e., the shading map "captures the shadows and highlights" in the training portrait image) are not recited in the rejected independent claims 1, 13, and 18. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

7. Applicant argues that Price et al. (U.S. Patent Application Publication No. 2022/0189034 A1), hereinafter referred to as Price, does not teach a subset of superpixels having a brightness intensity exceeding a first threshold and another subset of superpixels having a brightness intensity less than a threshold. Examiner replies that Applicant's arguments with respect to claim 20 have been considered but are moot, because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Wober (U.S. Patent No. 5,235,434 A) teaches a subset of superpixels having a brightness intensity exceeding a first threshold and another subset of superpixels having a brightness intensity less than a threshold.

8. Applicant argues that randomly selecting a superpixel is not taught by Pandey, Vicente, Price, or Shesh et al. ("Crayon Lighting: Sketch-Guided Illumination of Models"), hereinafter referred to as Shesh. Examiner replies that Applicant's arguments with respect to claim 20 have been considered but are moot, because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Li et al. (Chinese Patent Application Publication No. 114998132 A), hereinafter Li, teaches randomly selecting a superpixel.

9. Conclusion: The rejections set forth in the previous Office Action are shown to have been proper, and the claims are rejected below. New citations and parenthetical remarks can be considered new grounds of rejection, and such new grounds of rejection are necessitated by Applicant's amendments to the claims. Therefore, the present Office Action is made final.

Claim Rejections - 35 USC § 103

10. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

11. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

12. Claims 1-3, 7-8, 13, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Pandey et al.
("Total Relighting: Learning to Relight Portraits for Background Replacement" -- IDS), hereinafter referred to as Pandey, in view of Vicente et al. (“Single Image Shadow Removal via Neighbor-Based Region Relighting”), hereinafter referred to as Vicente. 13. Regarding claim 1, Pandey teaches a method, comprising: receiving a training portrait image (Figure 4 shows an input image being passed into the relighting network; Section 5 Paragraph 1 teaches acquiring various training portrait images of different subjects); generating a first shading map of the training portrait image, the first shading map representing a different image signal than the training portrait image (Figure 4 shows diffuse and specular light maps generated by combining the convolved light maps and surface normals. These light maps can be considered shading maps which are separate image signals under broadest reasonable interpretation; Page 4, Section 3.2 Paragraph 1 teaches using light maps which provide the lighting, and thus also shading, for the image. Thus, the light maps can be considered to be shading maps); generating, using one or more machine learning models, an albedo representation of the training portrait image by removing lighting effects from the training portrait image (Figure 4 shows an albedo representation of the training portrait image produced after passing through the Albedo Net; Section 4.1, Albedo Adversarial Loss subsection teaches that the Albedo Net is trained to remove shading effects to produce the albedo representation. Removing the shading effects can be considered as removing the lighting effects); generating, using the one or more machine learning models, a second shading map based on the modified training portrait image by: (Figure 5 teaches a second shading map being produced by applying the first shading map or first specular light maps to the albedo representation. The second shading map is the output from the Specular Net. Applying the lighting effects of the specular light maps to the albedo representation can be considered applying the lighting effect to a geometric representation because the albedo representation originates from the geometric representation as seen in Figure 4. The surface normals, or geometric representation, passes through the Albedo Net to get the albedo representation); generating, using the one or more machine learning models, a relit portrait image based on the albedo representation and the second shading map (Figure 5 teaches that a final relit foreground or relit portrait image is produced based on the second shading map and albedo representation; Figures 4 and 5 show one or more machine learning models are used to create the relit portrait image); and training the one or more machine learning models based on a comparison of the relit portrait image to the training portrait image (Section 4.1, Shading L1 Loss section teaches comparing the ground truth relit image with the predicted relit image from the relighting module. Then it trains the relighting module networks with that loss function). 
However, Pandey fails to teach generating a superpixel representation of the first shading map by partitioning the first shading map into superpixels using superpixel segmentation; selecting a subset of the superpixels from the superpixel representation; modifying the training portrait image by drawing the subset of the superpixels from the superpixel representation on the training portrait image, the subset of the superpixels representing one or more computer-generated markings drawn on the training portrait image to be interpreted as a lighting condition to be applied to the training portrait image; and designating the one or more computer-generated markings as the lighting condition.

Vicente teaches generating a superpixel representation of the first shading map by partitioning the first shading map into superpixels using superpixel segmentation (Section 3.1, Paragraph 1 teaches segmenting the image into superpixels to group pixels into regions that are illuminated or in shadow); selecting a subset of the superpixels from the superpixel representation (Section 5 teaches selecting a subset of superpixels based on positive classifications and using them for relighting. The selected subsets can be considered the computer-generated markings per the claim language and are interpreted as a lighting condition); modifying the training portrait image by drawing the subset of the superpixels from the superpixel representation on the training portrait image, the subset of the superpixels representing one or more computer-generated markings drawn on the training portrait image to be interpreted as a lighting condition to be applied to the training portrait image (Section 3.1 teaches that after the superpixel representation is generated, as seen in Figure 3a, a mask shown in Figure 3c is created. The mask can be considered a selected subset of superpixels. In Figure 3d the mask is then overlaid on the original image; Figure 5 teaches in step (a) that the shadow mask, or selected superpixels, are overlaid on the original image, which indicates a lighting condition to be applied. The lighting condition is removing the shadow in step (d)); and designating the one or more computer-generated markings as the lighting condition (Section 5 teaches selecting a subset of superpixels based on positive classifications and using them for relighting. The selected subsets can be considered the computer-generated markings per the claim language and are interpreted as a lighting condition. Relighting the selected subset of superpixels can be considered a lighting condition which is applied to the image).

Pandey and Vicente are considered analogous to the claimed invention because both are in the same field of relighting images. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the method of relighting taught by Pandey with the superpixel partitioning and segmentation taught by Vicente in order to identify illuminated and shadow regions (Vicente Section 3.1, Paragraph 1).
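To make the disputed limitation concrete, a minimal sketch of the claimed training-data step: partition a shading map into superpixels, select a subset, and draw it onto the training portrait as computer-generated markings. SLIC is used here only because Vicente uses SLIC; the selection rule is a hypothetical stand-in, not Vicente's classifier:

```python
# Hedged sketch of claim 1's training-data step. Assumes portrait and
# shading_map are float (H, W, 3) arrays; the random subset-selection rule
# is hypothetical (the claim leaves it open).
import numpy as np
from skimage.segmentation import slic

def draw_markings(portrait: np.ndarray, shading_map: np.ndarray,
                  n_segments: int = 150, rng=np.random.default_rng(0)):
    # Superpixel representation of the shading map (not of the portrait).
    labels = slic(shading_map, n_segments=n_segments)
    ids = np.unique(labels)
    subset = rng.choice(ids, size=max(1, len(ids) // 10), replace=False)
    mask = np.isin(labels, subset)
    marked = portrait.copy()
    # "Draw" the selected superpixels on the portrait; these markings are the
    # lighting condition the model is trained to interpret.
    marked[mask] = shading_map[mask]
    return marked, mask
```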
14. Regarding claim 2, Pandey in view of Vicente teaches the limitations of claim 1. Pandey further teaches the method wherein the lighting effects include shadows and highlights of the training portrait image (Figure 4 shows an albedo representation after passing through the Albedo Net; Section 4.1, Albedo Adversarial Loss section teaches removing shading effects; Section 6.3.3 also teaches that without the adversarial losses to train the network, it is unable to remove bright specular highlights and hard shadows. This means the Albedo Net, using the adversarial losses, is trained to remove bright specular highlights and hard shadows in the portrait image to produce the albedo representation).

15. Regarding claim 3, Pandey in view of Vicente teaches the limitations of claim 1. Pandey further teaches the method wherein the geometric representation of the training portrait image includes a surface normal for each pixel in the training portrait image that includes a human subject (Section 3.2.1 teaches a Geometry Net which generates the geometry image, which is the geometric representation, of the image through per-pixel surface normals; Figure 4 shows that the input into the Geometry Net is a portrait image with a human subject, and that the output is the surface normal representation for the pixels including the human subject).

16. Regarding claim 7, Pandey in view of Vicente teaches the limitations of claim 1. Pandey further teaches the method wherein generating the relit portrait image includes encoding, using the one or more machine learning models, an albedo feature conditioned on the albedo representation (Section 3.2.4 and Figure 5 show the albedo representation being passed into the Specular Net. The Specular Net is a U-Net, and U-Nets are known to have an encoder-decoder network structure. Thus, as the albedo representation is passed into the U-Net, it will be encoded, which can be considered the albedo feature).

17. Regarding claim 8, Pandey in view of Vicente teaches the limitations of claim 7. Pandey further teaches the method wherein generating the relit portrait image includes: generating a combined feature of the training portrait image by combining the albedo feature with the second shading map (Section 3.2.4 and Figure 5 show the albedo and light maps combined before being passed into the Specular Net, which has a U-Net architecture, and combined again with a second shading map before the Neural Rendering network, which is another U-Net, which will encode the combination and generate a combined feature); and conditioning the one or more machine learning models on the combined feature (Section 3.2.4 and Figure 5 show the combined features from the Specular Net being passed into the Neural Rendering network; Section 4.1 teaches training the networks with losses computed from the albedo, geometry, and shading).
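A schematic of the claim 7-8 reading above: encode the albedo into a feature, concatenate it with the second shading map, and condition a downstream network on the combined feature. Layer shapes are arbitrary stand-ins, not Pandey's architecture:

```python
# Hedged sketch of "conditioning on a combined feature" (claims 7-8).
import torch
import torch.nn as nn

class CombineAndCondition(nn.Module):
    def __init__(self):
        super().__init__()
        self.albedo_encoder = nn.Conv2d(3, 16, 3, padding=1)   # stand-in encoder
        self.head = nn.Conv2d(16 + 3, 3, 3, padding=1)         # conditioned head

    def forward(self, albedo, second_shading):
        albedo_feat = self.albedo_encoder(albedo)               # albedo feature
        combined = torch.cat([albedo_feat, second_shading], 1)  # combined feature
        return self.head(combined)
```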
18. Regarding claim 13, Pandey teaches a system, comprising: a processing device; and a computer-readable storage media storing instructions that, responsive to execution by the processing device, cause the processing device to perform operations including (Section 4.2, Paragraph 1 teaches using a memory and an NVIDIA GPU to perform the relighting operations): receiving a training portrait image depicting a human subject (Figure 4 shows an input image, which depicts a human subject, being passed into the relighting network; Section 5, Paragraph 1 teaches acquiring various training portrait images of different subjects); generating a shading map of the training portrait image, the shading map representing a different image signal than the training portrait image (Figure 4 shows diffuse and specular light maps generated by combining the convolved light maps and surface normals. These light maps can be considered shading maps, which are different image signals from the training portrait image; Page 4, Section 3.2, Paragraph 1 teaches using light maps which provide the lighting, and thus also the shading, for the image. Thus, the light maps can be considered shading maps); generating, using one or more machine learning models, an albedo representation of the training portrait image that captures a skin tone of the human subject (Figure 4 shows an albedo representation of the training portrait image produced after passing through the Albedo Net, which is a machine learning model. The training portrait image has a human subject and thus also has a skin tone, which will be present in the albedo representation; Section 4.1, Albedo Adversarial Loss subsection teaches that the Albedo Net is trained to remove shading effects to produce the albedo representation. Removing the shading effects can be considered removing the lighting effects); generating, using the one or more machine learning models, a relit portrait image based on the modified training portrait image by: (Figure 4 shows a final relit foreground image produced after passing through the geometry, albedo, and shading nets, which are all machine learning models. The relit foreground image is also created by applying a specular light map to the albedo, where they pass through a shading net together to output the final relit portrait image. The specular light map can be considered applying a lighting condition to the albedo representation); and training the one or more machine learning models based on a comparison of the relit portrait image and the training portrait image (Section 4.1, Shading L1 Loss section teaches comparing the ground truth relit image with the predicted relit image from the relighting module, and then training the relighting module networks with that loss function). However, Pandey fails to teach generating a superpixel representation of the shading map by partitioning the shading map into superpixels using superpixel segmentation; selecting a subset of the superpixels from the superpixel representation; modifying the training portrait image by drawing the subset of the superpixels from the superpixel representation on the training portrait image, the subset of the superpixels representing one or more computer-generated markings drawn on the training portrait image to be interpreted as a lighting condition to be applied to the training portrait image; and designating the one or more computer-generated markings as the lighting condition.
Vicente teaches generating a superpixel representation of the shading map by partitioning the shading map into superpixels using superpixel segmentation (Section 3.1, Paragraph 1 teaches segmenting the image into superpixels to group pixels into regions that are illuminated or in shadow); selecting a subset of the superpixels from the superpixel representation (Section 5 teaches selecting a subset of superpixels based on positive classifications and using them for relighting. The selected subsets can be considered the computer-generated markings per the claim language and are interpreted as a lighting condition); modifying the training portrait image by drawing the subset of the superpixels from the superpixel representation on the training portrait image, the subset of the superpixels representing one or more computer-generated markings drawn on the training portrait image to be interpreted as a lighting condition to be applied to the training portrait image (Section 3.1 teaches that after the superpixel representation is generated, as seen in Figure 3a, a mask shown in Figure 3c is created. The mask can be considered a selected subset of superpixels. In Figure 3d the mask is then overlaid on the original image; Figure 5 teaches in step (a) that the shadow mask, or selected superpixels, are overlaid on the original image, which indicates a lighting condition to be applied. The lighting condition is removing the shadow in step (d)); and designating the one or more computer-generated markings as the lighting condition (Section 5 teaches selecting a subset of superpixels based on positive classifications and using them for relighting. The selected subsets can be considered the computer-generated markings per the claim language and are interpreted as a lighting condition. Relighting the selected subset of superpixels can be considered a lighting condition which is applied to the image). Pandey and Vicente are considered analogous to the claimed invention because both are in the same field of relighting images. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the system of relighting taught by Pandey with the superpixel partitioning and segmentation taught by Vicente in order to identify illuminated and shadow regions (Vicente Section 3.1, Paragraph 1).
19. Regarding claim 17, Pandey and Vicente teach the limitations of claim 13. Pandey further teaches the system wherein generating the relit portrait image includes: generating, using the one or more machine learning models, an additional shading map of the portrait image by applying the lighting condition to a geometric representation of the training portrait image (Figure 5 teaches a second, or additional, shading map being produced by applying the first shading map, or first specular light maps, to the albedo representation. The second shading map is the output from the Specular Net. Applying the lighting effects of the specular light maps to the albedo representation can be considered applying the lighting effect to a geometric representation, because the albedo representation originates from the geometric representation as seen in Figure 4. The surface normals, or geometric representation, pass through the Albedo Net to produce the albedo representation); and transferring the lighting condition as applied to the geometric representation to the albedo representation (Figure 4 shows the light maps, which are from the lighting condition applied to the geometric representation, being applied to the albedo representation to obtain a final relit foreground image).
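Claim 17's "applying the lighting condition to a geometric representation" has a textbook analogue: with per-pixel surface normals, a Lambertian light map is the clamped dot product with a light direction. A hedged sketch; Pandey's light maps come from learned networks and environment maps, not this formula:

```python
# Simple Lambertian analogue of "lighting condition applied to a geometric
# representation": per-pixel N·L shading from surface normals.
import numpy as np

def lambertian_light_map(normals: np.ndarray, light_dir: np.ndarray) -> np.ndarray:
    """normals: (H, W, 3) unit surface normals; light_dir: (3,) unit vector."""
    shading = np.clip(normals @ light_dir, 0.0, None)   # per-pixel N·L
    return np.repeat(shading[..., None], 3, axis=-1)    # broadcast to an RGB shading map
```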
20. Regarding claim 18, Pandey teaches a non-transitory computer-readable medium storing executable instructions, which when executed by a processing device, cause the processing device to perform operations comprising (Section 4.2, Paragraph 1 teaches a memory and an NVIDIA GPU used to execute operations for relighting): receiving a training portrait image (Figure 4 shows an input image, which depicts a human subject, being passed into the relighting network; Section 5, Paragraph 1 teaches acquiring various training portrait images of different subjects); generating a shading map of the training portrait image, the shading map representing a different image signal than the training portrait image (Figure 4 shows diffuse and specular light maps generated by combining the convolved light maps and surface normals. These light maps can be considered shading maps, which are different image signals than the training portrait image under the broadest reasonable interpretation; Page 4, Section 3.2, Paragraph 1 teaches using light maps which provide the lighting, and thus also the shading, for the image. Thus, the light maps can be considered shading maps); generating, using one or more machine learning models, a relit portrait image based on the modified training portrait image by: (Figure 4 shows a final relit foreground image produced after passing through the geometry, albedo, and shading nets, which are all machine learning models. The relit foreground image is also created by applying a specular light map to the albedo, where they pass through a shading net together to output the final relit portrait image. The specular light map can be considered applying a lighting condition to the albedo representation, which is from the training portrait image); and training the one or more machine learning models based on a comparison of the relit portrait image and the training portrait image (Section 4.1, Shading L1 Loss section teaches comparing the ground truth relit image with the predicted relit image from the relighting module, and then training the relighting module networks with that loss function). However, Pandey fails to teach generating a superpixel representation of the shading map by partitioning the shading map into superpixels using superpixel segmentation; selecting a subset of the superpixels from the superpixel representation; modifying the training portrait image by drawing the subset of the superpixels from the superpixel representation on the training portrait image, the subset of the superpixels representing one or more computer-generated markings drawn on the training portrait image to be interpreted as a lighting condition to be applied to the training portrait image; and designating the one or more computer-generated markings as the lighting condition.

Vicente teaches generating a superpixel representation of the shading map by partitioning the shading map into superpixels using superpixel segmentation (Section 3.1, Paragraph 1 teaches segmenting the image into superpixels to group pixels into regions that are illuminated or in shadow); selecting a subset of the superpixels from the superpixel representation (Section 3.1 teaches that after the superpixel representation is generated, as seen in Figure 3a, a mask shown in Figure 3c is created. The mask can be considered a selected subset of superpixels; Section 5 teaches selecting a subset of superpixels based on positive classifications and using them for relighting. The selected subsets can be considered the computer-generated markings per the claim language and are interpreted as a lighting condition); modifying the training portrait image by drawing the subset of the superpixels from the superpixel representation on the training portrait image, the subset of the superpixels representing one or more computer-generated markings drawn on the training portrait image to be interpreted as a lighting condition to be applied to the training portrait image (Section 3.1 teaches that after the superpixel representation is generated, as seen in Figure 3a, a mask shown in Figure 3c is created. The mask can be considered a selected subset of superpixels. In Figure 3d the mask is then overlaid on the original image; Figure 5 teaches in step (a) that the shadow mask, or selected superpixels, are overlaid on the original image, which indicates a lighting condition to be applied. The lighting condition is removing the shadow in step (d)); and designating the one or more computer-generated markings as the lighting condition (Section 5 teaches selecting a subset of superpixels based on positive classifications and using them for relighting. The selected subsets can be considered the computer-generated markings per the claim language and are interpreted as a lighting condition. Relighting the selected subset of superpixels can be considered a lighting condition which is applied to the image). Pandey and Vicente are considered analogous to the claimed invention because both are in the same field of relighting images. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the operations of relighting a portrait image taught by Pandey with the superpixel partitioning and segmentation taught by Vicente in order to identify illuminated and shadow regions (Vicente Section 3.1, Paragraph 1).

21. Regarding claim 19, Pandey in view of Vicente teaches the limitations of claim 18. However, Pandey fails to teach the non-transitory computer-readable medium wherein each respective superpixel has a color value corresponding to an average color value of individual pixels of the shading map included in the respective superpixel. Vicente teaches wherein each respective superpixel has a color value corresponding to an average color value of individual pixels of the shading map included in the respective superpixel (Page 5, Paragraph 1 teaches that part of the superpixel segmentation involves using mean-shift clustering over the superpixel's mean color. The mean color means that the superpixels have a color value that corresponds to an average color value of the pixels within the superpixel).
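Claim 19's limitation reduces to a per-superpixel average. A minimal sketch, assuming a label image produced by any superpixel segmentation:

```python
# Each superpixel carries the mean color of the pixels it contains (claim 19).
import numpy as np

def superpixel_mean_colors(image: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """image: (H, W, 3); labels: (H, W) superpixel ids. Returns an image where
    every pixel is replaced by its superpixel's average color."""
    out = np.empty_like(image, dtype=float)
    for l in np.unique(labels):
        out[labels == l] = image[labels == l].mean(axis=0)
    return out
```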
Pandey and Vicente are considered analogous to the claimed invention because both are in the same field of relighting images. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the operations of relighting taught by Pandey with the superpixel color values taught by Vicente in order to identify illuminated regions (Vicente Section 3.1, Paragraph 1).

22. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Pandey et al. ("Total Relighting: Learning to Relight Portraits for Background Replacement" -- IDS), hereinafter referred to as Pandey, and Vicente et al. ("Single Image Shadow Removal via Neighbor-Based Region Relighting"), hereinafter referred to as Vicente, as applied to claim 1 above, and further in view of Zhu et al. ("Designing an Illumination-Aware Network for Deep Image Relighting"), hereinafter referred to as Zhu. Regarding claim 9, Pandey in view of Vicente teaches the limitations of claim 1. Pandey further teaches the method wherein the one or more machine learning models include one or more U-Net convolutional neural networks (Figure 4 shows the relighting module consisting of a Geometry, Albedo, and Shading Net; Section 3.2.1 teaches that the Geometry Net is a U-Net; Section 3.2.2 teaches that the Albedo Net is a U-Net; Section 3.2.4 teaches that the Shading Net consists of a Specular Net and a Neural Renderer, which are both U-Nets. Thus, Pandey teaches one or more machine learning models with one or more U-Nets). However, Pandey and Vicente fail to teach augmenting with one of one or more additional dilated convolutional layers and one or more additional non-local operation layers. Zhu teaches augmenting with one or more additional dilated convolutional layers and one or more additional non-local operation layers (Page 4, Column 1, Paragraph 2 teaches extending a U-Net network to have an enlarged receptive field. It teaches that the U-Net is extended to capture global information, which can be considered including non-local operation layers, since capturing global information is non-local; Page 4, Column 2, Paragraph 2 teaches that levels in the network model global illumination changes and global information, which discloses that the network contains global, or non-local, operation layers; Section III, Subsection D, "Network Details" subsection teaches that the network consists of dilated convolutions). Pandey, Vicente, and Zhu are considered analogous to the claimed invention because all are in the same field of relighting images. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the method of relighting with U-Nets taught by Pandey in view of Vicente with the dilated convolutional and non-local operation layers in Zhu in order to achieve a higher quality final relighting result by acquiring global information (Zhu Page 4, Column 1, Paragraph 2).
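For claim 9's added layers, a hedged sketch of a dilated convolution followed by a non-local (global attention) operation; channel counts are illustrative and this is not Zhu's exact architecture:

```python
# Dilated convolution (wider receptive field) plus a residual non-local block.
import torch
import torch.nn as nn

class DilatedNonLocalBlock(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        # Same 3x3 kernel, dilation=2 enlarges the receptive field.
        self.dilated = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
        self.theta = nn.Conv2d(channels, channels // 2, 1)
        self.phi = nn.Conv2d(channels, channels // 2, 1)
        self.g = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        x = torch.relu(self.dilated(x))
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)    # (B, HW, C/2)
        k = self.phi(x).flatten(2)                      # (B, C/2, HW)
        attn = torch.softmax(q @ k, dim=-1)             # (B, HW, HW): global weights
        v = self.g(x).flatten(2).transpose(1, 2)        # (B, HW, C)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out                                  # residual non-local operation
```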
23. Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Pandey et al. ("Total Relighting: Learning to Relight Portraits for Background Replacement" -- IDS), hereinafter referred to as Pandey, in view of Vicente et al. ("Single Image Shadow Removal via Neighbor-Based Region Relighting"), hereinafter referred to as Vicente, as applied to claim 18 above, and further in view of Wober (U.S. Patent No. 5,235,434 A) and Li et al. (Chinese Patent Application Publication No. 114998132 A), hereinafter Li. Regarding claim 20, Pandey in view of Vicente teaches the limitations of claim 18. However, Pandey fails to teach the non-transitory computer-readable medium wherein the superpixels are clusters of contiguous pixels, and the subset of the superpixels includes a first sub-grouping of one or more superpixels having a brightness intensity that exceeds a first threshold, a second sub-grouping of one or more superpixels having a brightness intensity that is less than a second threshold, and at least one randomly selected superpixel. Vicente teaches the non-transitory computer-readable medium wherein each individual superpixel of the superpixels is a cluster of contiguous pixels (Section 3.1 and Figure 3a teach segmenting the images into regions using SLIC, which clusters pixels to create superpixels. The regions segmented in Figure 3a disclose clusters of contiguous pixels). Pandey and Vicente are considered analogous to the claimed invention because both are in the same field of relighting images. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the operations of relighting taught by Pandey with the superpixels taught by Vicente in order to identify illuminated regions (Vicente Section 3.1, Paragraph 1). However, Pandey and Vicente fail to teach that the subset of the superpixels includes a first sub-grouping of one or more superpixels having a brightness intensity that exceeds a first threshold, a second sub-grouping of one or more superpixels having a brightness intensity that is less than a second threshold, and at least one randomly selected superpixel. Wober teaches the subset of the superpixels includes a first sub-grouping of one or more superpixels having a brightness intensity that exceeds a first threshold, and a second sub-grouping of one or more superpixels having a brightness intensity that is less than a second threshold (Column 11, Lines 14-25 teach detecting superpixels that have a brightness below a darkness threshold or above a brightness threshold. The darkness threshold can be considered the second threshold and the brightness threshold can be considered the first threshold. This teaches detecting a grouping, or set, of superpixels that exceed a first threshold or are less than a second threshold. The first and second sub-groupings can be the same sub-grouping under the broadest reasonable interpretation). Pandey, Vicente, and Wober are considered analogous to the claimed invention because all are in the same field of editing the illumination in an image. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the operations of relighting an image taught by Pandey and Vicente with the selection of superpixels above and below a threshold taught in Wober in order to selectively adjust the brightness in large areas without affecting other regions (Wober Column 2, Lines 3-8). However, Pandey, Vicente, and Wober fail to teach at least one randomly selected superpixel. Li teaches at least one randomly selected superpixel (Paragraph [0034] teaches randomly selecting a superpixel for training).
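Assembled from Wober's thresholds and Li's random pick, claim 20's subset construction can be sketched as follows; the threshold values and the per-superpixel brightness input are hypothetical:

```python
# Claim 20's subset: bright superpixels, dark superpixels, plus at least one
# randomly selected superpixel. Thresholds hi/lo are illustrative only.
import numpy as np

def pick_subset(mean_brightness: dict, rng=np.random.default_rng(0),
                hi: float = 0.8, lo: float = 0.2):
    """mean_brightness: {superpixel_id: mean intensity in [0, 1]}."""
    bright = {l for l, m in mean_brightness.items() if m > hi}   # first sub-grouping
    dark = {l for l, m in mean_brightness.items() if m < lo}     # second sub-grouping
    random_pick = {int(rng.choice(list(mean_brightness)))}       # at least one random
    return bright | dark | random_pick
```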
Pandey, Vicente, and Wober are considered analogous to the claimed invention because all are in the same field of editing the illumination in an image. Li is analogous to the claimed invention because it is in the same field of training a model with superpixel-segmented images. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the operations of relighting taught by Pandey in view of Vicente and Wober with the random superpixel selection taught by Li in order to train a network (Li Paragraphs [0031]-[0034]).

24. Claims 22 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Pandey et al. ("Total Relighting: Learning to Relight Portraits for Background Replacement" -- IDS), hereinafter referred to as Pandey, in view of Vicente et al. ("Single Image Shadow Removal via Neighbor-Based Region Relighting"), hereinafter referred to as Vicente, as applied to claim 1 above, and further in view of Pellacini et al. ("Lighting with Paint"), hereinafter referred to as Pellacini.

25. Regarding claim 22, Pandey in view of Vicente teaches the limitations of claim 1. Pandey further teaches the method further comprising: (Figure 4 shows an albedo representation of a portrait image produced after passing through the Albedo Net, which is a machine learning model. It would be obvious to a person having ordinary skill in the art to reuse the same process in Figure 4 for the training portrait image and another portrait image; Section 4.1, Albedo Adversarial Loss subsection teaches that the Albedo Net is trained to remove shading effects to produce the albedo representation. Removing the shading effects can be considered removing the lighting effects); and generating, using the one or more trained machine learning models, an additional relit portrait image (Figure 4 shows a relit foreground image produced after passing through the geometry, albedo, and shading nets, which are all machine learning models. This relit foreground image can be considered an additional relit portrait image that is output when reusing the process in Figure 4 for the new portrait image. The relit foreground image is also created by applying a specular light map to the albedo, where they pass through a shading net together to output the final relit portrait image. The specular light map can be considered applying a lighting condition to the albedo representation, which is from the training portrait image). However, Pandey and Vicente fail to teach receiving user input defining one or more markings drawn on a portrait image and interpreting the one or more markings as an additional lighting condition. Pellacini teaches receiving user input defining one or more markings drawn on a portrait image and interpreting the one or more markings as an additional lighting condition (Figure 2 and Section 3.2, steps (g)-(j), teach that when a user paints a yellow marking on the image, the computer adds a yellow light. The yellow marking is a user input that is drawn on an image. Similarly, when the user paints a blue marking on the image, the computer then adds a blue light. Thus, the markings are interpreted as an additional lighting condition; Section 4, Paragraph 1 teaches the user painting markings directly onto the scene or image, which can contain objects. Thus, the image could be a portrait image with the object being a human subject).
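The Pellacini-style interaction can be sketched minimally as below; the additive blend and strength parameter are hypothetical simplifications, since Pellacini actually solves for light parameters that reproduce the painted stroke:

```python
# A painted colored stroke interpreted as an added light of that color.
import numpy as np

def apply_painted_light(image: np.ndarray, stroke_mask: np.ndarray,
                        stroke_color: np.ndarray, strength: float = 0.4):
    """image: float (H, W, 3) in [0, 1]; stroke_mask: bool (H, W);
    stroke_color: (3,), e.g. np.array([1.0, 1.0, 0.0]) for a yellow light."""
    out = image.copy()
    out[stroke_mask] = np.clip(out[stroke_mask] + strength * stroke_color, 0, 1)
    return out
```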
Pandey, Vicente, and Pellacini are considered analogous to the claimed invention because all are in the same field of relighting images. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the method of relighting an image taught by Pandey in view of Vicente with the user markings taught by Pellacini in order to allow users to easily add and manipulate colored lighting in a natural way, without having to deal with tweaking various parameters in a drawing application (Pellacini, Section 3.2, Paragraph 2).

26. Regarding claim 26, Pandey in view of Vicente and Pellacini teaches the limitations of claim 22. However, Pandey and Vicente fail to teach the method wherein the one or more markings are drawn with one or more colors, and the lighting condition includes one or more colored lighting effects of the one or more colors. Pellacini teaches the method wherein the one or more markings are drawn with one or more colors, and the lighting condition includes one or more colored lighting effects of the one or more colors (Figure 2 and Section 3.2, steps (g)-(j), teach that when a user paints a yellow marking on the image, the computer adds a yellow light. The yellow marking is a user input that is drawn on an image. Similarly, when the user paints a blue marking on the image, the computer then adds a blue light; Section 4, Paragraph 1 teaches the user painting markings directly onto the scene or image, which can contain objects. Thus, the image could be a portrait image with the object being a human subject). Pandey, Vicente, and Pellacini are considered analogous to the claimed invention because all are in the same field of relighting images. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the method of relighting an image taught by Pandey in view of Vicente with the user markings taught by Pellacini in order to allow users to easily add and manipulate colored lighting in a natural way, without having to deal with tweaking various parameters in a drawing application (Pellacini, Section 3.2, Paragraph 2).

27. Claims 23-24 are rejected under 35 U.S.C. 103 as being unpatentable over Pandey et al. ("Total Relighting: Learning to Relight Portraits for Background Replacement" -- IDS), hereinafter referred to as Pandey, in view of Vicente et al. ("Single Image Shadow Removal via Neighbor-Based Region Relighting"), hereinafter referred to as Vicente, and Pellacini et al. ("Lighting with Paint"), hereinafter referred to as Pellacini, as applied to claim 22 above, and further in view of Long et al. ("One-Click White Balance using Human Skin Reflectance"), hereinafter referred to as Long, and Kuo et al. (U.S. Patent Application Publication No. 2021/0390770 A1), hereinafter referred to as Kuo.

28. Regarding claim 23, Pandey in view of Vicente and Pellacini teaches the limitations of claim 22. Pandey further teaches generating the additional albedo representation (Figure 4 teaches an albedo representation of the portrait image produced after passing through the Albedo Net). However, Pandey, Vicente, and Pellacini fail to teach receiving user input specifying a skin tone color value; identifying a region of the portrait image that includes exposed skin of a human subject; and generating a skin tone map having the region filled with the skin tone color value.
Long teaches receiving user input specifying a skin tone color value (Page 2, Section 3.1 teaches a user-selected patch of skin which is used to specify the skin tone; Figure 2 shows the user selecting a patch of skin, and the description teaches that the selected patch of skin is used to select a reference skin color); and identifying a region of the portrait image that includes exposed skin of a human subject (Figure 2 shows the user selecting a patch of skin, which can also be considered identifying a region with the exposed skin of the human subject). Pandey, Vicente, Pellacini, and Long are considered analogous to the claimed invention because all are in the same field of relighting an image. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the method of relighting and generating the albedo taught by Pandey in view of Vicente and Pellacini with the skin tone color user selection in Long in order to easily recognize the skin despite distorted lighting (Long Section 3.1, first bullet point) and to obtain an illuminant-independent representation, or albedo representation, of the image (Long Section 3.1, Paragraph 6). However, Pandey, Vicente, Pellacini, and Long fail to teach generating a skin tone map having the region filled with the skin tone color value. Kuo teaches generating a skin tone map having the region filled with the skin tone color value (Paragraph 62 and Figure 3 teach creating a skin tone map 308 that has a region filled with a selected skin tone color value 306. The skin tone color is extracted from regions in the face). Pandey, Vicente, Pellacini, and Long are considered analogous to the claimed invention because all are in the same field of relighting an image. Kuo is considered analogous to the claimed invention because it is in the same field of lighting a subject's skin. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the method of relighting and generating the albedo as taught by Pandey in view of Vicente, Pellacini, and Long with the skin tone map in Kuo in order to create a more accurate light estimation (Kuo Paragraph 62).
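In code, the skin-tone-map limitation amounts to filling the skin region of a map with one representative color. A minimal sketch using a plain mean where Kuo is cited for K-means clustering; detection of the skin region is assumed given:

```python
# Fill the exposed-skin region of a map with a representative skin tone.
import numpy as np

def skin_tone_map(image: np.ndarray, skin_mask: np.ndarray) -> np.ndarray:
    """image: float (H, W, 3); skin_mask: bool (H, W) marking exposed skin."""
    tone = image[skin_mask].mean(axis=0)   # representative skin tone color
    out = np.zeros_like(image)
    out[skin_mask] = tone                  # region filled with the tone value
    return out
```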
29. Regarding claim 24, Pandey in view of Vicente, Pellacini, Long, and Kuo teaches the limitations of claim 23. Pandey further teaches the method wherein generating the additional albedo representation includes (Figure 4 shows an albedo representation of the portrait image produced after passing through the Albedo Net. It would be obvious to a person having ordinary skill in the art to reuse the same process in Figure 4 for the training portrait image and another portrait image): conditioning the one or more trained machine learning models on the portrait image and the geometric representation (Figure 4 shows the networks being conditioned on the input foreground, surface normals, and the albedo. The input foreground is the portrait image and the surface normals are the geometric representation; Section 4.1 also mentions training the networks with losses computed from the albedo, geometry, and shading), and generating, using the one or more trained machine learning models, an initial albedo representation of the portrait image having the lighting effects removed (Figure 4 shows an albedo representation of the portrait image produced after passing through the Albedo Net; Section 4.1, Albedo Adversarial Loss subsection mentions that the Albedo Net is trained to remove shading effects to produce the albedo representation. Removing the shading effects can be considered removing the lighting effects). However, Pandey, Vicente, Pellacini, and Long fail to teach the skin tone map. Kuo teaches the skin tone map (Paragraph 62 and Figure 3 teach creating a skin tone map 308 that has a region filled with a selected skin tone color value 306. The skin tone color is extracted from regions in the face). Pandey, Vicente, Pellacini, and Long are considered analogous to the claimed invention because all are in the same field of relighting an image. Kuo is considered analogous to the claimed invention because it is in the same field of lighting a subject's skin. Pandey teaches conditioning the networks with the albedo representations, which, as taught in combination with Kuo, contain the skin tone map. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the method of relighting and generating the albedo as taught by Pandey in view of Vicente, Pellacini, and Long with the skin tone map in Kuo in order to create a more accurate light estimation (Kuo Paragraph 62).

30. Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Pandey et al. ("Total Relighting: Learning to Relight Portraits for Background Replacement" -- IDS), hereinafter referred to as Pandey, in view of Vicente et al. ("Single Image Shadow Removal via Neighbor-Based Region Relighting"), hereinafter referred to as Vicente, Pellacini et al. ("Lighting with Paint"), hereinafter referred to as Pellacini, Long et al. ("One-Click White Balance using Human Skin Reflectance"), hereinafter referred to as Long, and Kuo et al. (U.S. Patent Application Publication No. 2021/0390770 A1), hereinafter referred to as Kuo, as applied to claim 24 above, and further in view of Ouyang (Chinese Patent Application Publication No. 107862657 A). Regarding claim 25, Pandey in view of Vicente, Pellacini, Long, and Kuo teaches the limitations of claim 24. Pandey further teaches the method wherein generating the additional albedo representation (Figure 4 shows an albedo representation of the portrait image produced after passing through the Albedo Net). However, Pandey, Vicente, Pellacini, and Long fail to teach shifting pixel color values in the region of the initial albedo representation to be closer to the skin tone color value, wherein at least one pixel color value in the region is shifted by less than a difference between the at least one pixel color value and the skin tone color value. Kuo teaches shifting pixel color values in the region of the initial albedo representation to be closer to the skin tone color value (Paragraphs 61-62 and Figure 3 teach obtaining a mean albedo using K-means or any clustering algorithm to obtain a skin tone color, which is used to create a skin tone map 308 filled with that skin tone color.
This creates an albedo representation with pixel colors shifted toward the skin tone color, as seen in Figure 4B, columns 402 and 404). Pandey, Vicente, Pellacini, and Long are considered analogous to the claimed invention because all are in the same field of relighting an image. Kuo is considered analogous to the claimed invention because it is in the same field of lighting a subject's skin. Pandey teaches conditioning the networks with the albedo representations, which, as taught in combination with Kuo, contain the skin tone map. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the system of relighting and generating the albedo as taught by Pandey in view of Vicente and Long with the shifting of skin tone color values to a skin tone color value in Kuo in order to create a more accurate light estimation (Kuo Paragraph 62). However, Pandey, Vicente, Pellacini, Long, and Kuo fail to teach wherein at least one pixel color value in the region is shifted by less than a difference between the at least one pixel color value and the skin tone color value. Ouyang teaches wherein at least one pixel color value in the region is shifted by less than a difference between the at least one pixel color value and the skin tone color value (Paragraphs 44-45 teach shifting the pixel color of a region to become closer to a first target skin color. Ouyang teaches that the pixel color value only needs to be shifted enough so that it is within a preset range of the first target skin color; the pixel color value is not required to be shifted all the way to the first target skin color. Thus, Ouyang teaches that the pixel color value can be shifted by less than a difference between the original pixel color and the skin tone color value). Pandey, Vicente, Pellacini, and Long are considered analogous to the claimed invention because all are in the same field of relighting an image. Kuo and Ouyang are considered analogous to the claimed invention because both are in the same field of lighting a subject's skin. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the system of relighting and generating the albedo representation taught by Pandey in view of Vicente, Long, and Kuo with the shifting of the pixel color value taught by Ouyang in order to make people in the processed image appear more realistic and natural (Ouyang Paragraph 21).
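The partial shift the examiner reads onto Ouyang for claims 25 and 30 is a one-liner: move a pixel's color toward the target tone by only a fraction of the difference. The fraction is hypothetical; Ouyang is cited for shifting into a preset range of the target rather than all the way:

```python
# Shift by fraction * (target - pixel); with fraction < 1 the shift is less
# than the full difference, as the claim requires.
import numpy as np

def shift_toward(pixel: np.ndarray, target_tone: np.ndarray,
                 fraction: float = 0.5) -> np.ndarray:
    return pixel + fraction * (target_tone - pixel)
```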
31. Claims 28-29 are rejected under 35 U.S.C. 103 as being unpatentable over Pandey et al. ("Total Relighting: Learning to Relight Portraits for Background Replacement" -- IDS), hereinafter referred to as Pandey, in view of Vicente et al. ("Single Image Shadow Removal via Neighbor-Based Region Relighting"), hereinafter referred to as Vicente, as applied to claim 13 above, and further in view of Long et al. ("One-Click White Balance using Human Skin Reflectance"), hereinafter referred to as Long, and Kuo et al. (U.S. Patent Application Publication No. 2021/0390770 A1), hereinafter referred to as Kuo.

32. Regarding claim 28, Pandey in view of Vicente teaches the limitations of claim 13. Pandey further teaches the method wherein generating the albedo representation (Figure 4 shows an albedo representation of the portrait image produced after passing through the Albedo Net) includes: receiving a training albedo image of the human subject (Figure 4 shows the input foreground image being passed into the Albedo Net to generate an albedo representation. The input foreground image can be considered the training albedo image of the human subject, since it is used to create the albedo representation. Applicant has also not defined "training albedo image"). However, Pandey and Vicente fail to teach the system identifying a region of the portrait image that includes exposed skin of the human subject, and generating a skin tone map having the region filled with an average skin tone color value in the region. Long teaches the system identifying a region of the portrait image that includes exposed skin of the human subject (Figure 2 shows the user selecting a patch of skin, which can also be considered identifying a region with the exposed skin of the human subject). Pandey, Vicente, and Long are considered analogous to the claimed invention because all are in the same field of relighting an image. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the system of relighting and generating the albedo taught by Pandey in view of Vicente with the identification of exposed skin in Long in order to easily recognize the skin despite distorted lighting (Long Section 3.1, first bullet point) and to obtain an illuminant-independent representation, or albedo representation, of the image (Long Section 3.1, Paragraph 6). However, Pandey, Vicente, and Long fail to teach generating a skin tone map having the region filled with an average skin tone color value in the region. Kuo teaches generating a skin tone map having the region filled with an average skin tone color value in the region (Paragraphs 61-62 and Figure 3 teach obtaining a mean albedo using K-means or any clustering algorithm to obtain a skin tone color, which is used to create a skin tone map 308 filled with that skin tone color. Using the K-means algorithm can be considered determining the average skin tone color value in the region). Pandey, Vicente, and Long are considered analogous to the claimed invention because all are in the same field of relighting an image. Kuo is considered analogous to the claimed invention because it is in the same field of lighting a subject's skin. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the method of relighting and generating the albedo as taught by Pandey in view of Vicente and Long with the skin tone map in Kuo in order to create a more accurate light estimation (Kuo Paragraph 62).
33. Regarding claim 29, Pandey in view of Vicente, Long, and Kuo teach the limitations of claim 28. Pandey further teaches a system wherein generating the albedo representation includes (Figure 4 teaches an albedo representation of the portrait image produced after passing through the Albedo Net): conditioning the one or more machine learning models on the training portrait image (Figure 4 teaches the networks being conditioned on the input foreground, which is the portrait image, the surface normals, which are the geometric representation, and the albedo; Section 4.1 also teaches training the networks with the losses computed from the albedo, geometry representation, and shading); and generating, using the one or more machine learning models, a first albedo representation of the training portrait image (Figure 4 teaches an albedo representation of the training portrait image generated after passing through the Albedo Net, which is a machine learning model). However, Pandey, Vicente, and Long fail to teach a skin tone map. Kuo teaches a skin tone map (Paragraph 62 and Figure 3 teach creating a skin tone map 308 that has a region filled with a selected skin tone color value 306. The skin tone color is extracted from regions in the face). Pandey, Vicente, and Long are considered analogous to the claimed invention because all are in the same field of relighting an image. Kuo is considered analogous to the claimed invention because it is in the same field of lighting a subject's skin. Pandey teaches conditioning the networks with the albedo representations, which, as taught in combination with Kuo, contain the skin tone map. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the system of relighting and generating the albedo as taught by Pandey in view of Vicente and Long with the skin tone map in Kuo in order to create a more accurate light estimation (Kuo Paragraph 62).

34. Claim 30 is rejected under 35 U.S.C. 103 as being unpatentable over Pandey et al. ("Total Relighting: Learning to Relight Portraits for Background Replacement" -- IDS), hereinafter referred to as Pandey, in view of Vicente et al. ("Single Image Shadow Removal via Neighbor-Based Region Relighting"), hereinafter referred to as Vicente, Long et al. ("One-Click White Balance using Human Skin Reflectance"), hereinafter referred to as Long, and Kuo et al. (U.S. Patent Application Publication No. 2021/0390770 A1), hereinafter referred to as Kuo, as applied to claim 29 above, and further in view of Ouyang (Chinese Patent Application Publication No. 107862657 A). Regarding claim 30, Pandey in view of Vicente, Long, and Kuo teach the limitations of claim 29. Pandey further teaches the system wherein generating the albedo representation (Figure 4 shows an albedo representation of the portrait image produced after passing through the Albedo Net). However, Pandey, Vicente, and Long fail to teach shifting pixel color values in the region of the first albedo representation to be closer to the average skin tone color value, wherein at least one pixel color value in the region is shifted by less than a difference between the at least one pixel color value and the average skin tone color value. Kuo teaches shifting pixel color values in the region of the first albedo representation to be closer to the average skin tone color value (Paragraphs 61-62 and Figure 3 teach obtaining a mean albedo using K-means or any clustering algorithm to obtain a skin tone color, which is used to create a skin tone map 308 filled with that skin tone color.
34. Claim(s) 30 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pandey et al. ("Total Relighting: Learning to Relight Portraits for Background Replacement" -- IDS), hereinafter referred to as Pandey, in view of Vicente et al. ("Single Image Shadow Removal via Neighbor-Based Region Relighting"), hereinafter referred to as Vicente, Long et al. ("One-Click White Balance using Human Skin Reflectance"), hereinafter referred to as Long, and Kuo et al. (U.S. Patent Application Publication No. 2021/0390770 A1), hereinafter referred to as Kuo, as applied to claim 29 above, and further in view of Ouyang (Chinese Patent Application Publication No. 107862657 A).

Regarding claim 30, Pandey in view of Vicente, Long, and Kuo teach the limitations of claim 29. Pandey further teaches the system and the generating of the albedo representation (Figure 4 shows an albedo representation of the portrait image produced after passing through the Albedo Net). However, Pandey, Vicente, and Long fail to teach shifting pixel color values in the region of the first albedo representation to be closer to the average skin tone color value, wherein at least one pixel color value in the region is shifted by less than a difference between the at least one pixel color value and the average skin tone color value. Kuo teaches shifting pixel color values in the region of the first albedo representation to be closer to the average skin tone color value (Paragraphs 61-62 and Figure 3 teach obtaining a mean albedo using K-means or any clustering algorithm to obtain a skin tone color, which is used to create a skin tone map 308 filled with that skin tone color. Using the K-means algorithm can be considered determining the average skin tone color value in the region. This creates an albedo representation with the skin tone color shifted to the average skin tone color, as seen in Figure 4B, columns 402 and 404). Pandey, Vicente, and Long are considered analogous to the claimed invention because all are in the same field of relighting an image. Kuo is considered analogous to the claimed invention because it is in the same field of lighting a subject's skin. Pandey teaches conditioning the networks with the albedo representations, which, in combination with Kuo, contain the skin tone map. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the system of relighting and generating of the albedo as taught by Pandey in view of Vicente and Long with the shifting of skin tone color values to an average skin tone color value in Kuo in order to create a more accurate light estimation (Kuo Paragraph 62).

However, Pandey, Vicente, Long, and Kuo fail to teach wherein at least one pixel color value in the region is shifted by less than a difference between the at least one pixel color value and the average skin tone color value. Ouyang teaches wherein at least one pixel color value in the region is shifted by less than a difference between the at least one pixel color value and the average skin tone color value (Paragraphs 44-45 teach shifting the pixel color of a region to become closer to a first target skin color. The target skin color can be the average skin tone color value as taught by Kuo above. Ouyang further teaches that the pixel color value only needs to be shifted enough so that it is within a preset range of the first target skin color; the pixel color value is not required to be shifted all the way to the first target skin color. Thus, Ouyang teaches that the pixel color value can be shifted by less than the difference between the original pixel color and the skin tone color value). Pandey, Vicente, and Long are considered analogous to the claimed invention because all are in the same field of relighting an image. Kuo and Ouyang are considered analogous to the claimed invention because both are in the same field of lighting a subject's skin. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the system of relighting and generating the albedo representation taught by Pandey in view of Vicente, Long, and Kuo with the shifting of the pixel color value taught by Ouyang in order to make people in the processed image appear more realistic and natural (Ouyang Paragraph 21).
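For illustration, the kind of bounded shift Ouyang is cited for could look like the sketch below: each skin pixel moves toward the target (average) skin tone only far enough to land within a preset tolerance, so the shift is less than the full pixel-to-target difference. The function name, the `skin_mask`, and the `tol` parameter are assumptions for the sketch, not details disclosed in the references.

```python
# Illustrative sketch (assumed, not Ouyang's implementation): shift skin
# pixels toward a target/average skin tone, stopping at a preset
# tolerance so no pixel is moved by the full difference.
import numpy as np

def shift_within_range(image: np.ndarray, skin_mask: np.ndarray,
                       mean_tone: np.ndarray, tol: float = 0.05) -> np.ndarray:
    out = image.copy()
    px = image[skin_mask]                                # N x 3 colors in the region
    diff = mean_tone - px                                # full per-pixel difference
    dist = np.linalg.norm(diff, axis=1, keepdims=True)
    # Move each pixel just onto the tolerance sphere around the target;
    # pixels already within tolerance are left untouched (step = 0).
    step = np.clip(1.0 - tol / np.maximum(dist, 1e-8), 0.0, 1.0)
    out[skin_mask] = px + step * diff
    return out
```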
35. Claim(s) 31 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pandey et al. ("Total Relighting: Learning to Relight Portraits for Background Replacement" -- IDS), hereinafter referred to as Pandey, in view of Vicente et al. ("Single Image Shadow Removal via Neighbor-Based Region Relighting"), hereinafter referred to as Vicente, Wober (U.S. Patent No. 5,235,434 A), and Li et al. (Chinese Patent Application Publication No. 114998132 A), hereinafter referred to as Li, as applied to claim 20 above, and further in view of Li et al. ("Outlier-Robust Superpixel-Level CFAR Detector with Truncated Clutter for Single Look Complex SAR Images"), hereinafter referred to as "Outlier-Robust".

Regarding claim 31, Pandey in view of Vicente, Wober, and Li teaches the limitations of claim 20. However, Pandey, Vicente, Wober, and Li fail to teach wherein the at least one randomly selected superpixel is randomly sampled using a truncated exponential distribution. "Outlier-Robust" teaches wherein the at least one randomly selected superpixel is randomly sampled using a truncated exponential distribution (Page 5263, Section II, Subsection C teaches selecting a superpixel through a truncated exponential distribution of clutter samples; Page 5265, Figure 3 teaches setting a truncation depth and selecting a sample or superpixel using the truncated exponential distribution). Pandey, Vicente, and Wober are considered analogous to the claimed invention because all are in the same field of editing the illumination in an image. Li and "Outlier-Robust" are considered analogous to the claimed invention because both are in the same field of superpixel segmentation of images. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the method of relighting an image taught by Pandey, Vicente, Wober, and Li with the selecting of a superpixel through a truncated exponential distribution taught by "Outlier-Robust" in order to ignore background superpixels ("Outlier-Robust" Page 5263, Section II, Subsection C, Paragraph 1).
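As an illustration of the sampling step at issue, a superpixel index could be drawn from a truncated exponential distribution via inverse-CDF sampling, as in the assumed sketch below. The `rate` parameter and the choice to truncate at the last index are illustrative, not taken from the cited CFAR detector.

```python
# Illustrative sketch (assumed, not the cited CFAR detector): sample a
# superpixel index from an exponential distribution truncated at the
# number of superpixels, using inverse-CDF sampling.
import numpy as np

def sample_superpixel(num_superpixels: int, rate: float = 0.5, rng=None) -> int:
    rng = np.random.default_rng() if rng is None else rng
    t = float(num_superpixels)                 # truncation point
    u = rng.random()                           # uniform draw in [0, 1)
    # Inverse CDF of Exp(rate) restricted to [0, t): lower indices are
    # favored, and values beyond the truncation point cannot occur.
    x = -np.log(1.0 - u * (1.0 - np.exp(-rate * t))) / rate
    return min(int(x), num_superpixels - 1)
```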
Conclusion

36. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

37. Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTINE Y AHN, whose telephone number is (571) 272-0672. The examiner can normally be reached M-F, 8am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHRISTINE YERA AHN/
Examiner, Art Unit 2615

/ALICIA M HARRINGTON/
Supervisory Patent Examiner, Art Unit 2615

Prosecution Timeline

Jun 02, 2023: Application Filed
Mar 27, 2025: Non-Final Rejection — §103
Jul 02, 2025: Applicant Interview (Telephonic)
Jul 02, 2025: Response Filed
Jul 02, 2025: Examiner Interview Summary
Aug 06, 2025: Final Rejection — §103
Aug 20, 2025: Applicant Interview (Telephonic)
Aug 20, 2025: Examiner Interview Summary
Aug 26, 2025: Request for Continued Examination
Aug 31, 2025: Response after Non-Final Action
Sep 15, 2025: Non-Final Rejection — §103
Dec 04, 2025: Examiner Interview Summary
Dec 04, 2025: Applicant Interview (Telephonic)
Dec 11, 2025: Response Filed
Feb 12, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602877: BODY MODEL PROCESSING METHODS AND APPARATUSES, ELECTRONIC DEVICES AND STORAGE MEDIA. Granted Apr 14, 2026 (2y 5m to grant).
Patent 12548187: INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM. Granted Feb 10, 2026 (2y 5m to grant).
Patent 12456274: FACIAL EXPRESSION AND POSE TRANSFER UTILIZING AN END-TO-END MACHINE LEARNING MODEL. Granted Oct 28, 2025 (2y 5m to grant).
Patent 12450810: ANIMATED FACIAL EXPRESSION AND POSE TRANSFER UTILIZING AN END-TO-END MACHINE LEARNING MODEL. Granted Oct 21, 2025 (2y 5m to grant).
Patent 12439025: APPARATUS, SYSTEM, METHOD, STORAGE MEDIUM, AND FILE FORMAT. Granted Oct 07, 2025 (2y 5m to grant).
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 69%
With Interview: 99% (+37.5%)
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
