Detailed Action
1. Claims 1-20 are pending in this application.
Notice of Pre-AIA or AIA Status
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
3. Applicant's response to the last Office Action, filed on 01/08/2026, has been entered and made of record.
4. Claims 1, 9 and 17 have been amended.
Response to Arguments
5. The Applicant's arguments filed on 02/26/2026 have been fully considered. For the Examiner's response, see the discussion below.
a) The Applicant has amended claims 1, 9 and 17, and substantially argues that the applied prior art does not teach the added limitation. The Applicant's argument is persuasive. However, after further search and consideration, new prior art that teaches the added limitation has been found.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. Claims 1-5, 9-13 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Shreyas Yadav (hereafter YADAV), "Sky Replacement Technique in Adobe Photoshop," published 2021, in view of Binglin Niu et al. (hereafter Niu), "Semantic Segmentation of Remote Sensing Image Based on Convolutional Neural Network and Mask Generation," Mathematical Problems in Engineering, Volume 2021.
As to claim 1, YADAV teaches a method comprising: generating a clean mask (pages 6 and 11, Figs. 6 and 6, YADAV specifically teaches a sky image adjustment that includes Brightness and Temperature adjustments, where the Brightness adjustment filter (mask) increases the sharpness of the image. Thus, the clean mask corresponds to the Brightness adjustment filter. The Applicant disclosed a shape mask as an example of a clean mask; see par. [0028] of the specification) and a compositing mask (Figs. 3 and 7, page 11, once you make the Sky Adjustments, the next option is Foreground Adjustments. Foreground Adjustments help adjust Lighting Mode, Lighting Adjustment and Color Adjustment. These adjustments help blend the Foreground with the newly replaced Sky effectively. The compositing mask corresponds to the Foreground, Lighting and Color Adjustments. This correspondence derives from the specification of this application, where the Applicant in par. [0028] disclosed: "a compositing mask (e.g., a soft mask used for blending portions of different images together into a composite) for an input image using a mask generation network" (as discussed above, the Brightness adjustment filter, Lighting Adjustment filter and Color Adjustment filter correspond to the mask generation network));
generating a plurality of layers using the clean mask and the compositing mask, wherein the plurality of layers includes an edge lighting layer generated based on a subset of the plurality of layers and the clean mask (pages 9-10, Fig. 5, YADAV teaches a shift edge filter and a fade edge filter, where Fade Edge softens the transition between the sharpened area and the rest of the image. High Fade softens the sharp brightness spikes at the edge, making the "lighting" effect look more natural and gradual; this can be done by moving the slider bar to the right. Low Fade keeps the edge lighting very abrupt and high-contrast, which can make fine details look much harsher; this can be done by moving the slider bar to the left); and
generating a composite image by combining the input image and the plurality of layers including the edge lighting layer (pages 12-14, Figs. 10-11, which illustrate a final image with a cloudy sky, replaced using the Sky Replacement features discussed above (see Fig. 11). The final image is generated by blending the original image with the cloudy sky, wherein the blending process replaces the original sky (Fig. 10) with the cloudy sky image (Fig. 11)).
However, it is noted that YADAV does not specifically teach "generation network, wherein the mask generation network includes one or more segmentation networks trained to generate one or more masks for input images;"
On the other hand, Niu teaches a generation network, wherein the mask generation network includes one or more segmentation networks trained to generate one or more masks for input images (Fig. 1, Abstract: a semantic segmentation method for remote sensing images based on convolutional neural network and mask generation is proposed. In this method, the boundary box is used as the initial foreground segmentation profile, and the edge information of the foreground object is obtained by using the multilayer features of the convolutional neural network. In order to obtain the rough object segmentation mask, the general shape and position of the foreground object are estimated by using the high-level features in the process of layer-by-layer iteration).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the semantic segmentation method for remote sensing images based on convolutional neural network and mask generation taught by Niu into YADAV.
The suggestion/motivation for doing so would have been to allow the user of YADAV to adjust edges of the sky images by enabling precise pixel-level isolation of the sky region from objects like buildings, trees, or clouds, which facilitates more accurate and targeted processing.
As to claim 2, YADAV teaches that generating a plurality of layers using the clean mask and the compositing mask (this limitation is discussed in claim 1 above) further comprises: generating a defringing layer by combining a lighting mask with a grayscale version of a second image (Defringing Layer Edges and grayscale conversion via Image > Mode > Grayscale are integral parts of Photoshop 2021. In Photoshop 2021 and older versions, "defringing" typically refers to two different processes: removing halo-like edges from a cutout layer (selection matting) or fixing chromatic aberration (optical color fringing). In grayscale mode, standard color-based defringing tools may be unavailable or limited because there is no color data. The blending command of Photoshop 2021 combines the lighting mask with a grayscale image).
As to claim 3, YADAV teaches generating a region-specific layer by combining the compositing mask with the second image (page 11, Fig. 12, Foreground Adjustments help adjust Lighting Mode, Lighting Adjustment and Color Adjustment. You can make the foreground adjustments using these three options. These adjustments help blend the Foreground with the newly replaced Sky effectively, and the image is generated by blending (combining) the adjusted image with the new sky as part of the background image).
As to claim 4, YADAV teaches that generating a composite image by combining the input image and the plurality of layers including the edge lighting layer (Fig. 3, page 7, the Sky Edge Adjustments include a Shift Edge adjustment and a Fade Edge adjustment, where the Shift Edge adjustment lightens or darkens the edge) further comprises: combining, in order, the input image, a foreground color adjustment layer (Figs. 3, 6 and 8, pages 10-12, the Foreground Adjustments include Lighting Adjustment and Color Adjustment. These adjustments help blend the Foreground with the newly replaced Sky effectively), the defringing layer, the edge lighting layer, and the region-specific layer to generate the composite image (pages 12-14, Figs. 10-11, the final image, which is obtained by replacing the sky section of the original image with the cloudy sky, is obtained by combining the sky replacement features that include the Shift Edge adjustment, the Fade Edge adjustment, and the Foreground Adjustments including Lighting Adjustment and Color Adjustment).
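For illustration only, the ordered layer compositing recited in claim 4 can be sketched as follows; the function and variable names are hypothetical and appear in neither the claims nor the applied references, and the standard alpha "over" operator stands in for whatever blending Photoshop actually performs:

```python
import numpy as np

def composite_over(base, layer_rgb, layer_alpha):
    """Alpha-composite one layer over the current result (standard 'over' operator)."""
    alpha = layer_alpha[..., None]  # broadcast the single-channel mask over RGB
    return layer_rgb * alpha + base * (1.0 - alpha)

def build_composite(input_image, layers):
    """Combine the input image with layers applied in the recited order
    (foreground color adjustment, defringing, edge lighting, region-specific);
    the order of the `layers` list encodes the claimed order."""
    result = input_image.astype(np.float64)
    for rgb, alpha in layers:
        result = composite_over(result, rgb.astype(np.float64), alpha)
    return result

# Tiny 1x1 example: a fully opaque layer replaces the base pixel entirely.
base = np.zeros((1, 1, 3))
red = np.array([[[1.0, 0.0, 0.0]]])
opaque = np.ones((1, 1))
out = build_composite(base, [(red, opaque)])
```

Because each pass composites one layer over the accumulated result, the list order directly determines which layer sits on top, mirroring the "combining, in order" language of the claim.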
As to claim 5, YADAV teaches determining a blending mode of the edge lighting layer independently of other layers from the plurality of layers (Figs. 4-5, pages 8-10, the Sky Edge Adjustments include Shift Edge, which helps define the boundary between the sky and the foreground. The Edge slider lightens the edge when the slider is dragged toward the right).
As to claim 9, YADAV teaches a non-transitory computer-readable medium storing executable instructions, which when executed by a processing device, cause the processing device to perform operations (Figs. 1-3, Adobe Photoshop 2020 is a professional software program used for image editing, graphic design, and digital art. The program is stored in a computer memory, and when the program is executed by a computer processor, it causes the processor to carry out various types of image editing). Regarding the remaining limitations of claim 9, they are the same as the limitations of claim 1. Thus, arguments analogous to those presented above for claim 1 are also applicable to claim 9.
Claim 10 is rejected the same as claim 2, except that claim 10 is directed to a computer program. Thus, arguments analogous to those presented above for claims 9 and 2 are applicable to claim 10.
Claim 11 is rejected the same as claim 3, except that claim 11 is directed to a computer program. Thus, arguments analogous to those presented above for claims 9 and 3 are applicable to claim 11.
Claim 12 is rejected the same as claim 4, except that claim 12 is directed to a computer program. Thus, arguments analogous to those presented above for claims 9 and 4 are applicable to claim 12.
Claim 13 is rejected the same as claim 5, except that claim 13 is directed to a computer program. Thus, arguments analogous to those presented above for claims 9 and 5 are applicable to claim 13.
As to claim 17, YADAV teaches a system comprising: a memory component; and a processing device coupled to the memory component, the processing device to perform operations comprising (Adobe Photoshop 2020 is a professional software program used for image editing, graphic design, and digital art. The program runs in a computer system, as shown in Figs. 1-2 for example. The program is stored in a computer memory, and when the program is executed by a computer processor, it causes the processor to carry out various types of image editing);
receiving a request to replace a sky region of a first image with a sky image of a second image (open the image in Adobe Photoshop 2021, go to Edit > Sky Replacement, and click on the dropdown under the Sky option. In Adobe Photoshop, there are multiple Sky options available by default. You can add your own Sky images as well for Sky Replacement); obtaining a plurality of masks based on the first image; generating a plurality of sky replacement layers, including a defringing layer, an edge lighting layer, and a sky region layer, including a foreground color adjustment layer (pages 9-10, Fig. 5, YADAV teaches a shift edge filter and a fade edge filter, where Fade Edge softens the transition between the sharpened area and the rest of the image. High Fade softens the sharp brightness spikes at the edge, making the "lighting" effect look more natural and gradual; this can be done by moving the slider bar to the right. Low Fade keeps the edge lighting very abrupt and high-contrast, which can make fine details look much harsher; this can be done by moving the slider bar to the left. Further, Defringing Layer Edges and grayscale conversion via Image > Mode > Grayscale are integral parts of Photoshop 2021 and older versions); and
compositing the plurality of sky replacement layers and the first image in an order of the first image, followed by the foreground color adjustment layer, the defringing layer, the edge lighting layer, and the sky region layer to generate a composite image (pages 12-14, Figs. 10-11, which illustrate a final image with a cloudy sky, replaced using the Sky Replacement features discussed above (see Fig. 11). The final image is generated by blending the original image with the cloudy sky, wherein the blending process replaces the original sky (Fig. 10) with the cloudy sky image (Fig. 11)).
However, it is noted that YADAV does not specifically teach "generation network, wherein the mask generation network includes one or more segmentation networks trained to generate one or more masks for input images;"
On the other hand, Niu teaches a generation network, wherein the mask generation network includes one or more segmentation networks trained to generate one or more masks for input images (Fig. 1, Abstract: a semantic segmentation method for remote sensing images based on convolutional neural network and mask generation is proposed. In this method, the boundary box is used as the initial foreground segmentation profile, and the edge information of the foreground object is obtained by using the multilayer features of the convolutional neural network. In order to obtain the rough object segmentation mask, the general shape and position of the foreground object are estimated by using the high-level features in the process of layer-by-layer iteration).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the semantic segmentation method for remote sensing images based on convolutional neural network and mask generation taught by Niu into YADAV.
The suggestion/motivation for doing so would have been to allow the user of YADAV to adjust edges in sky images by enabling precise pixel-level isolation of the sky region from objects like buildings, trees, or clouds, which facilitates more accurate and targeted processing.
As to claim 18, YADAV teaches that the operations further comprise: determining a blending mode of the edge lighting layer independently of other layers from the plurality of layers (Figs. 4-5, pages 8-10, the Sky Edge Adjustments include Shift Edge, which helps define the boundary between the sky and the foreground. The Edge slider lightens the edge when the slider is dragged toward the right).
Allowable Subject Matter
7. Claims 6-8, 14-16 and 19-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
8. Regarding dependent claims 6, 14 and 19, no prior art has been found that anticipates or renders obvious the following limitation:
“wherein determining a blending mode of the edge lighting layer independently of other layers from the plurality of layers further comprises: identifying a plurality of edge pixels and a plurality of sky pixels in the input image; determining an average level of the plurality of edge pixels; determining an average level of the plurality of sky pixels; calculating a difference between the average level of the plurality of edge pixels and the average level of the plurality of sky pixels; comparing the difference to a threshold value; and determining the blending mode based on the comparison.”
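For illustration only, the sequence of steps recited in the allowable limitation above can be sketched as follows; the threshold value, the decision rule, and the blending mode names are hypothetical and are not drawn from the claims or the specification, only the ordering of steps mirrors the claim language:

```python
import numpy as np

def determine_blending_mode(image, edge_mask, sky_mask, threshold=0.1):
    """Follow the recited steps: identify edge and sky pixels, compute the
    average level of each set, take their difference, compare it to a
    threshold, and determine the blending mode based on the comparison."""
    gray = image.mean(axis=-1)            # per-pixel level (simple luminance proxy)
    edge_avg = gray[edge_mask].mean()     # average level of the edge pixels
    sky_avg = gray[sky_mask].mean()       # average level of the sky pixels
    difference = edge_avg - sky_avg       # signed difference of the two averages
    # Hypothetical decision rule keyed to the threshold comparison:
    if difference > threshold:
        return "screen"                   # edges brighter than the sky
    elif difference < -threshold:
        return "multiply"                 # edges darker than the sky
    return "normal"

# Tiny 1x2 example: a bright edge pixel against a dark sky pixel.
img = np.array([[[0.9, 0.9, 0.9], [0.1, 0.1, 0.1]]])
edge = np.array([[True, False]])
sky = np.array([[False, True]])
mode = determine_blending_mode(img, edge, sky)
```

In the example, the edge average (0.9) exceeds the sky average (0.1) by more than the threshold, so the hypothetical rule selects the "screen" mode; the point is only that the mode follows from the threshold comparison, per the claim.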
Claims 7-8, 15-16 and 20 are objected to as being dependent upon objected claims 6, 14 and 19, respectively.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Mekonen Bekele, whose telephone number is (469) 295-9077. The examiner can normally be reached Monday through Friday from 9:00 AM to 6:50 PM Eastern Time.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, George Eng, can be reached at (571) 272-7495. The fax phone number for the organization where the application or proceeding is assigned is 571-237-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR.
Status information for unpublished applications is available through Private PAIR only.
For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/MEKONEN T BEKELE/Primary Examiner, Art Unit 2699